A packaged semiconductor device (200) with a particle roughened surface (202) on a portion of the lead frame (203) that improves adhesion between the molding compound (112) and the lead frame (203). A packaged semiconductor device (200) with a particle roughened surface (202) on a portion of the lead frame (203) that improves adhesion between the molding compound (112) and the lead frame (203) and with a reflow wall (210) that surrounds a portion of the solder joint (208) that couples the semiconductor device (110) to the lead frame (203). A packaged semiconductor device (200) with a reflow wall (210) that surrounds a portion of a solder joint (208) that couples a semiconductor device (110) to a lead frame (203).
CLAIMS

What is claimed is:

1. A packaged semiconductor device comprising:
a lead frame;
a semiconductor device;
a solder joint coupled between the lead frame and a terminal on the semiconductor device;
a particle roughened surface on a portion of a surface of the lead frame and comprised of a first polymer containing first particles; and
molding compound covering portions of the semiconductor device, the lead frame, and the particle roughened surface.

2. The packaged semiconductor device of claim 1, in which the first polymer is selected from the group consisting of polyimide, epoxy, and polyester polymers.

3. The packaged semiconductor device of claim 1, in which the first particles are nonmetallic particles.

4. The packaged semiconductor device of claim 1, in which the particle roughened surface covers portions of the surface of the lead frame adjacent to the solder joint.

5. The packaged semiconductor device of claim 1, comprising:
a reflow wall that surrounds the solder joint, in which the reflow wall is a second polymer selected from the group consisting of polyimide, epoxy, and polyester.

6. The packaged semiconductor device of claim 5, in which the reflow wall surrounds the solder joint on at least two opposing sides of the solder joint.

7. The packaged semiconductor device of claim 5, in which the reflow wall completely surrounds the solder joint.

8. The packaged semiconductor device of claim 5, in which the reflow wall is comprised of a second polymer containing second particles.

9. The packaged semiconductor device of claim 5, in which a height of the solder joint is at least as high as a height of the reflow wall.

10. The packaged semiconductor device of claim 1, comprising a printed circuit board solder pad on a bottom side of the packaged semiconductor device.

11. The packaged semiconductor device of claim 10, in which the printed circuit board solder pad is solder paste.

12. The packaged semiconductor device of claim 10, in which the printed circuit board solder pad is a solder paste containing solderable particles formed of a metal selected from the group consisting of silver, copper, nickel, palladium, platinum, tin, gold, and alloys thereof.

13. The packaged semiconductor device of claim 1, further comprising a metallic post coupled between the solder joint and the terminal.

14. A packaged semiconductor device comprising:
a lead frame;
a semiconductor device;
a solder joint coupled between the lead frame and a terminal on the semiconductor device;
a particle roughened surface, on a portion of a surface of the lead frame, comprised of a first polymer containing first particles;
a reflow wall surrounding the solder joint and comprised of a second polymer containing second particles; and
molding compound covering portions of the semiconductor device, the lead frame, the solder joint, the reflow wall, and the particle roughened surface.

15. The packaged semiconductor device of claim 14, in which the first polymer and the second polymer are selected from the group consisting of polyimide, epoxy, and polyester polymers.

16. The packaged semiconductor device of claim 14, in which the first polymer and the second polymer are selected from the group consisting of polyimide, epoxy, and polyester polymers and in which the first and second particles are nonmetallic particles.

17. The packaged semiconductor device of claim 14, in which the first polymer and the second polymer are the same polymer and in which the first particles and the second particles are the same nonmetallic particles.
18. The packaged semiconductor device of claim 14, in which the first polymer and the second polymer are selected from the group consisting of polyimide, epoxy, and polyester polymers, in which the first particles are nonmetallic particles, and in which the second particles are metal particles composed of a solderable metal selected from the group consisting of silver, copper, nickel, palladium, platinum, tin, gold, and alloys thereof.

19. The packaged semiconductor device of claim 14, in which the reflow wall surrounds at least two opposing sides of the solder joint.

20. The packaged semiconductor device of claim 14, comprising a printed circuit board solder pad comprised of solder paste on a bottom side of the packaged semiconductor device.
PACKAGED SEMICONDUCTOR DEVICE WITH A PARTICLE ROUGHENED SURFACE

[0001] This disclosure relates to the field of packaged semiconductor devices. More particularly, this disclosure relates to packaged semiconductor devices with improved adhesion between the molding compound and the lead frame.

SUMMARY

[0002] The following presents a simplified summary in order to provide a basic understanding of one or more aspects of the disclosure. This summary is not an extensive overview of the disclosure, and is neither intended to identify key or critical elements of the disclosure, nor to delineate the scope thereof. Rather, the primary purpose of the summary is to present some concepts of the disclosure in a simplified form as a prelude to a more detailed description that is presented later.

[0003] A packaged semiconductor device has a particle roughened surface on a portion of the lead frame that is covered with molding compound. Another packaged semiconductor device has a particle roughened surface on a portion of the lead frame that is covered with molding compound, and a reflow wall that surrounds a portion of a solder joint that couples the semiconductor device to the lead frame. The particle roughened surface may aid in adhesion between the molding compound and the lead frame.

[0004] A packaged semiconductor device has a lead frame and a semiconductor device. A solder joint is coupled between the lead frame and a terminal on the semiconductor device. A reflow wall is on a portion of the lead frame and in contact with the solder joint. Molding compound covers portions of the semiconductor device, the lead frame, the solder joint, and the reflow wall.

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] FIG. 1A is a cross-section of a packaged semiconductor device.

[0006] FIG. 1B is a top view of the lead frame in the packaged semiconductor device in FIG. 1A.

[0007] FIGS. 2A and 2B are cross-sections of packaged semiconductor devices with a lead frame having a reflow wall and a particle roughened surface area.

[0008] FIGS. 3A and 3B are views of reflow walls.

[0009] FIG. 4 is a top view of a lead frame having a reflow wall and a particle roughened surface.

[0010] FIGS. 5A and 5B are cross section views of a lead frame and a semiconductor device illustrating the attachment of the semiconductor device to the lead frame.

[0011] FIG. 6 is a cross section view of a lead frame with a particle roughened surface.

[0012] FIGS. 7A and 7B are cross section views of lead frames with printed circuit board solder pads on the bottom side of the lead frame.

[0013] FIGS. 8A, 8B, and 8C are cross section views of a lead frame with reflow walls formed in accordance with embodiments.

[0014] FIGS. 9A, 9B, and 9C are cross section views of packaged semiconductor devices formed with particle roughened surfaces, reflow walls, and printed circuit board solder pads on the bottom side.

[0015] FIG. 10 is a cross section view illustrating a packaged semiconductor device with particle roughened surfaces and a solder pad on the top side of the lead frame and with a printed circuit board solder pad on the bottom side of the packaged semiconductor device.

[0016] FIGS. 11A through 11C are cross section views illustrating the major manufacturing steps in forming particle roughened surfaces and reflow walls using ink jet printing.

[0017] FIGS. 12A and 12B are cross section views illustrating the major manufacturing steps in forming printed circuit board solder pads on the bottom side of the lead frame using ink jet printing.
[0018] FIGS. 13A through 13C are cross section views illustrating the major manufacturing steps in forming particle roughened surfaces and reflow walls using screen printing.

[0019] FIGS. 14A and 14B are cross sections illustrating the major manufacturing steps in forming printed circuit board solder pads on the bottom side of a lead frame using screen printing.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

[0020] Embodiments of the disclosure are described with reference to the attached figures. The figures are not drawn to scale and they are provided merely to illustrate the disclosure. Several aspects of the embodiments are described below with reference to example applications for illustration. It should be understood that numerous specific details, relationships, and methods are set forth to provide an understanding of the disclosure. One skilled in the relevant art, however, will readily recognize that the disclosure can be practiced without one or more of the specific details or with other methods. In other instances, well-known structures or operations are not shown in detail to avoid obscuring the disclosure. The embodiments are not limited by the illustrated ordering of acts or events, as some acts may occur in different orders and/or concurrently with other acts or events. Furthermore, not all illustrated acts or events are required to implement a method.

[0021] A packaged semiconductor device 100 is illustrated in the cross section in FIG. 1A. A semiconductor device 110 may be any semiconductor device, for example an integrated circuit, transistor, or diode. The semiconductor device 110 is attached to a lead frame 105 and covered with molding compound 112 to form the packaged semiconductor device 100. The semiconductor device package may be any package form, for example a dual in line package (DIP), a quad flat no lead (QFN) package, a flip chip small outline transistor (FCSOT) package, or a radial package. A top view of the lead frame 105 in the packaged semiconductor device 100 is shown in FIG. 1B. The lead frame 105 is comprised of a number of leads 102 on which solder pads 104 are formed. The semiconductor device 110 is bonded to the lead frame 105 with solder joints 107 formed between metal posts 108, connected to input/output pads on the semiconductor device 110, and the solder pads 104. The number of leads 102 with solder pads 104 in a lead frame 105 may vary depending upon the number of solder joints 107 needed to mount the semiconductor device 110.

[0022] Another embodiment directly connects the solder pads 104 of the lead frame to input/output pads on the semiconductor device.

[0023] The cross section of the lead frame 105 in FIG. 1A is taken along the dashed line in the top view in FIG. 1B.

[0024] In the cross section of the packaged semiconductor device 100 in FIG. 1A, solder pads 104 are made of a material that solder easily wets. Solder joints 107 are formed between the solder pads 104 and metallic posts 108 that are connected to input/output (I/O) terminals of the semiconductor device 110. The metallic posts 108 typically are made of a conductive material such as copper, gold, or solder. Portions of this assembly are covered with molding compound 112 to form the packaged semiconductor device 100. Packaged semiconductor device 100 reliability failures may occur when the molding compound 112 delaminates from the lead frame 105 due to poor adhesion. To improve adhesion, portions of the surface of the lead frame 105 may be roughened using wet chemical etching, for example.
[0025] The semiconductor device 110 is mounted on the first side (top side) of the lead frame 105 as described above. Printed circuit board (PCB) solder pads 106 may be formed on the second side (bottom side) of the lead frame 105 to facilitate soldering the packaged semiconductor device 100 to leads on an underlying PCB.

[0026] The solder pads 104 on the top side of the lead frame 105 and the PCB solder pads 106 on the bottom side of the lead frame 105 are typically formed, at additional cost, by electroplating solderable metals such as palladium coated nickel using a masking process during the manufacture of the lead frame 105.

[0027] Cross sections of a packaged semiconductor device 200 with a semiconductor device 110 attached to a lead frame 203 with solder joints 208 are illustrated in FIGS. 2A and 2B. A reflow wall 210, which either partially or completely surrounds the solder joint 208, restricts the lateral reflow of the solder during the formation of the solder joint 208 and consequently forms a taller solder joint 208. The solder joint 208 forms an electrical connection between the lead frame 203 and the copper post 108 connected to an input/output (I/O) terminal on the overlying semiconductor device 110. The particle roughened surface 202 formed adjacent to the reflow wall 210 on the surface of the lead frame 203 is formed by bonding a particle containing polymeric material to the surface of the lead frame 203. The lead frame 203, reflow wall 210, solder joint 208, copper post 108, particle roughened surface 202, and semiconductor device 110 assembly are covered with molding compound 112 to form the packaged semiconductor device 200.

[0028] A top view of the lead frame 203 is shown in FIG. 4. The lead frame 203 is comprised of a number of leads 205 with particle roughened surfaces 202. Reflow walls 210 may also be formed on the leads 205 adjacent to the particle roughened surfaces 202.

[0029] The particle roughened surfaces 202 may be a particle containing polymeric material that is ink jet printed or screen printed onto a portion of the surface of the lead frame 203 that is adjacent to the solder joints 208. The polymeric material may be a polyimide or epoxy resin. In one example, the particle roughened surface 202 includes an ink residue having polymeric material. The ink residue is formed in response to printing ink having polymeric material from an ink jet printer, which is subsequently cured to form the ink residue having polymeric material. The particles that form the particle roughened surface 202 are typically nonmetallic to avoid forming shorts. Particle sizes may range from nanometers to microns. Screen printing pastes may use larger particles than ink jet printable inks. The particles may be regularly shaped, such as spheres or ovals, or may have irregular shapes.

[0030] The particle roughened surface 202 provides for improved adhesion between the molding compound 112 and the lead frame 203. The improved adhesion significantly reduces or eliminates packaged semiconductor device 200 failures due to delamination of the molding compound 112 from the lead frame 203.

[0031] As is illustrated in FIGS. 3A and 3B, reflow walls 210 may completely surround the solder joint 208 or may confine the solder reflow on at least two sides. The inside surface 214 of the reflow wall 210 restricts the lateral flow of solder when the solder joint 208 is formed, resulting in a taller solder joint 208. The taller solder joint 208 increases the distance 215 between the semiconductor device 110 and the lead frame 203. The increased distance reduces stress on the solder joint 208 resulting from the mismatch in thermal expansion (coefficient of thermal expansion (CTE) mismatch) between the semiconductor device 110 and the underlying lead frame 203 to which it is attached. Under some circumstances, particularly temperature extremes, mismatches in thermal expansion can lead to solder joint failure.
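The benefit of the taller joint can be seen with a first-order estimate. The sketch below uses the standard shear-strain approximation gamma = (alpha_frame - alpha_die) * delta_T * DNP / h for a joint at distance DNP from the die's neutral point; the formula is a textbook approximation, and the CTE values, temperature swing, and dimensions are illustrative assumptions, not figures from this disclosure.

```python
# First-order estimate of the shear strain a solder joint sees from the
# CTE mismatch between a silicon die and a copper lead frame. Values are
# illustrative; the point is only the 1/h dependence on joint height.

ALPHA_SI = 2.6e-6   # CTE of silicon, 1/K (typical handbook value)
ALPHA_CU = 17e-6    # CTE of a copper lead frame, 1/K (typical handbook value)

def joint_shear_strain(dnp_m: float, joint_height_m: float, delta_t_k: float) -> float:
    """Approximate shear strain on a joint at distance dnp_m from the
    die's neutral point, for a temperature swing of delta_t_k."""
    return (ALPHA_CU - ALPHA_SI) * delta_t_k * dnp_m / joint_height_m

# A taller joint (enabled by a taller reflow wall) reduces the strain:
for h_um in (30, 60, 90):
    gamma = joint_shear_strain(dnp_m=2e-3, joint_height_m=h_um * 1e-6, delta_t_k=100)
    print(f"joint height {h_um} um -> shear strain {gamma:.2%}")
```

Doubling or tripling the joint height cuts the estimated strain proportionally, which is the mechanism the reflow wall exploits.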
[0032] The reflow wall 210 may be formed of a polymeric material such as a polyimide, polyester, or epoxy, or may be formed of a polymeric material containing nonmetallic or metallic particles. Particles embedded in the polymer reinforce the reflow wall 210. When the particles are formed of a solderable metal, the solder in the solder joint 208 may bond to the particles and increase the strength of the solder joint 208. The stronger solder joint 208 may reduce the failure rate of the solder joints 208 due to mechanical or thermal stress. A solderable metal may be a metal such as copper, silver, gold, platinum, nickel, palladium, brass, or alloys thereof that is easily wetted by molten solder during reflow.

[0033] Perspective views of example reflow walls 210 are shown in FIGS. 3A and 3B. Reflow walls have a thickness 219. Although circular and rectangular reflow walls 210 are depicted in FIGS. 3A and 3B, other shapes such as ovals, octagons, and squares may be used. The reflow walls 210 may completely surround the solder joint as shown in FIG. 3A in a circular shape or may confine the solder joint 208 on four sides of a rectangle as shown in FIG. 3B. The cross sections of the reflow wall 210 in FIGS. 2A and 2B are taken along the dashed lines 2A and 2B in FIGS. 3A and 3B.

[0034] FIGS. 5A and 5B are cross sections illustrating the formation of solder joints between a semiconductor device 110 and a lead frame 203. As is illustrated in FIG. 5A, solder caps 111 on top of copper posts 108 that project downward from I/O terminals on the semiconductor device 110 may be positioned inside the reflow wall 210 prior to reflowing the solder and forming the solder joints 208 between the copper posts 108 and the surface of the lead frame 203. In a first alternative process, as shown in FIG. 5B, the cavity between the reflow walls 210 may first be filled with a solder paste 113 and the top of the copper post 108 brought into contact with the solder paste 113 prior to reflowing the solder paste 113 and forming the solder joint 208. In a second alternative process, the cavity between the reflow walls 210 may be filled with a solder paste 113 and a copper post 108 with a solder cap 111 may be brought into contact with the solder paste 113 prior to reflowing the solder paste 113 and forming the solder joint 208.

[0035] The volume of solder in the solder cap 111 or the volume of the solder paste 113 inside the cavity between the reflow walls 210 is chosen so the solder joint 208 is at least as tall as the reflow wall 210. The volume of the solder 111, 113 is preferably chosen so that the height of the solder joint is greater than the height of the reflow wall. The height of the solder joint 208 may be increased by increasing the height of the reflow wall 210. Increased height of the solder joint 208 may improve solder joint reliability.
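As a rough aid to the volume-selection rule in the preceding paragraph, the sketch below sizes the solder volume for a cylindrical joint inside a circular reflow wall. The cylinder idealization, margin factor, and dimensions are illustrative assumptions only; real joints bulge during reflow, so this is a first-pass estimate, not a process specification from this disclosure.

```python
# Minimal sketch of the sizing rule: pick the solder cap / paste volume so
# the reflowed joint stands at least as tall as the reflow wall, ideally
# taller. Treats the joint as a cylinder filling a circular reflow wall.
import math

def solder_volume_for_height(inner_radius_m: float, target_height_m: float) -> float:
    """Volume of solder needed for a cylindrical joint of the target height."""
    return math.pi * inner_radius_m**2 * target_height_m

wall_height = 40e-6    # reflow wall height, m (illustrative)
inner_radius = 25e-6   # inner radius of a circular reflow wall, m (illustrative)
margin = 1.2           # make the joint taller than the wall, per the text

volume = solder_volume_for_height(inner_radius, margin * wall_height)
print(f"required solder volume ~ {volume * 1e18:.0f} um^3")
```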
[0036] The particle roughened surface 202 in FIG. 6 may be formed using ink jet printing to dispense a particle containing ink onto the surface of the lead frame 603. The ink may be comprised of particles 605 dispersed in a resin 604 such as a polyimide or epoxy resin. After the ink is dispensed onto the surface, the ink may be thermally cured at a temperature in the range of about 80 °C to 300 °C to drive off solvent, forming the particle roughened surface 202.

[0037] Alternatively, screen printing may be used to apply a screen print paste to the surface of the lead frame 603. The screen print paste may be formed of particles 605 dispersed in a resin 604 such as a polyimide or epoxy resin. After the screen print paste is dispensed onto the surface, it may be thermally cured at a temperature in the range of about 180 °C to 300 °C to drive off solvent, forming the particle roughened surface 202.

[0038] As is illustrated in FIGS. 7A and 7B, PCB solder pads 702 and 705 may be formed on the bottom side of the lead frame 703. FIG. 7A shows a PCB solder pad 702 formed using solder paste 704. Shown in FIG. 9A is a packaged semiconductor device 900 with a PCB solder pad 702 formed on the bottom side of the packaged semiconductor device 900 using solder paste 704. FIG. 7B shows a PCB solder pad 705 on the bottom side of the lead frame 703 formed using a solder paste 704 in which solderable particles 708 are dispersed. Shown in FIG. 9B is a packaged semiconductor device 901 with a PCB solder pad 705 formed on the bottom side using solder paste 704 in which solderable particles 708 are dispersed. The solderable particles 708 may be formed of metals such as silver, gold, platinum, nickel, palladium, brass, or alloys thereof. The solderable particles 708 add reinforcement to solder joints formed between the PCB solder pad 705 on the bottom side of the lead frame 703 and an electrical lead on a printed circuit board. Forming PCB solder pads 702 and 705 on the bottom side of the lead frame 703 using ink jet printing or screen printing eliminates the expensive step of electroplating these pads during lead frame 703 manufacture.

[0039] FIGS. 8A, 8B, and 8C illustrate a few reflow sidewall 810 options. FIG. 8A shows a reflow sidewall 810 that is composed of a polymeric material 802 such as polyimide, epoxy, or polyester. Shown in FIG. 9A is a packaged semiconductor device 900 with a reflow sidewall 810 composed of a polymeric material 802. FIG. 8B shows a reflow sidewall 810 that is composed of solderable particles 804 dispersed in a polymeric material 802. FIG. 9B shows a packaged semiconductor device 901 with a reflow sidewall 810 composed of solderable particles 804 dispersed in a polymeric material 802. FIG. 8C shows a reflow sidewall 810 that is composed of solderable particles 804 dispersed in a polymeric material 802. A solder pad 806 that is composed of solderable particles 809 dispersed in solder flux 811 is formed on the surface of the lead frame 903 between the reflow walls 810. FIG. 9C shows a packaged semiconductor device 905 with a reflow sidewall 810 composed of solderable particles 804 dispersed in a polymeric material 802 and with a solder pad 806 that is composed of solderable particles 809 dispersed in solder flux 811 on the surface of the lead frame 903 between the reflow walls 810. In one example, the solder pad 806 of solderable particles 809 dispersed in solder flux 811 is deposited on the surface prior to forming the solder joint 208.
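The two deposition routes described above differ mainly in cure range and usable particle size. The sketch below records those differences in a small lookup structure; the cure ranges and particle notes restate the text, while the dictionary layout, field names, and helper function are illustrative only.

```python
# Summary sketch of the two deposition routes for the particle containing
# polymer. Cure ranges and particle notes come from the description above;
# the data structure itself is not part of the patent.
DEPOSITION_OPTIONS = {
    "ink_jet": {
        "material": "particles dispersed in polyimide or epoxy resin ink",
        "cure_range_c": (80, 300),
        "particle_note": "smaller particles (down to nanometers) required",
    },
    "screen_print": {
        "material": "particles dispersed in polyimide or epoxy resin paste",
        "cure_range_c": (180, 300),
        "particle_note": "larger particles (up to microns) tolerated",
    },
}

def cure_ok(method: str, temperature_c: float) -> bool:
    """Check a proposed cure temperature against the quoted range."""
    lo, hi = DEPOSITION_OPTIONS[method]["cure_range_c"]
    return lo <= temperature_c <= hi

print(cure_ok("screen_print", 150))  # False: below the ~180 °C screen-print range
```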
[0040] As is illustrated in FIG. 10, solder pads 910 may be formed on the top side of the lead frame 903 where the semiconductor device 110 is mounted to form packaged semiconductor device 907. These top side solder pads 910 may be formed using the same material and process as is used to form the PCB solder pads 702 and 705 on the bottom side of the lead frame 903.

[0041] The major steps for forming particle roughened surface areas 202 and reflow walls 810 are described in cross sections in FIGS. 11A through 11C and FIGS. 13A through 13C.

[0042] A first method for forming reflow walls 810 and a particle roughened surface 202 on a lead frame 903 using ink jet printing is illustrated in FIGS. 11A through 11C.

[0043] In FIG. 11A the reflow wall 810 is printed onto the surface of the lead frame 903 using an ink jet printer 174.

[0044] FIG. 11B illustrates the deposition of a particle roughened surface 162 on the lead frame 903 using an ink jet printer 176. The ink may be the same ink used to print the reflow wall 810 or it may be a different ink.

[0045] FIG. 11C shows the structure after sintering at a temperature in the range of about 80 °C to about 300 °C to form the particle roughened surface 202 and the reflow wall 810.

[0046] A second method for forming a particle roughened surface 202 and reflow walls 810 on a lead frame 903 using screen printing is illustrated in FIGS. 13A through 13C.

[0047] In FIG. 13A a first stencil 180 is positioned on the surface of the lead frame 903 and a first paste 182 is applied to areas where a particle roughened surface 202 is to be formed. The first stencil 180 is removed after the first paste 182 is applied.

[0048] In FIG. 13B a second stencil 184 is positioned on the surface of the lead frame 903 and a second paste 186 is applied to areas where the reflow walls 810 are being formed. The second stencil 184 is removed after the second paste 186 is applied.

[0049] FIG. 13C shows the lead frame 903 after the pastes are sintered at a temperature in the range of about 80 °C to about 300 °C to form the reflow wall 810 and the particle roughened surface 202.

[0050] The method illustrated in FIGS. 13A through 13C enables the reflow wall 810 and particle roughened surface 202 to be formed with different thicknesses and using different pastes. Alternatively, one stencil with openings for both the reflow walls 810 and the particle roughened surface 202 may be utilized. In this case the same paste may be used to form both the reflow walls 810 and the particle roughened surface 202. This method may be used to reduce manufacturing cost.

[0051] The major steps for forming PCB solder pads 705 on the backside of the lead frame 903 are described in cross sections in FIGS. 12A and 12B, and in FIGS. 14A and 14B.

[0052] FIGS. 12A and 12B illustrate steps in the formation of the PCB solder pads 705 on the bottom side of the lead frame 903 using ink jet printing. As is illustrated in FIG. 12A, the PCB solder pads 701 are printed using an ink jet printer 178. FIG. 12B shows PCB solder pads 705 after the ink is sintered at a temperature in the range of about 80 °C to about 300 °C to drive off solvent and to cure the ink resin.

[0053] FIGS. 14A and 14B illustrate the formation of PCB solder pads 705 on the backside of the lead frame 903 using screen printing.
[0054] In FIG. 14A a stencil 190 is applied to the bottom side of the lead frame 903 with openings where the PCB solder pads 705 are to be formed. The stencil 190 is removed after the paste 704 is applied.

[0055] FIG. 14B shows the lead frame 903 after the paste 704 is sintered at a temperature in the range of about 80 °C to about 300 °C to form the PCB solder pads 705 on the bottom side of the lead frame 903.

[0056] In various example embodiments, terms such as top, bottom, and the like are used in a relative sense to describe a positional relationship of various components. These terms are used with reference to the position of components shown in the drawings, and not in an absolute sense with reference to a field of gravity. For example, the top side of the lead frame 105 would still be properly referred to as the top side of the lead frame, even if the packaged semiconductor devices are placed in an inverted position with respect to the position shown in the drawings.

[0057] While various embodiments of the present disclosure have been described above, it should be understood that they have been presented by way of example only and not limitation. Numerous changes to the disclosed embodiments can be made in accordance with the disclosure herein without departing from the spirit or scope of the disclosure. Thus, the breadth and scope of the present disclosure should not be limited by any of the above described embodiments. Rather, the scope of the disclosure should be defined in accordance with the following claims and their equivalents.
Aspects disclosed include reducing or avoiding metal deposition from etching magnetic tunnel junction (MTJ) devices. In one example, a width of a bottom electrode of an MTJ device is provided to be less than a width of the MTJ stack of the MTJ device. In this manner, etching of the bottom electrode may be reduced or avoided, to reduce or avoid metal redeposition as a result of over-etching the MTJ device to avoid horizontal shorts between adjacent devices. In another example, a seed layer is embedded in a bottom electrode of the MTJ device. In this manner, the MTJ stack is reduced in height to reduce or avoid metal redeposition as a result of over-etching the MTJ device. In another example, an MTJ device includes an embedded seed layer in a bottom electrode which also has a width less than a width of the MTJ stack.
What is claimed is:

1. A magnetic tunnel junction (MTJ) device, comprising:
a bottom electrode having a width;
a seed layer; and
an MTJ stack pillar having a width larger than the width of the bottom electrode and disposed above and in electrical contact with the bottom electrode, the MTJ stack pillar comprising:
a pinned layer disposed above the seed layer;
a free layer disposed above the seed layer; and
a tunnel barrier disposed between the pinned layer and the free layer, the tunnel barrier configured to provide a tunnel magnetoresistance between the pinned layer and the free layer.

2. The MTJ device of claim 1, further comprising a dielectric material layer disposed adjacent to the bottom electrode.

3. The MTJ device of claim 2, further comprising an over-etch trench disposed in the dielectric material layer adjacent to the MTJ stack pillar.

4. The MTJ device of claim 3, wherein the over-etch trench does not extend into the bottom electrode.

5. The MTJ device of claim 2, further comprising a redeposited dielectric material from the dielectric material layer adjacent to an outer surface of the MTJ stack pillar.

6. The MTJ device of claim 1, wherein the width of the bottom electrode comprises a largest cross-section width of the bottom electrode.

7. The MTJ device of claim 1, wherein the width of the MTJ stack pillar comprises a largest cross-section width of the MTJ stack pillar.

8. The MTJ device of claim 2, further comprising:
a lower metal layer, the bottom electrode disposed above the lower metal layer; and
an inter-metal block layer disposed between the dielectric material layer and the lower metal layer, the bottom electrode further disposed adjacent to the inter-metal block layer.

9. The MTJ device of claim 1, wherein the MTJ stack pillar comprises the seed layer.

10. The MTJ device of claim 1, further comprising a dielectric material layer comprising a top surface, and wherein:
the bottom electrode is disposed in an opening in the dielectric material layer, the bottom electrode comprising a top surface disposed below the top surface of the dielectric material layer;
the seed layer comprises an embedded seed layer disposed in the opening in the dielectric material layer in contact with the top surface of the bottom electrode; and
the MTJ stack pillar is disposed above the dielectric material layer and in electrical contact with the embedded seed layer.

11. The MTJ device of claim 10, wherein the MTJ stack pillar does not include the seed layer.

12. The MTJ device of claim 10, further comprising an over-etch trench disposed in the dielectric material layer adjacent to the MTJ stack pillar.

13. The MTJ device of claim 12, wherein the over-etch trench does not extend into the bottom electrode.

14. The MTJ device of claim 10, further comprising a redeposited dielectric material from the dielectric material layer adjacent to an outer surface of the MTJ stack pillar.

15. The MTJ device of claim 10, further comprising a second seed layer disposed above the dielectric material layer, wherein the MTJ stack pillar is disposed above and in further electrical contact with the second seed layer.

16. The MTJ device of claim 10, further comprising a lower metal layer, the bottom electrode disposed above the lower metal layer;
wherein the dielectric material layer comprises a capping material layer disposed above the lower metal layer, and a buffer material layer disposed above the capping material layer.
17. The MTJ device of claim 1 incorporated into an MTJ bit cell, the MTJ bit cell comprising:
an access transistor comprising a gate, a first electrode, and a second electrode, wherein:
the gate of the access transistor is coupled to a word line;
the bottom electrode of the MTJ device is coupled to the first electrode of the access transistor; and
a top electrode of the MTJ device is coupled to a bit line;
the MTJ device configured to receive a current between the first and second electrodes in response to a signal on the word line activating the access transistor and a voltage applied to the bit line.

18. The MTJ device of claim 1 integrated into an integrated circuit (IC).

19. The MTJ device of claim 1 integrated into a device selected from the group consisting of: a set top box; an entertainment unit; a navigation device; a communications device; a fixed location data unit; a mobile location data unit; a mobile phone; a cellular phone; a smart phone; a tablet; a phablet; a computer; a portable computer; a desktop computer; a personal digital assistant (PDA); a monitor; a computer monitor; a television; a tuner; a radio; a satellite radio; a music player; a digital music player; a portable music player; a digital video player; a video player; a digital video disc (DVD) player; a portable digital video player; and an automobile.

20. A method of fabricating a magnetic tunnel junction (MTJ) device, comprising:
disposing a dielectric material layer above a lower metal layer in a semiconductor wafer, the dielectric material layer comprising a top surface;
removing a portion of dielectric material of the dielectric material layer to form an opening having an opening width;
disposing one or more metal materials in the opening to form a bottom electrode having a width of the opening width;
disposing an MTJ stack having a width larger than the opening width above and in electrical contact with the bottom electrode, the MTJ stack comprising:
a pinned layer disposed above a seed layer;
a free layer disposed above the seed layer; and
a tunnel barrier disposed between the pinned layer and the free layer, the tunnel barrier configured to provide a tunnel magnetoresistance between the pinned layer and the free layer; and
removing material from the MTJ stack to form an MTJ stack pillar having a width larger than the width of the bottom electrode.

21. The method of claim 20, further comprising polishing a top surface of the one or more metal materials disposed in the opening to be substantially planar to the top surface of the dielectric material layer.

22. The method of claim 20, further comprising:
recessing the bottom electrode disposed in the opening below the top surface of the dielectric material layer; and
disposing a second metal material of the one or more metal materials to form the bottom electrode.

23. The method of claim 20, wherein removing the material from the MTJ stack comprises etching the MTJ stack to form the MTJ stack pillar having the width larger than the width of the bottom electrode.

24. The method of claim 20, further comprising over-etching an over-etch trench in the dielectric material layer adjacent to the MTJ stack pillar.

25. The method of claim 24, further comprising redepositing the dielectric material from the dielectric material layer adjacent to an outer surface of the MTJ stack pillar during the over-etching of the over-etch trench in the dielectric material layer adjacent to the MTJ stack pillar.
26. The method of claim 20, wherein removing the portion of the dielectric material of the dielectric material layer comprises etching the portion of the dielectric material of the dielectric material layer to form the opening having the opening width.

27. The method of claim 20, further comprising:
embedding a seed layer material in the opening in the dielectric material layer in contact with a top surface of the bottom electrode;
removing a portion of the seed layer material to be substantially planar with the top surface of the dielectric material layer; and
disposing the MTJ stack having the width larger than the opening width above the dielectric material layer and in electrical contact with a remaining portion of the seed layer material to be in electrical contact with the bottom electrode, the MTJ stack comprising:
a pinned layer;
a free layer; and
a tunnel barrier disposed between the pinned layer and the free layer, the tunnel barrier configured to provide a tunnel magnetoresistance between the pinned layer and the free layer.

28. The method of claim 27, wherein removing the portion of the seed layer material comprises polishing the portion of the seed layer material to be substantially planar with the top surface of the dielectric material layer.

29. The method of claim 27, wherein removing the portion of the dielectric material of the dielectric material layer comprises etching the portion of the dielectric material of the dielectric material layer to form the opening having the opening width.

30. The method of claim 27, further comprising disposing a second seed layer above the dielectric material layer, and wherein disposing the MTJ stack above the dielectric material layer comprises disposing the MTJ stack pillar above and in further electrical contact with the second seed layer.
REDUCING OR AVOIDING METAL DEPOSITION FROM ETCHING MAGNETIC TUNNEL JUNCTION (MTJ) DEVICES, INCLUDING MAGNETIC RANDOM ACCESS MEMORY (MRAM) DEVICES

PRIORITY CLAIM

[0001] The present application claims priority to U.S. Provisional Patent Application Serial No. 62/370,929 filed on August 4, 2016 and entitled "REDUCING OR AVOIDING METAL DEPOSITION FROM ETCHING MAGNETIC TUNNEL JUNCTION (MTJ) DEVICES, INCLUDING MAGNETIC RANDOM ACCESS MEMORY (MRAM) DEVICES," the contents of which are incorporated herein by reference in their entirety.

[0002] The present application also claims priority to U.S. Patent Application Serial No. 15/241,595 filed on August 19, 2016 and entitled "REDUCING OR AVOIDING METAL DEPOSITION FROM ETCHING MAGNETIC TUNNEL JUNCTION (MTJ) DEVICES, INCLUDING MAGNETIC RANDOM ACCESS MEMORY (MRAM) DEVICES," the contents of which are incorporated herein by reference in their entirety.

BACKGROUND

I. Field of the Disclosure

[0003] The technology of the disclosure relates generally to magnetic tunnel junction (MTJ) devices, which may be employed in a resistive memory, such as a magnetic random access memory (MRAM) for example, and more particularly to fabrication of MTJ devices.

II. Background

[0004] Semiconductor storage devices are used in integrated circuits (ICs) in electronic devices to provide data storage. One example of a semiconductor storage device is magnetic random access memory (MRAM). MRAM is non-volatile memory in which data is stored by programming a magnetic tunnel junction (MTJ) as part of an MRAM bit cell. One advantage of an MRAM is that MTJs in MRAM bit cells can retain stored information even when power is turned off. This is because data is stored in the MTJ as a small magnetic element rather than as an electric charge or current.

[0005] In this regard, an MTJ comprises a free ferromagnetic layer ("free layer") disposed above or below a fixed or pinned ferromagnetic layer ("pinned layer"). The free and pinned layers are separated by a tunnel junction or barrier formed by a thin non-magnetic dielectric layer. The magnetic orientation of the free layer can be changed, but the magnetic orientation of the pinned layer remains fixed or "pinned." Data can be stored in the MTJ according to the magnetic orientation between the free and pinned layers. When the magnetic orientations of the free and pinned layers are anti-parallel (AP) to each other, a first memory state exists (e.g., a logical "1"). When the magnetic orientations of the free and pinned layers are parallel (P) to each other, a second memory state exists (e.g., a logical "0"). The magnetic orientations of the free and pinned layers can be sensed to read data stored in the MTJ by sensing a resistance when current flows through the MTJ. Data can also be written to and stored in the MTJ by applying a magnetic field to change the orientation of the free layer to either a P or AP magnetic orientation with respect to the pinned layer.
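Because reading senses resistance, the contrast between the P and AP states is commonly summarized by the tunnel magnetoresistance ratio, a standard figure of merit not spelled out in this disclosure. The sketch below shows that ratio and a reference-comparison read consistent with the AP-as-"1" convention above; the resistance values and function names are illustrative assumptions.

```python
# The AP state reads as a higher resistance than the P state. The standard
# tunnel magnetoresistance ratio quantifies that read margin; the ohm
# values below are illustrative only.

def tmr_ratio(r_parallel: float, r_antiparallel: float) -> float:
    """TMR = (R_AP - R_P) / R_P, the usual read-margin figure of merit."""
    return (r_antiparallel - r_parallel) / r_parallel

def read_bit(r_sensed: float, r_reference: float) -> int:
    """A sense amp compares the cell against a reference resistance:
    above the reference -> AP state (logic 1), below -> P state (logic 0)."""
    return 1 if r_sensed > r_reference else 0

r_p, r_ap = 5_000.0, 10_000.0                           # ohms, illustrative
print(f"TMR = {tmr_ratio(r_p, r_ap):.0%}")              # 100%
print(read_bit(r_sensed=9_500.0, r_reference=7_500.0))  # 1 (AP state)
```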
[0006] Recent developments in MTJ devices involve spin-transfer torque (STT) MRAM devices. In STT-MRAM devices, the spin polarization of carrier electrons, rather than a pulse of a magnetic field, is used to program the state stored in the MTJ (i.e., a "0" or a "1"). Figure 1 illustrates an MTJ 100. The MTJ 100 is provided as part of an MRAM bit cell 102 to store non-volatile data. A metal-oxide semiconductor (MOS) (typically N-type MOS, i.e., NMOS) access transistor 104 is provided to control reading and writing to the MTJ 100. A drain (D) of the access transistor 104 is coupled to a bottom electrode 106 of the MTJ 100, which is coupled to a pinned layer 108 for example. A word line (WL) is coupled to a gate (G) of the access transistor 104. A source (S) of the access transistor 104 is coupled to a voltage source (Vs) through a source line (SL). The voltage source (Vs) provides a voltage (VSL) on the source line (SL). A bit line (BL) is coupled to a top electrode 110 of the MTJ 100, which is coupled to a free layer 112 for example. The pinned layer 108 and the free layer 112 are separated by a tunnel barrier 114.

[0007] With continuing reference to Figure 1, when writing data to the MTJ 100, the gate (G) of the access transistor 104 is activated by activating the word line (WL). A voltage differential between a voltage (VBL) on the bit line (BL) and the voltage (VSL) on the source line (SL) is applied. As a result, a write current (I) is generated between the drain (D) and the source (S) of the access transistor 104. If the magnetic orientation of the MTJ 100 in Figure 1 is to be changed from AP to P, a write current (IAP-P) flowing from the free layer 112 to the pinned layer 108 is generated. This induces an STT at the free layer 112 to change the magnetic orientation of the free layer 112 to P with respect to the pinned layer 108. If the magnetic orientation is to be changed from P to AP, a current (IP-AP) flowing from the pinned layer 108 to the free layer 112 is produced, which induces an STT at the free layer 112 to change the magnetic orientation of the free layer 112 to AP with respect to the pinned layer 108.
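The write mechanics in the paragraph above reduce to a simple direction rule, sketched below. The function and string labels are illustrative assumptions; the mapping itself restates the text (free-to-pinned current for AP-to-P, pinned-to-free for P-to-AP).

```python
# Sketch of the STT write rule: the direction of the write current through
# the MTJ selects the programmed state. Labels are illustrative.

def write_current_direction(target_state: str) -> str:
    """Return the current direction needed to program the target state.
    AP -> P : current IAP-P flows from the free layer to the pinned layer.
    P -> AP : current IP-AP flows from the pinned layer to the free layer."""
    if target_state == "P":
        return "free_layer -> pinned_layer"   # IAP-P
    if target_state == "AP":
        return "pinned_layer -> free_layer"   # IP-AP
    raise ValueError("target_state must be 'P' or 'AP'")

# Writing happens only when the word line turns on the access transistor
# and a bit-line / source-line voltage differential drives the current.
print(write_current_direction("AP"))
```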
[0008] Figure 2 is a schematic diagram illustrating exemplary layers of a conventional perpendicular MTJ (pMTJ) 200 provided in an MTJ stack pillar 202 that can be employed in the MTJ 100 in Figure 1. The pMTJ 200 includes highly reliable pinned/reference layers that can be provided by high perpendicular magnetic anisotropy (PMA) materials (i.e., materials having a perpendicular magnetic easy axis). In this regard, the MTJ stack pillar 202 includes a pinned layer 204 of a high PMA material disposed on a seed layer 205 (e.g., a Tantalum (Ta)/Platinum (Pt) bilayer) above a bottom electrode 206 (e.g., made of Tantalum (Ta) Nitride (N) (TaN)) electrically coupled to the pinned layer 204. A tunnel barrier 208 provided in the form of a Magnesium Oxide (MgO) layer in this example is disposed above the pinned layer 204. The MgO tunnel barrier 208 has been shown to provide a high tunnel magnetoresistance ratio (TMR). A free layer 210, shown as a Cobalt (Co)-Iron (Fe)-Boron (B) (CoFeB) layer in this example, is disposed above the tunnel barrier 208. The CoFeB free layer 210 is a high PMA material that allows for effective current-induced magnetization switching for a low current density. A conductive, non-magnetic capping layer 212, such as a thin Magnesium Oxide (MgO) and/or Tantalum (Ta) material for example, is disposed above the free layer 210 to protect the layers of the MTJ stack pillar 202. A top electrode 214 is disposed above the capping layer 212 to provide an electrical coupling to the free layer 210.

[0009] In the MTJ stack pillar 202 in Figure 2, the magnetic orientation of the pinned layer 204 is fixed. Accordingly, the pinned layer 204 generates a constant magnetic field, also known as a "net stray dipolar field," that may affect, or "bias," a magnetic orientation of the free layer 210. This magnetic field bias, at best, can cause an asymmetry in the magnitude of current necessary to change the magnetic orientation of the free layer 210 (i.e., IP-AP is different from IAP-P). The current necessary to change the magnetic orientation of the free layer 210 towards the bias orientation is reduced, while the current necessary to change the magnetic orientation of the free layer 210 against the bias is increased. At worst, this magnetic field bias can be strong enough to "flip" the value of a memory bit cell employing the pMTJ 200 in Figure 2, thus decreasing the reliability of the subject MRAM. To reduce or prevent a magnetic field bias being provided by the pinned layer 204 on the free layer 210, the pinned layer 204 in the MTJ stack pillar 202 in Figure 2 includes a synthetic anti-ferromagnetic (SAF) structure 216. The SAF structure 216 includes a hard, first anti-parallel ferromagnetic (AP1) layer and a second anti-parallel ferromagnetic (AP2) layer separated by a nonmagnetic anti-ferromagnetic coupling (AFC) layer 218 (e.g., a Ruthenium (Ru) layer). The AP1 and AP2 layers are permanently magnetized and magnetically coupled in opposite orientations to generate opposing magnetic fields. The opposing magnetic fields produce a zero or near-zero net magnetic field towards the free layer 210, thus reducing the magnetic field bias problem at the free layer 210.

[0010] MTJ patterning or etching processes are used to fabricate MTJs, such as the MTJ stack pillar 202 in Figure 2. MTJ etching involves the need to etch complicated metal stacks. Currently known methods for MTJ etching, especially at tight pitches, include ion beam etching (IBE) and chemical etching in a reactive ion etching (RIE) process, both of which have challenges. RIE processes are known to create damage zones around the perimeter of the MTJ. Etching damage in the transition metals (i.e., the pinned layer 204, the free layer 210, and the bottom and top electrodes 206, 214) in the MTJ can affect factors such as the tunnel magnetoresistance ratio (TMR) and energy barrier (Eb) variations, which can result in poor MTJ performance. Another method of MTJ etching involves IBE. IBE may be used for etching materials that have tendencies to not react well to chemical etching. An IBE process can avoid or reduce damage zones relative to RIE processes, but no chemical component is involved to improve etching selectivity. IBE involves directing a charged particle ion beam at a target material to etch the material.

[0011] With both RIE and IBE processes, etched metal can be redeposited at a tunnel barrier of an etched MTJ stack pillar. For example, Figure 3 illustrates exemplary MTJ devices 300(1), 300(2), similar to the MTJ stack pillar 202 in Figure 2, fabricated in a semiconductor wafer 302 that have metal redeposition 304(1), 304(2) around MTJ stack pillars 306(1), 306(2) as a result of etching MTJ stacks and over-etching at the end of an MTJ device etch process. Areas in a dielectric material layer 308 adjacent to the MTJ stack pillars 306(1), 306(2) are over-etched to form over-etch trenches 310(1), 310(2) to avoid horizontal shorts between adjacent devices at smaller pitches. However, bottom electrodes 312(1), 312(2) of the MTJ devices 300(1), 300(2) are also etched as a result of this over-etching. Metal etched from the bottom electrodes 312(1), 312(2) is redeposited as the metal redeposition 304(1), 304(2) around the MTJ stack pillars 306(1), 306(2).
Even tiny amounts of redeposited metal material can cause metal shorts across a tunnel barrier of the MTJ stack pillar 306(1), 306(2), because the tunnel barrier in the MTJ stack pillar 306(1), 306(2) may be as small as one (1) nanometer (nm) in height. This metal redeposition 304(1), 304(2) can lead to metal shorts. Thus, as MTJ devices become scaled down, such as in high-density MRAMs, this redeposition from over-etching can limit the amount of downscaling.

SUMMARY OF THE DISCLOSURE

[0012] Aspects of the present disclosure involve reducing or avoiding metal deposition from etching of magnetic tunnel junction (MTJ) devices. For example, such MTJ devices may be employed to provide resistive memory bit cells for magnetic random access memory (MRAM). In one exemplary aspect disclosed herein, a width of a bottom electrode of an MTJ device is provided to be less than a width of the MTJ stack pillar in the MTJ device. In this manner, when the MTJ device is over-etched to avoid horizontal shorts between adjacent devices, etching of the bottom electrode is reduced or avoided to reduce or avoid metal redeposition on the MTJ stack pillar of the MTJ device. In another exemplary aspect disclosed herein, a metal seed layer for providing a textured conductive coupling of an MTJ stack pillar of the MTJ device to a bottom electrode of the MTJ device is embedded in the bottom electrode. In this manner, the MTJ stack pillar is reduced in height to reduce metal material in the MTJ stack pillar that can be redeposited on a sidewall of the MTJ device during etching. In another exemplary aspect disclosed herein, an MTJ device can be provided that includes an embedded seed layer in a bottom electrode which also has a width less than the width of the MTJ stack pillar in the MTJ device. In this manner, when the MTJ device is etched to form the MTJ stack pillar, etching of the metal material is reduced, which can reduce or avoid metal redeposition on the MTJ stack pillar of the MTJ device. Further, an over-etching of the MTJ device to avoid horizontal shorts between adjacent devices may not have to extend as deep or etch as much of the bottom electrode to avoid metal redeposition on the MTJ stack pillar of the MTJ device from over-etching of the bottom electrode.

[0013] In this regard, in one exemplary aspect, an MTJ device is provided. The MTJ device comprises a bottom electrode having a width. The MTJ device also comprises a seed layer. The MTJ device also comprises an MTJ stack pillar having a width larger than the width of the bottom electrode and disposed above and in electrical contact with the bottom electrode. The MTJ stack pillar comprises a pinned layer disposed above the seed layer, a free layer disposed above the seed layer, and a tunnel barrier disposed between the pinned layer and the free layer. The tunnel barrier is configured to provide a tunnel magnetoresistance between the pinned layer and the free layer.

[0014] In another exemplary aspect, a method of fabricating an MTJ device is provided. The method comprises disposing a dielectric material layer above a lower metal layer in a semiconductor wafer, the dielectric material layer comprising a top surface. The method also comprises removing a portion of dielectric material of the dielectric material layer to form an opening having an opening width. The method also comprises disposing one or more metal materials in the opening to form a bottom electrode having a width of the opening width.
The method also comprises disposing an MTJ stack having a width larger than the opening width above and in electrical contact with the bottom electrode. The MTJ stack comprises a pinned layer disposed above a seed layer, a free layer disposed above the seed layer, and a tunnel barrier disposed between the pinned layer and the free layer. The tunnel barrier is configured to provide a tunnel magnetoresistance between the pinned layer and the free layer. The method also comprises removing material from the MTJ stack to form an MTJ stack pillar having a width larger than the width of the bottom electrode.

[0015] In another exemplary aspect, an MTJ device is provided. The MTJ device comprises a dielectric material layer comprising a top surface. The MTJ device also comprises a bottom electrode disposed in an opening in the dielectric material layer, the bottom electrode comprising a top surface disposed below the top surface of the dielectric material layer. The MTJ device also comprises an embedded seed layer disposed in the opening in the dielectric material layer in contact with the top surface of the bottom electrode. The MTJ device also comprises an MTJ stack pillar disposed above the dielectric material layer and in electrical contact with the embedded seed layer. The MTJ stack pillar comprises a pinned layer, a free layer, and a tunnel barrier disposed between the pinned layer and the free layer. The tunnel barrier is configured to provide a tunnel magnetoresistance between the pinned layer and the free layer.

[0016] In another exemplary aspect, a method of fabricating an MTJ device is provided. The method comprises disposing a dielectric material layer above a lower metal layer in a semiconductor wafer, the dielectric material layer comprising a top surface. The method also comprises removing a portion of dielectric material of the dielectric material layer to form an opening having an opening width. The method also comprises disposing one or more metal materials in the opening below the top surface of the dielectric material layer to form a bottom electrode having a width of the opening width. The method also comprises embedding a seed layer material in the opening in the dielectric material layer in contact with the top surface of the bottom electrode. The method also comprises removing a portion of the seed layer material to be substantially planar with the top surface of the dielectric material layer. The method also comprises disposing an MTJ stack above the dielectric material layer and in electrical contact with a remaining portion of the seed layer material. The MTJ stack comprises a pinned layer, a free layer, and a tunnel barrier disposed between the pinned layer and the free layer. The tunnel barrier is configured to provide a tunnel magnetoresistance between the pinned layer and the free layer. The method also comprises removing material from the MTJ stack to form an MTJ stack pillar.
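For orientation, the embedded-seed-layer method just summarized can be read as an ordered step list. The sketch below paraphrases those steps in code as a condensed restatement of the text, not a definitive process recipe; the list structure and names are illustrative.

```python
# Condensed sketch of the embedded-seed-layer flow summarized above.
# Step wording paraphrases the disclosure; the structure is illustrative.
EMBEDDED_SEED_FLOW = [
    "dispose dielectric material layer above the lower metal layer",
    "etch an opening of the target width in the dielectric layer",
    "fill the opening with metal, recessed below the dielectric top surface (bottom electrode)",
    "embed seed layer material in the opening on the bottom electrode",
    "polish the seed layer planar with the dielectric top surface",
    "dispose the MTJ stack (pinned layer / tunnel barrier / free layer) above",
    "etch the MTJ stack into a pillar wider than the bottom electrode",
]

for step_number, step in enumerate(EMBEDDED_SEED_FLOW, start=1):
    print(f"{step_number}. {step}")
```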
BRIEF DESCRIPTION OF THE FIGURES

[0017] Figure 1 is an exemplary magnetic tunnel junction (MTJ) provided in a magnetic random access memory (MRAM) bit cell to store data as a function of magnetization directions of a pinned layer and a free layer in the MTJ;

[0018] Figure 2 is a schematic diagram illustrating a conventional perpendicular MTJ (pMTJ) and exemplary conventional layers provided therein;

[0019] Figure 3 illustrates a conventional MTJ device in a semiconductor wafer that has metal redeposition as a result of over-etching;

[0020] Figure 4 is a schematic diagram of exemplary MTJ devices in a semiconductor die wherein a width of a bottom electrode of the MTJ devices is less than a width of their MTJ stack pillars, to reduce an amount of metal material that can be over-etched to reduce or avoid metal redeposition;

[0021] Figure 5 is a flowchart illustrating an exemplary process of fabricating the MTJ devices in Figure 4, including etching of an MTJ stack to form an MTJ stack pillar and over-etching the MTJ devices to avoid horizontal shorts between adjacent devices;

[0022] Figures 6A-6E illustrate exemplary process stages during fabrication of an MTJ device in a semiconductor wafer according to the exemplary process in Figure 5, wherein a width of a bottom electrode is less than a width of an MTJ stack pillar to reduce an amount of metal material that can be over-etched to reduce or avoid metal redeposition;

[0023] Figure 7 is a schematic diagram of an exemplary conventional MTJ device that has been over-etched, with metal material from a bottom electrode redeposited on an MTJ stack pillar forming a short across a tunnel barrier;

[0024] Figure 8 is a schematic diagram of an MRAM bit cell employing an MTJ device wherein a width of a bottom electrode is less than a width of an MTJ stack pillar to reduce an amount of metal material that can be over-etched to reduce or avoid metal redeposition;

[0025] Figure 9 is a schematic diagram of other exemplary MTJ devices with a seed layer embedded in a bottom electrode to reduce a height of an MTJ stack pillar to reduce an amount of metal material that is etched to reduce or avoid metal redeposition;

[0026] Figure 10 is a flowchart illustrating an exemplary process of fabricating the MTJ devices in Figure 9, including etching of an MTJ stack to form an MTJ stack pillar and over-etching of the MTJ devices to avoid horizontal shorts between adjacent devices;

[0027] Figures 11A-11G illustrate exemplary process stages during fabrication of an MTJ device in a semiconductor wafer according to the exemplary process in Figure 10;

[0028] Figures 12A-12C illustrate an exemplary process of processing a seed layer embedded with a bottom electrode of an MTJ device to form a textured surface for depositing a perpendicular magnetic anisotropy (PMA) layer as part of an MTJ stack of the MTJ device;

[0029] Figure 13 is a schematic diagram of an MRAM bit cell employing an MTJ device with a seed layer embedded in a bottom electrode to reduce a height of an MTJ stack pillar to reduce an amount of metal material that can be etched to reduce or avoid metal redeposition; and

[0030] Figure 14 is a block diagram of an exemplary processor-based system that includes MTJ devices with reduced or avoided metal redeposition from etching, according to the exemplary aspects disclosed herein.

DETAILED DESCRIPTION

[0031] With reference now to the drawing figures, several exemplary aspects of the present disclosure are described.
present disclosure are described. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.[0032] Aspects of the present disclosure involve reducing or avoiding metal redeposition from etching of magnetic tunnel junction (MTJ) devices. For example, such MTJ devices may be employed to provide resistive memory bit cells for magnetic random access memory (MRAM). In one exemplary aspect disclosed herein, a width of a bottom electrode of an MTJ device is provided to be less than a width of the MTJ stack pillar in the MTJ device. In this manner, when the MTJ device is over-etched to avoid horizontal shorts between adjacent devices, etching of the bottom electrode is reduced or avoided to reduce or avoid metal redeposition on the MTJ stack pillar of the MTJ device. In another exemplary aspect disclosed herein, a metal seed layer for providing a textured conductive coupling of an MTJ stack pillar of the MTJ device to a bottom electrode of the MTJ device is embedded in the bottom electrode. In this manner, the MTJ stack pillar is reduced in height to reduce metal material in the MTJ stack pillar that can be redeposited on a sidewall of the MTJ device during etching. In another exemplary aspect disclosed herein, an MTJ device can be provided that includes an embedded seed layer in a bottom electrode which also has a width less than the width of the MTJ stack pillar in the MTJ device. In this manner, when the MTJ device is etched to form the MTJ stack pillar, the amount of metal material etched is reduced, which can reduce or avoid metal redeposition on the MTJ stack pillar of the MTJ device. Further, an over-etching of the MTJ device to avoid horizontal shorts between adjacent devices may not have to extend as deep or etch as much of the bottom electrode to avoid metal redeposition on the MTJ stack pillar of the MTJ device from over-etching of the bottom electrode.[0033] In this regard, Figure 4 is a schematic diagram of exemplary MTJ devices 400(1), 400(2) in a semiconductor die 402 wherein widths W1 of bottom electrodes 404(1), 404(2) of the MTJ devices 400(1), 400(2) are less than widths W2 of their respective MTJ stack pillars 406(1), 406(2). The semiconductor die 402 can be provided in an integrated circuit (IC) 407. The bottom electrodes 404(1), 404(2) are formed from one or more metal materials, such as Copper (Cu), Tungsten (W), Tantalum (Ta), or Tantalum (Ta) Nitride (N) (TaN) materials, as examples. As will be discussed in more detail below, when the MTJ devices 400(1), 400(2) are over-etched to form over-etch trenches 408(1), 408(2) adjacent to the MTJ stack pillars 406(1), 406(2) as shown in Figure 4, to avoid horizontal shorts between adjacent devices for example, etching of the bottom electrodes 404(1), 404(2) is reduced or avoided. A dielectric material layer 410 adjacent to the bottom electrodes 404(1), 404(2) and the MTJ stack pillars 406(1), 406(2) is etched during over-etching. Thus, redeposited dielectric materials 412(1), 412(2) are a result of etching the dielectric material layer 410, including during an over-etching process, and may be deposited on side walls 414(1), 414(2) of the MTJ stack pillars 406(1), 406(2) as shown in Figure 4. 
However, the redeposited dielectric materials 412(1), 412(2) do not cause metal shorts across the layers in the MTJ stack pillars 406(1), 406(2), including across their respective tunnel barriers 416(1), 416(2). Thus, by providing the bottom electrodes 404(1), 404(2) of the MTJ devices 400(1), 400(2) to have a width W1 less than the width W2 of the MTJ stack pillars 406(1), 406(2), the etching of metal material that can be redeposited around the MTJ stack pillars 406(1), 406(2), including during an over-etching process, is reduced or avoided. [0034] With continuing reference to Figure 4, the MTJ stack pillars 406(1), 406(2) were formed from etching or other removal of materials from an MTJ stack (not shown) of material layers. The MTJ stack pillars 406(1), 406(2) in Figure 4 include seed layers 418(1), 418(2) disposed above and in electrical contact with the respective bottom electrodes 404(1), 404(2). For example, the seed layers 418(1), 418(2) may be layers that are between five (5) and ten (10) nanometers (nm) in thickness. In this example, pinned magnetization layers ("pinned layers") 420(1), 420(2) are disposed above and in electrical contact with the seed layers 418(1), 418(2). The seed layers 418(1), 418(2) provide textured surfaces to promote smooth and epitaxial crystal growth of the pinned layers 420(1), 420(2) in a specific desired orientation to provide desired magnetic properties. The seed layers 418(1), 418(2) can also be processed into smooth surfaces to reduce the roughness of the interface with the bottom electrodes 404(1), 404(2), which could otherwise cause uneven growth imperfections or variations in the pinned layers 420(1), 420(2) due to uneven deposition. These imperfections could propagate through the MTJ stack pillars 406(1), 406(2), thus creating "rough" surfaces at a base of the tunnel barriers 416(1), 416(2) and reducing a tunnel magnetoresistance ratio (TMR). The material chosen for the seed layers 418(1), 418(2) will depend on the materials chosen for the pinned layers 420(1), 420(2). For example, the seed layers 418(1), 418(2) could be selected from metal materials, such as Platinum (Pt), Tantalum (Ta), or Ruthenium (Ru), or alloys such as Ta Nitride (TaN). The tunnel barriers 416(1), 416(2) are disposed above the pinned layers 420(1), 420(2). Free magnetization layers ("free layers") 422(1), 422(2) are disposed above the tunnel barriers 416(1), 416(2).[0035] With continuing reference to Figure 4, the widths W1 of the bottom electrodes 404(1), 404(2) may be deemed a largest cross-section width of the bottom electrodes 404(1), 404(2) if the bottom electrodes 404(1), 404(2) do not have a straight vertical profile in the Y direction. For example, the widths W1 of the bottom electrodes 404(1), 404(2) may be between fifteen (15) and fifty (50) nanometers (nm) as non-limiting examples. Further, the bottom electrodes 404(1), 404(2) may be made from two or more metal materials, such as first metal materials 424(1), 424(2), such as Tungsten (W), and second metal materials 426(1), 426(2), such as a Tantalum Nitride (TaN) material, disposed above the first metal materials 424(1), 424(2). Further, the widths W2 of the MTJ stack pillars 406(1), 406(2) labeled in Figure 4 are the largest cross-section widths of the MTJ stack pillars 406(1), 406(2) since the MTJ stack pillars 406(1), 406(2) do not have a completely vertical profile as a result of etching in this example. 
For example, the widths W2 of the MTJ stack pillars 406(1), 406(2) may be between twenty (20) and sixty (60) nanometers (nm) as non-limiting examples. Hard masks (HM) 428(1), 428(2) disposed above the MTJ stack pillars 406(1), 406(2) control the shape of the etched and formed MTJ stack pillars 406(1), 406(2), and thus may control the distances and locations of the over-etch trenches 408(1), 408(2) formed in the dielectric material layer 410.[0036] With continuing reference to Figure 4, the over-etch trenches 408(1), 408(2) may extend a depth D1 below bottom surfaces 430(1), 430(2) of the MTJ stack pillars 406(1), 406(2) or a top surface 432 of the dielectric material layer 410. For example, the bottom surfaces 430(1), 430(2) of the MTJ stack pillars 406(1), 406(2) may be bottom surfaces of the seed layers 418(1), 418(2) at the interface between the seed layers 418(1), 418(2) and top surfaces 434(1), 434(2) of the bottom electrodes 404(1), 404(2). For example, this depth D1 may be between approximately five (5) and twenty (20) nanometers (nm). Note that the over-etch trenches 408(1), 408(2) in this example do not extend into the bottom electrodes 404(1), 404(2), because the bottom electrodes 404(1), 404(2) are reduced in the horizontal direction X due to their reduced widths W1, as discussed above. The over-etch trenches 408(1), 408(2) are disposed a minimum distance D2 from outer surfaces 436(1), 436(2) of the bottom electrodes 404(1), 404(2). A distance between the over-etch trenches 408(1), 408(2) and the outer surfaces 436(1), 436(2) of the bottom electrodes 404(1), 404(2) may vary between the minimum distance D2 and a maximum distance D3 if the etch profile of the over-etch trenches 408(1), 408(2) is not straight in the vertical Y direction, as shown in Figure 4. As an example, the minimum distance D2 between the over-etch trenches 408(1), 408(2) and the outer surfaces 436(1), 436(2) of the bottom electrodes 404(1), 404(2) may be at least two (2) nanometers (nm). As an example, the maximum distance D3 between the over-etch trenches 408(1), 408(2) and the outer surfaces 436(1), 436(2) of the bottom electrodes 404(1), 404(2) may be at least fifty (50) nanometers (nm). The over-etch trenches 408(1), 408(2) may also extend below the dielectric material layer 410 into an inter-metal block layer 438 and/or a lower metal layer 440 (e.g., a metal 2 (M2) or metal 3 (M3) layer) above which the dielectric material layer 410 is disposed in the semiconductor die 402.[0037] To further discuss fabrication of an MTJ device that has an MTJ stack pillar having a larger width than the width of its bottom electrode, such as the MTJ devices 400(1), 400(2) in Figure 4, Figures 5-6E are provided. Figure 5 is a flowchart illustrating an exemplary process 500 of fabricating an MTJ device, such as the MTJ devices 400(1), 400(2) in Figure 4. Figures 6A-6E illustrate exemplary process stages 600(1)-600(5) during the fabrication of an MTJ device 400 in a semiconductor wafer 602 according to the exemplary process 500 in Figure 5. The details discussed above with regard to the exemplary MTJ devices 400(1), 400(2) in Figure 4 are also applicable to the MTJ device 400 fabricated in the process stages 600(1)-600(5) in Figures 6A-6E, and thus will not be repeated. 
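The dimensional relationships recited in the preceding paragraphs lend themselves to a simple consistency check before the process stages are walked through. The sketch below is illustrative only and is not part of the disclosed process: the function name and structure are hypothetical, and the numeric bounds are the non-limiting example ranges given above for the widths W1 and W2 and the distances D1 and D2.

# Illustrative design-rule check for the example MTJ geometry described
# above. All values are in nanometers; the bounds are the non-limiting
# example ranges from the text, and this sketch is not a disclosed
# verification procedure.

def check_mtj_geometry(w1, w2, d1, d2):
    """Return a list of rule violations for one MTJ device.

    w1: bottom electrode width (example range: 15-50 nm)
    w2: MTJ stack pillar width (example range: 20-60 nm)
    d1: over-etch trench depth below the pillar bottom (example: 5-20 nm)
    d2: minimum trench-to-electrode distance (example: at least 2 nm)
    """
    violations = []
    if not w1 < w2:
        violations.append("bottom electrode must be narrower than pillar")
    if not 15 <= w1 <= 50:
        violations.append("W1 outside example range 15-50 nm")
    if not 20 <= w2 <= 60:
        violations.append("W2 outside example range 20-60 nm")
    if not 5 <= d1 <= 20:
        violations.append("D1 outside example range 5-20 nm")
    if d2 < 2:
        violations.append("D2 below the 2 nm example minimum")
    return violations

# Example: a geometry consistent with the example ranges in the text.
print(check_mtj_geometry(w1=30, w2=40, d1=10, d2=3))  # -> []

The central relationship, W1 < W2, is what keeps the over-etch trenches 408(1), 408(2) in the dielectric material layer 410 rather than in the metal of the bottom electrodes 404(1), 404(2).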
Common elements between the MTJ devices 400(1), 400(2) in Figure 4 and elements shown in the process stages 600(1)-600(5) in Figures 6A-6E are shown with common element numbers.[0038] In this regard, Figure 6A illustrates a first exemplary process stage 600(1) of fabricating an MTJ device that will have a final MTJ stack pillar with a larger width than a width of the bottom electrode. As shown in Figure 6A, the dielectric material layer 410 is disposed above the lower metal layer 440 in a semiconductor wafer 602 (block 502 in Figure 5). The dielectric material layer 410 may be disposed on an inter-metal block layer 438 that is disposed on the lower metal layer 440. A top surface 432 will be formed on the dielectric material layer 410.[0039] Further, as shown in an exemplary process stage 600(2) in Figure 6B, a bottom electrode 404 is formed in the dielectric material layer 410 such that a dielectric material 604 from the dielectric material layer 410 is adjacent to an outer surface 436 of the bottom electrode 404. For example, a portion of the dielectric material 604 of the dielectric material layer 410 is removed to form an opening 606 having an opening width W1 (block 504 in Figure 5). Thereafter, one or more metal materials 608, which are the first and second metal materials 424, 426 in this example, are disposed in the opening 606 to form the bottom electrode 404 also having the opening width W1 (block 506 in Figure 5). If it is desired to provide more than one metal material 608 in the opening 606, a first metal material 608(1) may be disposed in the opening 606 as shown in Figure 6B, followed by a recessing of the first metal material 608(1) below the top surface 432 of the dielectric material layer 410. Then, a second metal material 608(2) may be disposed in the opening 606 above and in contact with the first metal material 608(1) to form the bottom electrode 404.[0040] Note that the opening 606 in the process stage 600(2) in Figure 6B extends through the inter-metal block layer 438, because the bottom electrode 404 will be electrically connected to another device in the semiconductor wafer 602 through the lower metal layer 440 in this example. Further, note that a top surface 610 of the bottom electrode 404 could be further processed, such as through a chemical mechanical polishing (CMP) process, to provide a smooth top surface 610 that is substantially planar with the top surface 432 of the dielectric material layer 410.[0041] Next, as shown in an exemplary process stage 600(3) in Figure 6C, an MTJ stack 406S of a width larger than the opening width W1 is disposed above and in electrical contact with the bottom electrode 404 (block 508 in Figure 5). The MTJ stack 406S comprises a plurality of layers that have not yet been further processed, such as etched, to form MTJ stack pillars for MTJ devices. The MTJ stack 406S comprises a pinned layer 420L disposed above a seed layer 418L, a free layer 422L disposed above the seed layer 418L, and a tunnel barrier layer 416L disposed between the pinned layer 420L and the free layer 422L. The tunnel barrier layer 416L is configured to provide a tunnel magnetoresistance between the pinned layer 420L and the free layer 422L. After the MTJ stack 406S is disposed on the dielectric material layer 410 in contact with the bottom electrode 404, the MTJ stack 406S may be annealed as an example to provide the desired electrical properties in the MTJ stack 406S. 
A hard mask layer 428L may then be disposed on the MTJ stack 406S to protect portions of the MTJ stack 406S during etching, such as ion beam etching (IBE), to form an MTJ stack pillar, as shown in a process stage 600(4) in Figure 6D.[0042] Note that in this example, the pinned layer 420L of the MTJ stack 406S as shown in Figure 6C is disposed directly above the seed layer 418L and below the tunnel barrier layer 416L, and the free layer 422L is disposed above the tunnel barrier layer 416L. However, note that in the alternative, the pinned layer 420L could be disposed above the tunnel barrier layer 416L, with the free layer 422L disposed below the tunnel barrier layer 416L. [0043] As shown in the exemplary process stage 600(4) in Figure 6D, material is removed from the MTJ stack 406S to form the MTJ stack pillar 406 having a width W2 larger than the width W1 of the bottom electrode 404 (block 510 in Figure 5). For example, a lithography process may be used to form openings in the hard mask layer 428L to then remove portions of the hard mask layer 428L to leave a remaining hard mask 428 above the location where the MTJ stack pillar 406 is to be formed. Then, as an example, an ion beam 612 may be directed toward the MTJ stack 406S to form the MTJ stack pillar 406, as shown in Figure 6D, to form the MTJ device 400. The hard mask 428 protects the MTJ stack 406S so that it is etched to the desired width. Then, as shown in an exemplary process stage 600(5) in Figure 6E, an over-etching process may be employed to form the over-etch trenches 408 to avoid or reduce horizontal metal shorts between adjacent devices as previously described and shown in Figure 4.[0044] Compare the MTJ device 400 in Figure 6E to the MTJ device 300 in Figure 7, which is the MTJ device 300 previously discussed above with reference to Figure 3. Note that the bottom electrode 312 of the MTJ device 300 has been etched during the over-etching process, resulting in metal redeposition 304 around an MTJ stack pillar 306. This metal redeposition 304 risks metal shorts in the MTJ stack pillar 306. Further processing steps may be required to clean and remove this metal redeposition 304 to avoid metal shorts. In the MTJ device 400 in Figure 6E, the width W1 of the bottom electrode 404 is less than the width W2 of the MTJ stack pillar 406. Thus, when the MTJ device 400 is over-etched to form the over-etch trench 408 adjacent to the MTJ stack pillar 406, to avoid horizontal shorts between adjacent devices for example, etching of the bottom electrode 404 is reduced or avoided. The redeposited dielectric material 412 is a result of etching the dielectric material layer 410, including during an over-etching process, and may be deposited on the side walls 414(1), 414(2) of the MTJ stack pillars 406(1), 406(2) as shown in Figure 4. However, the redeposited dielectric materials 412(1), 412(2) do not cause metal shorts across the layers in the MTJ stack pillars 406(1), 406(2), including across their respective tunnel barriers 416(1), 416(2). 
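For context on why a metal short across a tunnel barrier is so damaging, note that the read signal of an MTJ is set by its tunnel magnetoresistance ratio (TMR), conventionally defined from the antiparallel (AP) and parallel (P) state resistances as

\[ \mathrm{TMR} = \frac{R_{AP} - R_{P}}{R_{P}} \]

A conductive redeposition path across the tunnel barrier shunts R_AP toward R_P, driving the TMR, and with it the read margin, toward zero; redeposited dielectric creates no such shunt, which is why the over-etch behavior described here is benign.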
Thus, by providing the bottom electrodes 404(1), 404(2) of the MTJ devices 400(1), 400(2) to have a width W1 less than the width W2 of the MTJ stack pillars 406(1), 406(2), the etching of metal material that can be redeposited around the MTJ stack pillars 406(1), 406(2), including during an over-etching process, is reduced or avoided.[0045] Figure 8 is a schematic diagram of a memory bit cell 800 employing the MTJ device 400 in Figure 6D as a storage element when used in a resistive memory 802, such as an MRAM, for example. The resistive memory 802 may be included in an IC 804. As shown in Figure 8, the memory bit cell 800 includes an access transistor 806 for controlling read and write operations to the MTJ device 400 acting as a storage element. The access transistor 806 is provided in the form of an NMOS transistor in this example, that includes a gate (G) coupled to a word line (WL), a first electrode 808 (e.g., a drain), and a second electrode 810 (e.g., a source). The bottom electrode 404 of the MTJ device 400 is coupled to the first electrode 808 of the access transistor 806. A top electrode 812 is electrically coupled to the free layer 422 of the MTJ device 400 and to a bit line (BL) to couple the MTJ device 400 to the bit line (BL). When accessing the MTJ device 400, the MTJ device 400 is configured to receive a current IAP-P or IP-AP flowing between the top and bottom electrodes 812, 404 as a result of the voltage differential between a voltage (VBL) coupled to the bit line (BL) and a voltage (Vs) when a signal 814 on the word line (WL) activates the access transistor 806 to couple the voltage (Vs) to the bottom electrode 404. The amount of current IAP-P or IP-AP is controlled by the voltage (VBL) and the voltage (Vs) and whether the operation is a read or write operation. Write operations take more current to change the magnetization state of the free layer 422. The direction of the current IAP-P or IP-AP controls whether a write operation changes the magnetization state of the free layer 422 from an AP to a P state, or vice versa. During a read operation, the amount of current IAP-P or IP-AP is controlled by the resistance of the MTJ device 400, which depends on its magnetic state AP or P.[0046] Another way to reduce or avoid metal redeposition on an MTJ stack pillar during etching and over-etching is to reduce the amount of metal material in an etched MTJ stack. For example, if the height of an MTJ stack can be reduced while achieving the desired performance, there is less metal material in the MTJ stack that may be etched and redeposited.[0047] In this regard, Figure 9 is a schematic diagram of other exemplary MTJ devices 900(1), 900(2) in a semiconductor die 902 wherein respective metal seed layers 918(1), 918(2) for providing a textured conductive coupling of MTJ stack pillars 906(1), 906(2) of the MTJ devices 900(1), 900(2) to bottom electrodes 904(1), 904(2) are embedded in the bottom electrodes 904(1), 904(2). Thus, seed layers are not included in the MTJ stack pillars 906(1), 906(2) in the MTJ devices 900(1), 900(2) in this example. As will be discussed in more detail below, the seed layers 918(1), 918(2) are embedded with the bottom electrodes 904(1), 904(2) and located below a top surface 932 of a dielectric material layer 910 to reduce the overall height H1 of the MTJ stack pillars 906(1), 906(2). 
By reducing the height H1 of the MTJ stack pillars 906(1), 906(2), the amount of metal material that is removed or etched to form the MTJ stack pillars 906(1), 906(2) can be reduced, thus avoiding or reducing the amount of metal redeposition on side walls 914(1), 914(2) of the MTJ stack pillars 906(1), 906(2).[0048] Further, as will be discussed in more detail below, the MTJ devices 900(1), 900(2) that have the seed layers 918(1), 918(2) embedded with the bottom electrodes 904(1), 904(2) may also optionally provide for widths W3 of the bottom electrodes 904(1), 904(2) of the MTJ devices 900(1), 900(2) to be less than the widths W4 of their respective MTJ stack pillars 906(1), 906(2), similar to the MTJ devices 400(1), 400(2) in Figure 4. Thus, etching of the bottom electrodes 904(1), 904(2) of the MTJ devices 900(1), 900(2) during etching and/or over-etching processes may be reduced or avoided to reduce or avoid metal redeposition on the MTJ stack pillars 906(1), 906(2) from etched material of the bottom electrodes 904(1), 904(2). The redeposited dielectric materials 912(1), 912(2) are a result of etching the dielectric material layer 910, including during an over-etching process, and may be deposited on the side walls 914(1), 914(2) of the MTJ stack pillars 906(1), 906(2) as shown in Figure 9. However, the redeposited dielectric materials 912(1), 912(2) do not cause metal shorts across the layers in the MTJ stack pillars 906(1), 906(2).[0049] In this regard, with reference to Figure 9, the exemplary MTJ devices 900(1), 900(2) are shown in the semiconductor die 902. The semiconductor die 902 can be provided in an IC 907. The bottom electrodes 904(1), 904(2) are formed from one or more metal materials, such as Copper (Cu), Tungsten (W), Tantalum (Ta), or a Tantalum (Ta) Nitride (N) (TaN) material, as examples. As will be discussed below in more detail, when the MTJ stack pillars 906(1), 906(2) of the MTJ devices 900(1), 900(2) are formed, metal redeposition on the side walls 914(1), 914(2) may occur. Further, as will be discussed in more detail below, when the MTJ devices 900(1), 900(2) are over-etched to form over-etch trenches 908(1), 908(2) adjacent to the MTJ stack pillars 906(1), 906(2) as shown in Figure 9, to avoid horizontal shorts between adjacent devices for example, it may be desired to avoid or reduce etching of the bottom electrodes 904(1), 904(2). The dielectric material layer 910 adjacent to the bottom electrodes 904(1), 904(2) and the MTJ stack pillars 906(1), 906(2) is etched during over-etching. Thus, the redeposited dielectric materials 912(1), 912(2) are a result of etching the dielectric material layer 910, including during an over-etching process, and may be deposited on the side walls 914(1), 914(2) of the MTJ stack pillars 906(1), 906(2) as shown in Figure 9. However, the redeposited dielectric materials 912(1), 912(2) do not cause metal shorts across the layers in the MTJ stack pillars 906(1), 906(2), including across their respective tunnel barriers 916(1), 916(2).[0050] With continuing reference to Figure 9, the MTJ stack pillars 906(1), 906(2) were formed from etching or other removal of materials from an MTJ stack (not shown) of material layers. The bottom electrodes 904(1), 904(2) in the MTJ devices 900(1), 900(2) in Figure 9 include the embedded seed layers 918(1), 918(2). 
By "embedded" seed layers 918(1), 918(2), it is meant that a metal seed material is fabricated together with the bottom electrodes 904(1), 904(2) before an MTJ stack is disposed above and in electrical contact with the bottom electrodes 904(1), 904(2), so that the seed layers 918(1), 918(2) need not be included in the MTJ stack pillars 906(1), 906(2). Embedding the seed layers 918(1), 918(2) with the bottom electrodes 904(1), 904(2) can allow a reduced height H1 of an MTJ stack, which reduces the amount of metal material that is removed from the MTJ stack to form the MTJ stack pillars 906(1), 906(2). For example, the height H1 of the MTJ stack pillars 906(1), 906(2) may be between approximately five (5) and twenty (20) nanometers (nm), including between approximately seven (7) and twenty (20) nanometers (nm), and between approximately five (5) and fifteen (15) nanometers (nm), as non-limiting examples. In the example in Figure 9, the bottom electrodes 904(1), 904(2) are disposed in a capping layer 948 provided in the dielectric material layer 910. The capping layer 948 may be provided for patterning openings for depositing metal materials therein to form the bottom electrodes 904(1), 904(2) and the embedded seed layers 918(1), 918(2). For example, the capping layer 948 may be a Silicon (Si) Nitride (N) (SiN) material. An additional, second seed layer 950 is disposed over the capping layer 948. The second seed layer 950 may be a thin layer that is less than twenty (20) Angstroms (Å) thick as an example. The second seed layer 950 can be provided and processed to provide the desired textured interface to the MTJ stack pillars 906(1), 906(2) in case the process of embedding the seed layers 918(1), 918(2) does not provide the desired textured surface. For example, the embedded seed layers 918(1), 918(2) may be polished, such as through a CMP process, wherein depositing the second seed layer 950 provides a more uniform or smooth textured surface for interfacing with the MTJ stack pillars 906(1), 906(2). A CMP buffer layer 952 may be disposed over the capping layer 948 in the dielectric material layer 910 before the second seed layer 950 is provided, to provide a buffer layer for polishing the second seed layer 950. When the bottom electrodes 904(1), 904(2) are formed in the capping layer 948 and the dielectric material layer 910, the second seed layer 950 is then disposed on the embedded seed layers 918(1), 918(2) to be in electrical contact with the embedded seed layers 918(1), 918(2) and thus the bottom electrodes 904(1), 904(2).[0051] With continuing reference to Figure 9, the MTJ stack pillars 906(1), 906(2) include pinned magnetization layers ("pinned layers") 920(1), 920(2) that are disposed above and in electrical contact with the second seed layer 950, to provide the pinned layers 920(1), 920(2) in contact with the seed layers 918(1), 918(2) and bottom electrodes 904(1), 904(2). The seed layers 918(1), 918(2) provide textured surfaces to promote smooth and epitaxial crystal growth of the pinned layers 920(1), 920(2) in a specific desired orientation to provide desired magnetic properties. The second seed layer 950 and embedded seed layers 918(1), 918(2) can also be processed into a smooth surface to reduce roughness that could otherwise cause uneven growth imperfections or variations in the pinned layers 920(1), 920(2) due to uneven deposition. 
These imperfections could propagate through the MTJ stack pillars 906(1), 906(2), thus creating "rough" surfaces at a base of the tunnel barriers 916(1), 916(2) and reducing a tunnel magnetoresistance ratio (TMR). The material chosen for the seed layers 918(1), 918(2) will depend on the materials chosen for the pinned layers 920(1), 920(2). For example, the seed layers 918(1), 918(2) could be selected from metal materials, such as Platinum (Pt), Tantalum (Ta), or Ruthenium (Ru), or alloys such as Tantalum (Ta) Nitride (N) (TaN). The tunnel barriers 916(1), 916(2) are disposed above the pinned layers 920(1), 920(2). Free magnetization layers ("free layers") 922(1), 922(2) are disposed above the tunnel barriers 916(1), 916(2).[0052] With continuing reference to Figure 9, as discussed above, the bottom electrodes 904(1), 904(2) may be provided of a smaller width than the MTJ stack pillars 906(1), 906(2) to avoid or reduce etching of the bottom electrodes 904(1), 904(2) from causing metal redeposition on the MTJ stack pillars 906(1), 906(2). Thus, for example, as shown in Figure 9, the widths W3 of the bottom electrodes 904(1), 904(2) may be deemed a largest cross-section width of the bottom electrodes 904(1), 904(2) if the bottom electrodes 904(1), 904(2) do not have a straight vertical profile in the Y direction. For example, the widths W3 of the bottom electrodes 904(1), 904(2) may be between fifteen (15) and fifty (50) nanometers (nm) as non-limiting examples. Further, the bottom electrodes 904(1), 904(2) may be made of a metal material 924(1), 924(2), such as Tungsten (W). Further, the widths W4 of the MTJ stack pillars 906(1), 906(2) labeled in Figure 9 are the largest cross-section widths of the MTJ stack pillars 906(1), 906(2) since the MTJ stack pillars 906(1), 906(2) do not have a completely vertical profile as a result of etching in this example. For example, the widths W4 of the MTJ stack pillars 906(1), 906(2) may be between twenty (20) and sixty (60) nanometers (nm) as non-limiting examples. Hard masks (HM) 928(1), 928(2) disposed above the MTJ stack pillars 906(1), 906(2) control the shape of the etched and formed MTJ stack pillars 906(1), 906(2), and thus may control the distances and locations of the over-etch trenches 908(1), 908(2) formed in the dielectric material layer 910.[0053] With continuing reference to Figure 9, the over-etch trenches 908(1), 908(2) may extend a depth D4 below bottom surfaces 930(1), 930(2) of the MTJ stack pillars 906(1), 906(2). For example, the bottom surfaces 930(1), 930(2) of the MTJ stack pillars 906(1), 906(2) may be bottom surfaces of the pinned layers 920(1), 920(2) at the interface with the second seed layer 950. For example, this depth D4 may be between approximately five (5) and twenty (20) nanometers (nm). Note that the over-etch trenches 908(1), 908(2) in this example do not extend into the bottom electrodes 904(1), 904(2), because the bottom electrodes 904(1), 904(2) are reduced in the horizontal direction X due to their reduced widths W3, as discussed above. The over-etch trenches 908(1), 908(2) are disposed a minimum distance D5 from outer surfaces 936(1), 936(2) of the bottom electrodes 904(1), 904(2). The distance between the over-etch trenches 908(1), 908(2) and the outer surfaces 936(1), 936(2) of the bottom electrodes 904(1), 904(2) may vary between the minimum distance D5 and a maximum distance D6 if the etch profile of the over-etch trenches 908(1), 908(2) is not straight in the vertical Y direction, as shown in Figure 9. 
As an example, the minimum distance D5 between the over-etch trenches 908(1), 908(2) and the outer surfaces 936(1), 936(2) of the bottom electrodes 904(1), 904(2) may be at least two (2) nanometers (nm). As an example, the maximum distance D6 between the over-etch trenches 908(1), 908(2) and the outer surfaces 936(1), 936(2) of the bottom electrodes 904(1), 904(2) may be at least five (5) nanometers (nm). The over-etch trenches 908(1), 908(2) may also extend below the dielectric material layer 910 into a lower metal layer 940 (e.g., a metal 2 (M2) or metal 3 (M3) layer) and/or an inter-metal block layer 938 that contains vertical interconnect accesses (VIAs) 944(1), 944(2) interconnected to metal islands 942(1), 942(2) in the lower metal layer 940.[0054] To further discuss fabrication of an MTJ device that has an MTJ stack pillar having a larger width than the width of its bottom electrode, such as the MTJ devices 900(1), 900(2) in Figure 9, Figures 10-11G are provided. Figure 10 is a flowchart illustrating an exemplary process 1000 of fabricating an MTJ device, such as the MTJ devices 900(1), 900(2) in Figure 9. Figures 11A-11G illustrate exemplary process stages 1100(1)-1100(7) during the fabrication of the MTJ device 900 in a semiconductor wafer 1102 according to the exemplary process 1000 in Figure 10. The details discussed above with regard to the exemplary MTJ devices 900(1), 900(2) in Figure 9 are also applicable to the MTJ device 900 fabricated in the process stages 1100(1)-1100(7) in Figures 11A-11G, and thus will not be repeated. Common elements between the MTJ devices 900(1), 900(2) in Figure 9 and elements shown in the process stages 1100(1)-1100(7) in Figures 11A-11G are shown with common element numbers.[0055] In this regard, Figure 11A illustrates a first exemplary process stage 1100(1) of fabricating an MTJ device that will have an embedded seed layer with a bottom electrode to reduce the height of an MTJ stack pillar. As shown in Figure 11A, the dielectric material layer 910 is disposed above the lower metal layer 940 and the inter-metal block layer 938 in a semiconductor wafer 1102 (block 1002 in Figure 10). In this example, the capping layer 948 is disposed above the lower metal layer 940 and the inter-metal block layer 938 in the semiconductor wafer 1102. A top surface 932 will be formed on the dielectric material layer 910.[0056] Further, as shown in an exemplary process stage 1100(2) in Figure 11B, the CMP buffer layer 952 is optionally disposed on the capping layer 948 to provide a layer for performing CMP as previously discussed. A patterned layer 1104 is then disposed above the CMP buffer layer 952 as part of a lithography process to form the bottom electrode 904 as shown in Figure 11C. In this regard, portions of dielectric materials 1106, 1108 from the capping layer 948 and the CMP buffer layer 952 are removed to form an opening 1110 having an opening width W3 (block 1004 in Figure 10). The dielectric materials 1106, 1108 may be etched according to the patterned layer 1104 to form the opening 1110. A top surface 1112 of the metal island 942(1) may form an etch stop for etching of the dielectric materials 1106, 1108 to form the opening 1110.[0057] Thereafter, as shown in an exemplary process stage 1100(3) in Figure 11C, one or more metal materials 1114 are disposed in the opening 1110 to form the bottom electrode 904 also having the opening width W3 (block 1006 in Figure 10). 
A seed layer material 1116 is then disposed in the opening 1110 to be embedded with the metal material(s) 1114, forming the bottom electrode 904 (block 1008 in Figure 10). This is also shown in Figure 12A. Note that the opening 1110 in the process stage 1100(3) in Figure 11C extends to the lower metal layer 940, because the bottom electrode 904 will be electrically connected to another device in the semiconductor wafer 1102 through the lower metal layer 940 in this example. Note that the initial deposition of the seed layer material 1116 may form a highly rough top surface 1118 because of the thickness of the seed layer material 1116 (e.g., 10-20 nanometers (nm)) embedded with the bottom electrode 904 in the opening. This is also shown in Figure 11C, where the seed layer material 1116 extends outside of the opening 1110. Thus, as shown in an exemplary process stage 1100(4) in Figure 11D, a CMP process may be performed to planarize the seed layer 918 formed from the seed layer material 1116 (block 1010 in Figure 10). The seed layer 918 may be planarized to form a smooth surface 1120 embedded with the bottom electrode 904 that is substantially planar with the top surface 932 of the CMP buffer layer 952, as shown in Figure 11D, and also in Figure 12B.[0058] Next, as shown in an exemplary process stage 1100(5) in Figure 11E, the optional second seed layer 950 is disposed above the CMP buffer layer 952 and the embedded seed layer 918. As discussed above, the second seed layer 950 may be provided for texture enhancement for coupling to the pinned layers 920(1), 920(2) of the MTJ stack pillars 906(1), 906(2), as shown in Figure 9. This is also shown in Figure 12C. The second seed layer 950 and embedded seed layer 918 can also be processed into a smooth surface to reduce roughness that could otherwise cause uneven growth imperfections or variations in the pinned layers 920(1), 920(2) (see Figure 9) due to uneven deposition. These imperfections could propagate through the MTJ stack pillars 906(1), 906(2) shown in Figure 9, thus creating "rough" surfaces at a base of the tunnel barriers 916(1), 916(2) and reducing a tunnel magnetoresistance ratio (TMR).[0059] Next, as shown in an exemplary process stage 1100(6) in Figure 11F, an MTJ stack 906S of a width larger than the opening width W3 of the opening 1110 is disposed above and in electrical contact with the bottom electrode 904 (block 1012 in Figure 10). The MTJ stack 906S comprises a plurality of layers that have not yet been further processed, such as etched, to form MTJ stack pillars for MTJ devices. For example, the MTJ stack 906S may be fifteen (15) nanometers (nm) in height, a reduced height, because the seed layer 918 is not included in the stack and is instead embedded with the bottom electrode 904. The MTJ stack 906S comprises a pinned layer 920L disposed above the second seed layer 950 and the seed layer 918, a tunnel barrier layer 916L disposed above the pinned layer 920L, and a free layer 922L disposed above the tunnel barrier layer 916L. The tunnel barrier layer 916L is configured to provide a tunnel magnetoresistance between the pinned layer 920L and the free layer 922L. After the MTJ stack 906S is disposed on the dielectric material layer 910 in contact with the seed layer(s) 950, 918 and the bottom electrode 904, the MTJ stack 906S may be annealed as an example to provide the desired electrical properties in the MTJ stack 906S. 
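The height savings from embedding the seed layer can be made concrete from the example thicknesses given in this disclosure. The arithmetic below is illustrative only, combining the five (5) to ten (10) nm seed layer thickness quoted for the first device example with the fifteen (15) nm stack height quoted here:

\[ H_{\text{stack}}^{\text{seed-in-stack}} \approx 15\,\text{nm} + (5\text{ to }10)\,\text{nm} = 20\text{ to }25\,\text{nm}, \qquad H_{\text{stack}}^{\text{embedded}} \approx 15\,\text{nm} \]

\[ \text{height reduction} \approx \frac{5}{20}\text{ to }\frac{10}{25} = 25\%\text{ to }40\% \]

Because the etched stack height is roughly proportional to the material available for redeposition on the pillar side walls, a reduction of this order directly reduces the redeposition risk.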
A hard mask layer 928L may then be disposed on the MTJ stack 906S to protect portions of the MTJ stack 906S during etching, such as IBE, to form an MTJ stack pillar, as shown in the process stage 1100(7) in Figure 11G.[0060] Note that in this example, the pinned layer 920L of the MTJ stack 906S as shown in Figure 11F is disposed below the tunnel barrier layer 916L, and the free layer 922L is disposed above the tunnel barrier layer 916L. However, note that in the alternative, the pinned layer 920L could be disposed above the tunnel barrier layer 916L, with the free layer 922L disposed below the tunnel barrier layer 916L. [0061] As shown in an exemplary process stage 1100(7) in Figure 11G, material is removed from the MTJ stack 906S to form the MTJ stack pillar 906 having a width W4 larger than the width W3 of the bottom electrode 904 (block 1014 in Figure 10). For example, a lithography process may be used to form openings in a hard mask layer (not shown) to then remove portions of the hard mask layer to leave a remaining hard mask 928 above the location where the MTJ stack pillar 906 is to be formed. Then, as an example, an ion beam 1122 may be directed toward the MTJ stack 906S in Figure 11F to form the MTJ stack pillar 906 shown in Figure 11G, forming the MTJ device 900. The hard mask 928 protects the MTJ stack 906S so that it is etched to the desired width. Then, as also shown in the exemplary process stage 1100(7) in Figure 11G, an over-etching process may be employed to form the over-etch trenches 908 to avoid or reduce horizontal metal shorts between adjacent devices as previously described and shown in Figure 9.[0062] Figure 13 is a schematic diagram of a memory bit cell 1300 employing the MTJ device 900 in Figure 11G as a storage element when used in a resistive memory 1302, such as an MRAM, for example. The resistive memory 1302 may be included in an IC 1304. As shown in Figure 13, the memory bit cell 1300 includes an access transistor 1306 for controlling read and write operations to the MTJ device 900 acting as a storage element. The access transistor 1306 is provided in the form of an NMOS transistor in this example, that includes a gate (G) coupled to a word line (WL), a first electrode 1308 (e.g., a drain), and a second electrode 1310 (e.g., a source). The bottom electrode 904 of the MTJ device 900 is coupled to the first electrode 1308 of the access transistor 1306. A top electrode 1312 is electrically coupled to the free layer 922 of the MTJ device 900 and to a bit line (BL) to couple the MTJ device 900 to the bit line (BL). When accessing the MTJ device 900, the MTJ device 900 is configured to receive a current IAP-P or IP-AP flowing between the top and bottom electrodes 1312, 904 as a result of the voltage differential between a voltage (VBL) coupled to the bit line (BL) and a voltage (Vs) when a signal 1314 on the word line (WL) activates the access transistor 1306 to couple the voltage (Vs) to the bottom electrode 904. The amount of current IAP-P or IP-AP is controlled by the voltage (VBL) and the voltage (Vs) and whether the operation is a read or write operation. Write operations take more current to change the magnetization state of the free layer 922. The direction of the current IAP-P or IP-AP controls whether a write operation changes the magnetization state of the free layer 922 from an AP to a P state, or vice versa. 
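The bit cell behavior described for Figure 13 (and, identically, for Figure 8 above) can be summarized in a small behavioral model: a write sets the free-layer state according to the current direction, and a read senses the state through the state-dependent junction resistance. The Python sketch below is illustrative only; the class structure, resistance values, and voltages are hypothetical and are not taken from this disclosure.

# Illustrative behavioral model of the MRAM bit cell described above: the
# direction of the write current sets the free-layer state (P or AP), and
# a read senses the state through the junction resistance. All numeric
# values are hypothetical.

class MtjBitCell:
    R_P = 5_000.0    # parallel-state resistance, ohms (hypothetical)
    R_AP = 12_500.0  # antiparallel-state resistance, ohms (hypothetical)

    def __init__(self):
        self.state = "P"  # magnetization state of the free layer

    def write(self, current_direction):
        """Write by current direction: "P_to_AP" or "AP_to_P"."""
        self.state = "AP" if current_direction == "P_to_AP" else "P"

    def read(self, v_bl, v_s=0.0):
        """Read: the sensed current depends on the state-dependent
        resistance, as described for the read operation in the text."""
        r = self.R_AP if self.state == "AP" else self.R_P
        return (v_bl - v_s) / r  # current in amperes

cell = MtjBitCell()
cell.write("P_to_AP")
print(cell.state, cell.read(v_bl=0.1))  # AP state reads a lower current

Running the example writes the AP state and then reads a lower current than the P state would produce, which is the resistance contrast a sense amplifier compares against a reference.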
During a read operation, the amount of current IAP-P or IP-AP is controlled by the resistance of the MTJ device 900, which depends on its magnetic state AP or P.[0063] MTJ devices with reduced or avoided metal redeposition from etching and/or over-etching, including MTJ devices having a bottom electrode with a width less than the MTJ stack pillar to avoid or reduce etching of the bottom electrode during over-etching of the MTJ device, and/or having a seed layer embedded in the bottom electrode to reduce the height of the MTJ stack pillar to reduce the amount of metal material that can be over-etched, may be provided in or integrated into any processor-based device. Examples, without limitation, include a set top box, an entertainment unit, a navigation device, a communications device, a fixed location data unit, a mobile location data unit, a mobile phone, a cellular phone, a smart phone, a tablet, a phablet, a computer, a portable computer, a desktop computer, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a digital video player, a video player, a digital video disc (DVD) player, a portable digital video player, and an automobile.[0064] In this regard, Figure 14 illustrates an example of a processor-based system 1400 that can include MTJ devices 1402 with reduced or avoided metal redeposition from etching and/or over-etching, including MTJ devices having a bottom electrode with a width less than the MTJ stack pillar to avoid or reduce etching of the bottom electrode during over-etching of the MTJ device, and/or having a seed layer embedded in the bottom electrode to reduce the height of the MTJ stack pillar to reduce the amount of metal material that can be over-etched. These MTJ devices 1402 can include the MTJ devices 400(1)-400(2), 900(1)-900(2) in Figures 4 and 9, respectively, as non-limiting examples.[0065] In this example, the processor-based system 1400 is provided in an IC 1404. The IC 1404 may be included in or provided as a system-on-a-chip (SoC) 1406. The processor-based system 1400 includes a CPU 1408 that includes one or more processors 1410. The CPU 1408 may have a cache memory 1412 coupled to the processor(s) 1410 for rapid access to temporarily stored data. The cache memory 1412 may include the MTJ devices 1402 for providing memory bit cells for storage of data. The CPU 1408 is coupled to a system bus 1414 and can intercouple master and slave devices included in the processor-based system 1400. As is well known, the CPU 1408 communicates with these other devices by exchanging address, control, and data information over the system bus 1414. Although not illustrated in Figure 14, multiple system buses 1414 could be provided, wherein each system bus 1414 constitutes a different fabric. For example, the CPU 1408 can communicate bus transaction requests to a memory system 1418 as an example of a slave device. The memory system 1418 may include a memory array 1420 that includes memory bit cells 1422 that include the MTJ devices 1402 as an example.[0066] Other master and slave devices can be connected to the system bus 1414. As illustrated in Figure 14, these devices can include the memory system 1418, and one or more input devices 1424, which can include the MTJ devices 1402. The input device(s) 1424 can include any type of input device, including but not limited to input keys, switches, voice processors, etc. 
These other devices can also include one or more output devices 1426, and one or more network interface devices 1428, both of which can include the MTJ devices 1402 as an example. The output device(s) 1426 can include any type of output device, including but not limited to audio, video, other visual indicators, etc. These other devices can also include one or more display controllers 1430 as examples. The network interface device(s) 1428 can be any devices configured to allow exchange of data to and from a network 1432. The network 1432 can be any type of network, including but not limited to a wired or wireless network, a private or public network, a local area network (LAN), a wireless local area network (WLAN), a wide area network (WAN), a BLUETOOTH™ network, and the Internet. The network interface device(s) 1428 can be configured to support any type of communications protocol desired.[0067] The CPU 1408 may also be configured to access the display controller(s) 1430 over the system bus 1414 to control information sent to one or more displays 1434. The display controller(s) 1430 sends information to the display(s) 1434 to be displayed via one or more video processors 1436, which process the information to be displayed into a format suitable for the display(s) 1434. The video processor(s) 1436 can include the MTJ devices 1402 as an example. The display(s) 1434 can include any type of display, including but not limited to a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, etc.[0068] Those of skill in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithms described in connection with the aspects disclosed herein may be implemented as electronic hardware, instructions stored in memory or in another computer-readable medium and executed by a processor or other processing device, or combinations of both. The master devices and slave devices described herein may be employed in any circuit, hardware component, integrated circuit (IC), or IC chip, as examples. Memory disclosed herein may be any type and size of memory and may be configured to store any type of information desired. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. How such functionality is implemented depends upon the particular application, design choices, and/or design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.[0069] The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. 
A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.[0070] The aspects disclosed herein may be embodied in hardware and in instructions that are stored in hardware, and may reside, for example, in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer readable medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a remote station. In the alternative, the processor and the storage medium may reside as discrete components in a remote station, base station, or server.[0071] It is also noted that the operational steps described in any of the exemplary aspects herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary aspects may be combined. It is to be understood that the operational steps illustrated in the flow chart diagrams may be subject to numerous different modifications as will be readily apparent to one of skill in the art. Those of skill in the art will also understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.[0072] The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. |
Methods of forming a microelectronic packaging structure and associated structures formed thereby are described. Those methods and structures may include forming a package structure comprising a discrete antenna disposed on a back side of a device, wherein the discrete antenna comprises an antenna substrate and a through antenna substrate via vertically disposed through the antenna substrate. A through device substrate via that is vertically disposed within the device is coupled with the through antenna substrate via, and a package substrate is coupled with an active side of the device. |
1. A method of forming a package structure, comprising:
forming a discrete antenna on a back side of a device, wherein the discrete antenna comprises an antenna substrate;
forming a through hole through the antenna substrate, wherein the through hole through the antenna substrate is vertically disposed to pass through the antenna substrate;
coupling the through hole through the antenna substrate to a through hole through a substrate vertically disposed in the device; and
coupling the device to a package substrate.
2. The method of claim 1, further comprising forming a radiating element that is vertically coupled to the through hole of the antenna substrate and disposed on a top portion of the discrete antenna.
3. The method of claim 1, wherein said antenna substrate comprises alternating layers of electrically conductive material and dielectric material.
4. The method of claim 2, wherein said radiating element comprises alternating layers of electrically conductive material and dielectric material.
5. The method of claim 1, wherein said antenna substrate comprises at least one of glass, undoped silicon, and liquid crystal polymer.
6. The method of claim 1, wherein said through hole through the antenna substrate is coupled to said through hole through said substrate by one of a conductive structure and a metal to metal bond.
7. The method of claim 1, wherein said antenna substrate comprises a frequency of at least about 30 GHz.
8. The method of claim 1, wherein said through hole through said antenna substrate coupled to said through hole through said substrate is capable of propagating a millimeter wave signal.
9. The method of claim 1, further comprising forming a grounded antenna contact on a bottom portion of said discrete antenna.
10. The method of claim 9, wherein said grounded antenna contact is coupled to a through-substrate via that is vertically disposed in said device.
11. The method of claim 1, wherein said device comprises a system on a chip including a millimeter wave radio.
12. The method of claim 1, wherein an active face of the device is coupled to the package substrate by one of a direct metal to metal bond and solder balls.
13. The method of claim 1, wherein said package substrate comprises a multilayer package substrate.
14. The method of claim 1, wherein the package substrate comprises a BBUL package substrate, and wherein the device is partially embedded within the BBUL package substrate.
15. The method of claim 1, wherein said discrete antenna comprises physical dimensions that are less than a range of frequencies within which the device is operable.
16. The method of claim 10, wherein said grounded through-substrate via is adjacent to said through hole through said substrate.
17. The method of claim 1, further comprising forming a plurality of discrete antennas on said device.
18. The method of claim 1, wherein a second device is laminated on said device adjacent to said discrete antenna.
19. The method of claim 18, further comprising forming a radiation shielding layer to surround said device and said second device.
20. The method of claim 18, wherein said second device comprises a memory device.
21. A package structure comprising:
a discrete antenna disposed on a back side of a first device, wherein the discrete antenna includes an antenna substrate;
a through hole passing through the antenna substrate, wherein the through hole passing through the antenna substrate is vertically disposed to pass through the antenna substrate;
a through hole passing through the substrate 
vertically disposed in the first device and coupled to the through hole passing through the antenna substrate; and
a package substrate coupled to an active surface of the first device.
22. The package structure of claim 21, further comprising a radiating element vertically coupled to said through hole of said antenna substrate and disposed on a top portion of said discrete antenna.
23. The package structure of claim 21, wherein said antenna substrate comprises at least one of glass, undoped silicon, and liquid crystal polymer.
24. The package structure of claim 21, wherein said antenna substrate comprises alternating layers of a conductive material and a dielectric material.
25. The package structure of claim 22, wherein said radiating element comprises alternating layers of a conductive material and a dielectric material.
26. The package structure of claim 21, wherein said through hole passing through the antenna substrate is coupled to said through hole through said substrate by one of a conductive structure and a metal to metal bond.
27. The package structure of claim 21, wherein said antenna substrate comprises a frequency of at least about 30 GHz.
28. The package structure of claim 26, wherein said through hole passing through said antenna substrate coupled to said through hole of said substrate is capable of propagating a millimeter wave signal.
29. The package structure of claim 21, further comprising a grounded antenna contact disposed on a bottom portion of said discrete antenna.
30. The package structure of claim 29, wherein said grounded antenna contact is coupled to a through-substrate via that is vertically disposed in said first device.
31. The package structure of claim 21, wherein said first device comprises a system on a chip including a millimeter wave radio.
32. The package structure of claim 21, wherein an active side of said first device is coupled to said package substrate by one of a direct metal to metal bond and solder bumps.
33. The package structure of claim 21, wherein said package substrate comprises a multilayer package substrate.
34. The package structure of claim 21, wherein said package substrate comprises a BBUL package substrate, and wherein said first device is partially embedded within said BBUL package substrate.
35. The package structure of claim 21, wherein said discrete antenna comprises physical dimensions that are less than a range of frequencies within which the device is operable.
36. The package structure of claim 30, wherein said grounded through-substrate via is adjacent to said through hole through said substrate.
37. The package structure of claim 21, further comprising a plurality of discrete antennas disposed on said first device.
38. The package structure of claim 21, wherein a second device is laminated on said first device.
39. The package structure of claim 38, further comprising a radiation shielding layer surrounding said first device and said second device.
40. The package structure of claim 38, wherein said second device comprises a memory device.
41. A package structure comprising:
a discrete antenna disposed on a back side of a die, wherein the discrete antenna includes an antenna substrate, the die including a radio;
a radiating element disposed horizontally on a top portion of the discrete antenna;
a through hole passing through the antenna substrate, wherein the through hole passing through the antenna substrate is vertically disposed to pass through the antenna substrate and coupled to the radiating element;
a through hole passing through the substrate vertically disposed in the die and coupled to the through hole passing 
through the antenna substrate; and
a package substrate coupled to the active face of the die.
42. The package structure of claim 41, further comprising:
a bus communicatively coupled to the package structure; and
an eDRAM communicatively coupled to the bus.
43. The package of claim 41 wherein said die comprises a system on a chip.
44. The package structure of claim 41 wherein said radio comprises a millimeter wave radio.
45. The package of claim 41 wherein said antenna substrate supports a frequency of at least about 30 GHz.
46. A package structure according to claim 41, wherein said through hole of said antenna substrate, coupled to said through hole of said substrate, is capable of emitting a millimeter wave signal. |
Package structure including discrete antennas assembled on the device

Background of the invention

Integrating millimeter-wave radios operating at 30 GHz or higher on the platform enables wireless data transfer between devices or between chips. Successfully transferring data between devices/chips requires one or more package-level integrated antennas to function as interfaces. Applications such as very short-range chip-to-chip communications and post-silicon verification of system on a chip (SoC)/central processing unit (CPU) devices using wireless debug ports may suffer from the routing loss associated with traditional/prior art package substrate/antenna array designs and from the loss of usable package area.

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the invention are described, by way of illustration and not limitation, with reference to the accompanying drawings, in which:
Figures 1a-1d illustrate structures in accordance with various embodiments.
Figure 2 shows a flow chart in accordance with various embodiments.
Figure 3 shows a structure in accordance with various embodiments.
Figure 4 shows a system in accordance with various embodiments.

Detailed description

In the following detailed description, reference is made to the accompanying drawings, which show, by way of illustration, specific embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. It will be understood that the various embodiments, although different, are not necessarily mutually exclusive. For example, the particular features, structures, or characteristics described herein in connection with one embodiment can be implemented in other embodiments without departing from the spirit and scope of the embodiments. In addition, it is to be understood that the position or arrangement of the individual elements in each of the disclosed embodiments can be modified without departing from the spirit and scope of the embodiments. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of the embodiments is defined only by the appended claims, along with the full scope of equivalents to which the appended claims are entitled. In the figures, the same reference numerals may refer to the same or similar elements throughout the several views.

A method of forming and using a microelectronic package structure, such as forming a package structure including discrete antennas disposed on a top surface of a microelectronic device, is described. The methods and structures can include forming a package structure including discrete antennas disposed on a back side of the device, wherein the discrete antennas include an antenna substrate and a through hole disposed vertically through the antenna substrate. A through-substrate via disposed vertically within the device is coupled to the via through the antenna substrate, and the package substrate can be coupled to the active side of the device. The package structure of the various embodiments disclosed herein allows for very short-range transmit applications using a discrete single antenna.

Figures 1a-1d illustrate an embodiment of a package structure including at least one discrete antenna disposed on a device. In one embodiment, package structure 100 includes at least one discrete antenna 102 (Fig. 1a). The discrete antenna 102 includes an antenna substrate 104, which in some embodiments may comprise a glass material.
In other embodiments, the antenna substrate 104 may comprise at least one of a liquid crystal polymer, an organic material, a low temperature co-fired ceramic, alumina, undoped silicon, and any high performance, millimeter wave substrate, depending on the specific application. In one embodiment, antenna substrate 104 supports frequencies of approximately 30 GHz and above. In one embodiment, the antenna substrate 104 can comprise alternating layers of electrically conductive material and dielectric material. In one embodiment, the discrete antennas 102 can include a high-k dielectric material that can be used to reduce the size of the discrete antennas 102 in some cases. In one embodiment, the discrete antenna 102 can include a radiating element 106 and a via 108 through the antenna substrate. In one embodiment, the radiating element 106 can include multiple levels of metal that can be capacitively coupled to each other (e.g., the radiating element can include multiple metal layers separated by a dielectric material) to enhance the frequency bandwidth of the discrete antenna 102. In one embodiment, the radiating element 106 can be disposed horizontally at a top portion of the antenna substrate 104 and can be vertically coupled to a through hole 108 that passes through the antenna substrate. In one embodiment, the discrete antennas 102 can include dimensions that can be less than about 2 mm in the width direction, less than about 2 mm in the length direction, and less than about 0.4 mm in the height direction. The size of the discrete antennas 102 can vary depending on the particular application. In one embodiment, the physical size of the antenna substrate 104 can be much smaller than the wavelength range of the frequencies within which the device/application can operate. In one embodiment, the vias 108 through the antenna substrate may not be physically coupled to the radiating element 106, wherein the millimeter wave signal may be electromagnetically coupled between the radiating element 106 and the via 116 passing through the substrate.

The through holes 108 that pass through the antenna substrate may be vertically disposed within the antenna substrate 104. The antenna contact 110 can be coupled to the via 108 through the antenna substrate and can be disposed on the bottom portion of the antenna substrate 104. The antenna conductive structure 112 can be coupled to the antenna contact 110. Device contacts 114, which may include a redistribution layer (RDL) 114, may be coupled to antenna conductive structure 112. Device contacts 114 may be disposed on the back side of device 118. In one embodiment, device 118 may include a system on a chip (SoC) device including a radio 119, such as a millimeter wave radio, and in other embodiments may include any type of device suitable for a particular application.

Vias that may include through-substrate vias (TSVs) 116 through the device substrate may be coupled to device contacts 114 and may be disposed vertically within device/device substrate 118. In one embodiment, the vias 116 through the substrate may be lined with an insulating material 121 such as, for example, silicon dioxide (Fig. 1d, depicting a portion of the device 118 including the TSVs 116). Through-substrate vias 116 lined with insulator 121 may be disposed through device material 135, which may in some cases include silicon substrate material 135. In some embodiments, devices 118 can exhibit losses of less than 1 dB. For example, device material 135 may be insulated from device contacts 114 and active layer/face 120 of device 118 by an insulating material 137, such as an oxide material.
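As an editorial illustration (not part of the original disclosure), the scale of these dimensions can be checked against the operating wavelength. At the stated lower bound of the millimeter-wave band, the free-space wavelength is

    $\lambda_0 = c/f = (3\times 10^{8}\ \mathrm{m/s})/(30\times 10^{9}\ \mathrm{Hz}) = 10\ \mathrm{mm}$,

so an antenna of roughly 2 mm is on the order of $\lambda_0/5$. In a dielectric with relative permittivity $\varepsilon_r$, the guided wavelength shrinks to $\lambda_g = \lambda_0/\sqrt{\varepsilon_r}$ (for example, $\varepsilon_r = 25$ gives $\lambda_g = 2\ \mathrm{mm}$ at 30 GHz), which is consistent with the statement above that a high-k dielectric material can be used to reduce the size of the discrete antennas 102.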
Referring back to FIG. 1a, a via 116 through the substrate can be electrically coupled and physically coupled to the via 108 through the antenna substrate (with the antenna contact 110, the conductive structure 112, and the device contact 114 coupled therebetween). The vias 108 through the antenna substrate, coupled to the vias 116 through the substrate, can conduct signals from the discrete antennas 102 to the device 118. In another embodiment, the vias 108 through the antenna substrate can be coupled to the vias 116 through the substrate by one of a conductive structure and a metal to metal bond. In one embodiment, the discrete antenna 102 can include a high performance millimeter wave antenna substrate 104 such as glass. The millimeter wave signal emitted/propagated from the radiating element 106 in/on the antenna substrate 104 can be transmitted/propagated between the discrete antenna 102 and the device 118 through the coupling between the through hole 108 passing through the antenna substrate and the through hole 116 passing through the substrate.

The grounded antenna contact 111 may be disposed at a bottom portion of the antenna substrate 104 adjacent to the antenna contact 110. The grounded antenna conductive structure 113 can be coupled to the grounded antenna contact 111. The grounded device contact 115 can be coupled to the grounded antenna conductive structure 113. The grounded device contact 115 can be disposed on the back side of device 118. A via 117, which may include a through-substrate via passing through the device substrate, may be coupled to the grounded device contact 115 and may be disposed vertically within the device 118. The grounded via 117 through the device substrate can be adjacent to the signal via 116 through the substrate and can provide a ground reference to the discrete antenna 102.

In one embodiment, a second discrete antenna 102' can be disposed on device 118 and can be adjacent to discrete antenna 102. The second discrete antenna 102' includes an antenna substrate 104' and may include materials similar to the antenna substrate 104. The second discrete antenna 102' can include a radiating element 106' coupled to a through hole 108' passing through the antenna substrate, an antenna contact 110' coupled to the through hole 108' passing through the antenna substrate, and an antenna conductive structure 112' coupled to the antenna contact 110'. Device contact 114' can be coupled to antenna conductive structure 112'. Device contacts 114' can be disposed on the back side of device 118. The vias 116' that pass through the device substrate can be coupled to the device contacts 114' and can be disposed vertically within the device 118. The vias 116' that pass through the device substrate can be electrically coupled and physically coupled to the vias 108' that pass through the antenna substrate.

The grounded antenna contact 111' may be disposed on the bottom portion of the antenna substrate 104' adjacent to the antenna contact 110'. The grounded antenna conductive structure 113' can be coupled to the grounded antenna contact 111'. The grounded device contact 115' can be coupled to the grounded antenna conductive structure 113'. The grounded device contact 115' can be disposed on the back side of device 118. A via 117', which may include a grounded through-substrate via passing through the device substrate, can be coupled to the grounded device contact 115' and can be disposed vertically within the device 118. The grounded via 117' through the device substrate can be adjacent to the signal via 116' through the substrate and can provide a ground reference to the second discrete antenna 102'.
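As a purely illustrative first-order model (editorial, not part of the original disclosure), a signal via flanked or surrounded by grounded vias can be approximated as a short coaxial section, whose characteristic impedance may be estimated as

    $Z_0 \approx \frac{60}{\sqrt{\varepsilon_r}} \ln\!\left(\frac{b}{a}\right)\ \Omega$,

where $a$ is the signal-via radius, $b$ is the effective radius of the surrounding ground return, and $\varepsilon_r$ is the relative permittivity of the intervening substrate. With hypothetical values $a = 10\ \mu\mathrm{m}$, $b = 50\ \mu\mathrm{m}$, and $\varepsilon_r = 4$, this gives $Z_0 \approx 48\ \Omega$, suggesting how such ground/signal via arrangements can present a well-controlled impedance to the millimeter wave signal. The actual via geometry here is not truly coaxial, so this is only a rough sizing aid.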
The discrete antennas 102, 102' can be assembled/coupled to the back side of the device 118. In one embodiment, the millimeter wave signal induced between the device and the discrete antennas 102, 102' by the radiating elements 106, 106' can be carried through the series connection between the through-substrate vias 116, 116' and the vias 108, 108' passing through the antenna substrate. Additionally, depending on the particular application, each of the signal vias (which may include the series-connected through-substrate vias 116, 116' and vias 108, 108' passing through the antenna substrate) may be surrounded by one or more grounded through-substrate vias 117, 117'. The vias 117, 117' that are grounded through the substrate serve as return paths for millimeter wave signals from the discrete antennas 102, 102'.

The discrete antennas 102, 102' exhibit significantly improved electrical characteristics compared to antennas implemented within the package substrate. In addition, the vertical implementation of the TSV coupled to the vias that pass vertically through the antenna substrate frees up, for example, the packaging space required for conventional CPU signal routing, thus improving the overall compactness of the package structure 100.

In one embodiment, the active face/layer 120 of device 118 can be coupled to substrate 126 by solder balls/interconnects 122. In another embodiment, the active face 120 of the device 118 can be coupled to the substrate 126 by direct metal to metal bonding. In one embodiment, package structure 100 can include a 3D package structure 100. In one embodiment, the package structure 100 can include a portion of a coreless, bumpless build-up layer (BBUL) package structure 100. In another embodiment, package structure 100 can include any suitable type of package structure that provides electrical communication between a microelectronic device, such as device 102, 102', 102", and a next level component (e.g., a circuit board) to which package structure 100 can be coupled. In another embodiment, the package structure 100 herein can include any suitable type of package structure that provides electrical communication between a die and an upper integrated circuit (IC) package coupled to a lower IC package.

The substrate 126 of the various embodiments herein can include a multilayer substrate 126 comprising alternating layers of dielectric material and metal disposed around a core layer (dielectric or metal core). In another embodiment, substrate 126 can include a coreless multilayer substrate 126. Other types of substrates as well as substrate materials can also be used with the disclosed embodiments (e.g., ceramic, sapphire, glass, etc.).

In one embodiment, device package structure 100 includes device 118, including millimeter wave radio 119, that can be flip-chip mounted on multilayer package substrate 126.
In another embodiment, a plurality of discrete chip antennas 102 may be formed on/coupled to device 118, wherein the number of discrete antennas 102 coupled to device 118 may depend on the particular design requirements. The discrete antennas 102 of the various embodiments herein occupy less area on the package substrate 126 and exhibit significantly reduced signal loss. In addition, embodiments require less stringent signal isolation schemes, resulting in shrinking package footprints.

FIG. 1b depicts an embodiment in which device 118 (similar to device 118 and associated package 100 components depicted in FIG. 1a) may be partially embedded in a coreless substrate, such as BBUL substrate 127. Interconnect 122 can be disposed within substrate 127 and can be coupled to coreless interconnect structure 124. In one embodiment, package structure 131 can include at least two discrete antennas 102, 102'. An advantage of forming/coupling device 118 and discrete antennas 102, 102' partially embedded in substrate 127 is that the overall Z height of package structure 131 is reduced. In another embodiment, device 118 and antennas 102, 102' may be fully embedded in substrate 127.

Figure 1c depicts an embodiment in which package structure 132 includes two devices 118, 118' (similar to device 118 of Figure 1a and associated package 100 components) stacked on one another. The first device/die 118 can be coupled/disposed to the package substrate 126, which can comprise any type of suitable package substrate 126, and the second device/die 118' can be placed/stacked on the first device 118. The first device 118 can be coupled to the second device 118' through the ground via 117' and the signal via 116', and through the ground interconnect structure 123 and the signal interconnect structure 125. In one embodiment, the discrete antennas 102 (similar to the discrete antennas of FIG. 1a) may include dimensions as small as 1 mm in the width direction and 1 mm in the length direction, and may be stacked on the first device 118 adjacent to the second device 118'. In general, the dimensions of the discrete antennas of the various embodiments comprise a fraction of the minimum wavelength in the frequency range for a particular application/design.

In one embodiment, package structure 132 may include a system on a chip including at least one 3D stacked millimeter wave chip antenna. In some embodiments, a plurality of discrete antennas can be placed on/coupled to the back of the first device 118. In one embodiment, an optional radio frequency interference (RFI) shield 130 can be disposed on or around the stacked devices 118, 118'. In some embodiments, the RFI shield can be used to further isolate the discrete antenna from the rest of the package structure assembly.

Embodiments herein include 3D integration allowing for discrete antenna and package structures in which one or more discrete millimeter wave chip antennas are assembled on a host system/CPU die/device, where the device includes an integrated millimeter wave radio. The antenna can be implemented on a high performance millimeter wave substrate such as, for example, glass, wherein the millimeter wave signal can be coupled between the discrete antenna and the device using vias through the substrate. Embodiments herein support the integration of 3D discrete antennas into applications such as very short range chip-to-chip communications, and post-silicon verification of SoC/CPU chips using, for example, wireless debug ports. Applications such as wireless transmission of antenna signals to logic analyzers, and multiple wireless antenna transmissions between devices (such as between mobile devices and/or between, for example, DVD players and display devices), are enabled herein.
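To give a feel for why such very short-range millimeter wave links are practical, the following minimal sketch (an editorial illustration, not from the original disclosure; Python is used here only for the arithmetic, and the 5 cm distance is a hypothetical value) computes the Friis free-space path loss for a chip-to-chip link at 30 GHz:

    import math

    def free_space_path_loss_db(distance_m, frequency_hz):
        # Friis free-space path loss: FSPL(dB) = 20*log10(4*pi*d*f/c)
        c = 3.0e8  # speed of light, m/s
        return 20.0 * math.log10(4.0 * math.pi * distance_m * frequency_hz / c)

    # Hypothetical link: 5 cm separation at 30 GHz
    print(round(free_space_path_loss_db(0.05, 30e9), 1))  # prints 36.0 (dB)

A loss on the order of 36 dB over 5 cm is readily closed by a low-power millimeter wave radio, whereas the same calculation at 1 m adds roughly another 26 dB, which is one reason the embodiments above emphasize very short-range operation.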
In another embodiment, a method of forming a package structure is depicted in FIG. 2. In step 202, at least one discrete antenna is formed on the back side of the device, wherein the discrete antenna includes an antenna substrate. In step 204, a via is formed through the antenna substrate, wherein the via passing through the antenna substrate is vertically disposed to pass through the antenna substrate. In step 206, the via through the antenna substrate is coupled to a via through the substrate disposed vertically within the device, and in step 208, the device is coupled to the package substrate.

Turning now to Figure 3, an embodiment of a computing system 300 is shown. System 300 includes a number of components disposed on motherboard 310 or other circuit board. The motherboard 310 includes a first side 312 and an opposite second side 314, and various components may be disposed on one or both of the first and second sides 312, 314. In the illustrated embodiment, computing system 300 includes a package structure 340 (which may be similar to, for example, package structure 100 of FIG. 1a) disposed on first side 312 of the motherboard, wherein package structure 340 may include any of the disclosed package structure embodiments.

System 300 can include any type of computing system, such as, for example, a handheld or mobile computing device (e.g., a cellular telephone, a smart phone, a mobile Internet device, a music player, a tablet, a laptop, a nettop computer, etc.). However, the disclosed embodiments are not limited to handheld and other mobile computing devices, and such embodiments may find application in other types of computing systems, such as desktop computers and servers.

Motherboard 310 can include any suitable type of circuit board or other substrate capable of providing electrical communication between one or more of the various components disposed on the board. In one embodiment, for example, motherboard 310 includes a printed circuit board (PCB) that includes a plurality of metal layers that are separated from each other by a layer of dielectric material and that are interconnected by conductive vias. Any one or more of the metal layers, perhaps in combination with other metal layers, may be formed in a desired circuit pattern to route electrical signals between components coupled to the board 310. However, it should be understood that the disclosed embodiments are not limited to the PCBs described above, and further, the motherboard 310 can include any other suitable substrate.

In addition to the package structure 340, one or more additional components can be disposed on one or both sides 312, 314 of the motherboard 310. As an example, as shown, the component 301a can be disposed on the first side 312 of the motherboard 310 and the component 301b can be disposed on the opposite side 314 of the motherboard. Additional components that may be disposed on motherboard 310 include other IC devices (e.g., processing devices, memory devices, signal processing devices, wireless communication devices, graphics controllers and/or drivers, audio processors and/or controllers, etc.),
power delivery components (e.g., voltage regulators and/or other power management devices, power supplies such as batteries, and/or passive devices such as capacitors), and one or more user interface devices (e.g., audio input devices, audio output devices, keypads or other data input devices, touch screen displays, and/or graphical displays), and any combination of these and/or other devices.

In one embodiment, computing system 300 includes a radiation shielding layer. In yet another embodiment, computing system 300 includes a cooling scheme. In still another embodiment, computing system 300 includes an antenna. In a further embodiment, the system 300 can be disposed within a housing or enclosure. Where motherboard 310 is disposed within the housing, certain components of computer system 300, such as a user interface device (such as a display or keypad) and/or a power source (such as a battery), may be electrically coupled with motherboard 310 (and/or the components located on this board) but mechanically coupled to the housing.

FIG. 4 is a schematic diagram of a computer system 400 in accordance with an embodiment. The depicted computer system 400 (also referred to as electronic system 400) can implement/include a package structure in accordance with any of the several disclosed embodiments and their equivalents, as set forth in the present invention. Computer system 400 can be a mobile device such as a netbook computer. Computer system 400 can be a mobile device such as a wireless smart phone. Computer system 400 can be a desktop computer. Computer system 400 can be a handheld reader. Computer system 400 can be integrated in an automobile. Computer system 400 can be integrated in a television.

In one embodiment, electronic system 400 is a computer system including a system bus 420 that electrically couples the various components of electronic system 400. System bus 420 is a single bus or any combination of buses in accordance with various embodiments. Electronic system 400 includes a voltage source 430 that provides electrical energy to integrated circuit 410. In some embodiments, voltage source 430 provides current to integrated circuit 410 via system bus 420.

Integrated circuit 410 is electrically and communicatively coupled to system bus 420 and includes any circuit or combination of circuits in accordance with an embodiment, including the packages/devices of the various embodiments included herein. In one embodiment, integrated circuit 410 includes a processor 412 that can include any type of package structure in accordance with various embodiments herein. As used herein, processor 412 can represent any type of circuit such as, but not limited to, a microprocessor, a microcontroller, a graphics processor, a digital signal processor, or another processor. In one embodiment, processor 412 includes any of the various embodiments of the package structures disclosed herein. In one embodiment, an SRAM embodiment is used in a memory cache of the processor.

Other types of circuits that may be included in integrated circuit 410 are custom circuits or application specific integrated circuits (ASICs), such as communication circuit 414 for use in wireless devices such as cellular phones, smart phones, pagers, portable computers, two-way radios, and similar systems. In one embodiment, processor 412 includes an on-die memory 416, such as a static random access memory (SRAM).
In one embodiment, processor 412 includes an embedded on-die memory 416, such as an embedded dynamic random access memory (eDRAM).

In one embodiment, integrated circuit 410 is complemented by a subsequent integrated circuit 411. In one embodiment, dual integrated circuit 411 includes an embedded on-die memory 417, such as eDRAM. The dual integrated circuit 411 includes an RFIC dual processor 413 and dual communication circuitry 415 and an on-die memory 417, such as an SRAM. Dual communication circuit 415 can be configured for RF processing.

At least one passive device 480 is coupled to the subsequent integrated circuit 411. In one embodiment, electronic system 400 further includes an external memory 440, which in turn may include one or more memory elements suitable for a particular application, such as a main memory 442 in the form of RAM, one or more hard drives 444, and/or one or more drives that handle removable media 446, such as a magnetic disk, a compact disk (CD), a digital versatile disk (DVD), a flash drive, and other removable media known in the art. External memory 440 can also be embedded memory 448. In one embodiment, electronic system 400 also includes a display device 450 and an audio output 460. In one embodiment, electronic system 400 includes an input device such as controller 470, which may be a keyboard, mouse, touch pad, keypad, trackball, game controller, microphone, voice recognition device, or any other input device that enters information into electronic system 400. In one embodiment, input device 470 includes a camera. In one embodiment, input device 470 includes a digital sound recorder. In one embodiment, input device 470 includes a camera and a digital sound recorder.

While the foregoing description has specified certain steps and materials that can be used in the methods of the various embodiments, those skilled in the art will understand that many modifications and alternatives are possible. Therefore, all such modifications, variations and substitutions are intended to be within the spirit and scope of the embodiments as defined by the appended claims. In addition, the figures provided herein show only certain portions of the exemplary microelectronic devices involved in the implementation of the various embodiments and associated package structures. As such, the various embodiments are not limited to the structures described herein. |
Non-volatile memory devices comprising a memory string including a plurality of vertically superimposed diodes. Each of the diodes may be arranged at a different location along a length of an electrode and may be spaced apart from adjacent diodes by a dielectric material. The electrode may electrically couple the diodes of the memory string to one another and to another memory device, such as a MOSFET device. Methods of forming the non-volatile memory devices, as well as intermediate structures, are also disclosed. |
1. A non-volatile memory device comprising:
a plurality of transistors on a substrate, each of the plurality of transistors being electrically coupled to a word line and a bit line;
a plurality of memory strings above the plurality of transistors, each of the plurality of memory strings comprising a plurality of diodes;
an electrode electrically connecting at least two of the plurality of memory strings to at least one of the plurality of transistors, the plurality of diodes being disposed at positions spaced along a length of the electrode;
at least one dielectric barrier material between the plurality of diodes and the electrode; and
a dielectric material positioned between at least two other of the plurality of memory strings and electrically isolated from the at least two other memory strings.
2. The non-volatile memory device of claim 1, wherein said electrode comprises a metal contact pin having at least one of a metal or ceramic material on a sidewall of said metal contact pin.
3. The non-volatile memory device of claim 1, wherein said electrode comprises a phase change material.
4. The non-volatile memory device of claim 1, wherein the at least one dielectric barrier material comprises at least one of an oxide material and a nitride material.
5. The non-volatile memory device of claim 1, wherein each of said plurality of diodes comprises an intrinsic region between oppositely doped regions, said intrinsic region and said oppositely doped regions extending perpendicular to the length of the electrode.
6. The non-volatile memory device of claim 1, wherein the plurality of diodes are aligned in a first direction to form a plurality of columns and aligned in a second direction substantially perpendicular to the first direction to form a plurality of rows.
7. A semiconductor structure comprising:
a plurality of diodes overlying a substrate and vertically superimposed one on top of another to form a plurality of columns, each of the plurality of diodes comprising an intrinsic region between oppositely doped regions, the diodes of adjacent ones of said plurality of columns being mirror images of each other;
a transistor array underlying the plurality of diodes and including a plurality of transistors;
an electrode positioned between at least two of the plurality of columns and electrically connecting a diode of the plurality of diodes to at least one of the plurality of transistors;
at least one dielectric barrier material between the plurality of diodes and the electrode; and
a dielectric material between other ones of the plurality of diodes and electrically isolated from the other diodes.
8. The semiconductor structure of claim 7, wherein each of said plurality of diodes in one of said plurality of columns is substantially horizontally aligned with a respective one of said plurality of diodes in another of said plurality of columns.
9. The semiconductor structure of claim 7, wherein at least one of said oppositely doped regions and said intrinsic region extend along a length of said substrate in a direction substantially perpendicular to said plurality of columns.
10. A method of forming a semiconductor structure, comprising:
forming a plurality of alternating first and second regions to form a cell stack over a base material overlying a transistor array, the transistor array comprising a plurality of transistors electrically coupled to a plurality of cell pins;
removing portions of the first and second regions that are exposed via a mask to form a plurality of first slots therethrough, each of the plurality of first slots overlying one of the plurality of cell pins;
introducing a dopant to the exposed portions of the first regions to form a plurality of first doped regions;
forming a silicide material over each of the plurality of first doped regions;
forming a fill material over the semiconductor structure to fill at least the plurality of first slots;
removing portions of the first and second regions that are exposed via another mask to form a plurality of second slots laterally spaced from each of the plurality of first slots;
introducing a dopant to the exposed portions of the first regions to form a plurality of second doped regions, intrinsic regions of the first regions being disposed between the plurality of second doped regions and the plurality of first doped regions;
forming at least one dielectric barrier material on sidewalls defining the plurality of second slots; and
forming an electrode in each of the plurality of second slots.
11. The method of claim 10, wherein forming a plurality of alternating first and second regions to form a cell stack over the base material overlying the transistor array comprises:
attaching a wafer containing crystalline silicon to the base material; and
separating a portion of the wafer to leave a first semiconducting region overlying the base material.
12. The method of claim 10, wherein removing portions of the first and second regions that are exposed via another mask to form a plurality of second slots comprises:
forming a spacer on a protruding region of the fill material;
forming at least one material over the spacer, the at least one material having a plurality of openings exposing portions of the cell stack between the first slots; and
removing the exposed portions of the cell stack to form the plurality of second slots.
13. The method of claim 12, wherein forming a spacer on the protruding region of the fill material comprises forming the spacer having a width sufficient to overlie the first doped region.
14. The method of claim 10, wherein removing portions of the first and second regions exposed via another mask to form a plurality of second slots comprises removing portions of the first and second regions exposed through the other mask to form the plurality of second slots, each of the plurality of second slots circumscribing at least one of the plurality of first slots.
15. The method of claim 10, further comprising removing portions of the first and second regions to form a plurality of layers in a peripheral region of the cell stack, each of the plurality of layers comprising an exposed surface of one of the plurality of first regions.
16. The method of claim 15, further comprising:
introducing a dopant to the exposed portions of the first regions on the plurality of layers to form a doped material; and
converting at least a portion of the doped material to a silicide material to form a contact.
17. The method of claim 10, further comprising forming at least one of a ceramic material, a conductive material, and a phase change material over a sidewall defining the plurality of second slots.
18. The method of claim 10, further comprising removing a portion of the base material to expose a surface of the plurality of cell pins. |
Method, structure and device for increasing memory density

Cross-reference to related application

The present application claims the benefit of the filing date of U.S. Patent Application Serial No. 12/610,922, filed November 2, 2009, for "METHODS, STRUCTURES AND DEVICES FOR INCREASING MEMORY DENSITY".

Technical field

Embodiments of the present invention relate to methods, structures, and apparatus for increasing memory density, and more particularly to methods for forming multilayer semiconductor structures and to resulting structures and devices incorporating such structures.

Background

Non-volatile memory devices are used to store digital data for computer systems and other electronic devices. Non-volatile memory does not change state when the power applied thereto is removed or fails, and thus non-volatile memory retains the stored data for subsequent retrieval even though the power supply is interrupted. Examples of non-volatile memory cells include magnetic random access memory (MRAM), ferroelectric random access memory (FRAM), phase change random access memory (PCRAM), or resistive random access memory (RRAM).

As the demand for non-volatile memory continues to increase, developers and manufacturers are continually attempting to exceed the capabilities of current technologies to increase the density of non-volatile memory cells. To achieve high storage densities, manufacturers typically focus on scaling semiconductor devices down to submicron sizes. However, conventional non-volatile memory cells utilize a significant amount of real estate on a semiconductor substrate, and thus limit the density of non-volatile memory cells.

There is a need for methods, structures, and devices for increasing density and reliability in non-volatile memory devices.

Summary of the invention

Various embodiments of the present invention are directed to embodiments of a non-volatile memory device, a semiconductor structure, and a method for forming a semiconductor structure including a plurality of diodes. In at least one embodiment, the present invention comprises a non-volatile memory device comprising: a plurality of transistors disposed on a substrate; a plurality of memory strings disposed over the plurality of transistors, each memory string comprising a plurality of diodes; and an electrode electrically connecting at least two of the plurality of memory strings to at least one node, the plurality of diodes being disposed at locations spaced along a length of the electrode. Each of the plurality of transistors can be electrically coupled to a word line and a bit line, the word line and the bit line intersecting each other at the at least one node.

In other embodiments, the invention includes a semiconductor structure including a plurality of diodes overlying a substrate and vertically stacked one on another to form a plurality of columns, each of the plurality of diodes comprising an intrinsic region disposed between oppositely doped regions. The plurality of diodes in adjacent columns may be mirror images of each other.
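As a purely illustrative calculation (editorial, not part of the original disclosure), vertical stacking increases bit density per unit of substrate area roughly in proportion to the number of stacked devices. Assuming, for illustration only, that each diode of a memory string corresponds to one storable bit, a string of $N$ vertically superimposed diodes stores $N$ bits in approximately the footprint $A$ that a planar cell would use for one, i.e., a density on the order of $N/A$ bits per unit area; for example, $N = 3$ stacked diodes yield about three times the areal density of a single-level arrangement, before accounting for the overhead of the electrodes and the dielectric between strings.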
In still other embodiments, the present invention includes a method of forming a semiconductor structure, including: forming a plurality of alternating first and second regions to form a cell stack over a base material overlying a transistor array, the transistor array including a plurality of transistors electrically coupled to a plurality of cell pins; removing portions of the first and second regions that are exposed via a mask to form a plurality of first slots therethrough, each of the plurality of first slots overlying one of the plurality of cell pins; introducing a dopant to the exposed portions of the first regions to form a plurality of first doped regions; forming a silicide material over each of the plurality of first doped regions; forming a fill material over the semiconductor structure to fill at least the plurality of first slots; removing portions of the first and second regions exposed through another mask to form a plurality of second slots laterally spaced from each of the plurality of first slots; and introducing a dopant to the exposed portions of the first regions to form a plurality of second doped regions, intrinsic regions of the first regions being disposed between the plurality of second doped regions and the plurality of first doped regions.

DRAWINGS

Figure 1A is a bottom plan view illustrating one embodiment of a memory device of the present invention;
Figure 1B is a partial cross-sectional view of the memory device shown in Figure 1A taken along section line 1-1;
Figure 2 is a circuit diagram of the memory device of Figures 1A and 1B;
Figures 3A through 32B are bottom plan views and cross-sectional views illustrating an embodiment of the method of the present invention for forming the memory device illustrated in Figures 1A and 1B;
Figure 33A is a bottom plan view illustrating another embodiment of the memory device of the present invention;
Figure 33B is a partial cross-sectional view of the memory device shown in Figure 33A taken along section line 33-33;
Figures 34 through 36 are partial cross-sectional views of a semiconductor structure and illustrate an embodiment of the method of the present invention for forming the memory device illustrated in Figures 33A and 33B;
Figure 37A is a bottom plan view illustrating yet another embodiment of the memory device of the present invention;
Figure 37B is a partial cross-sectional view of the memory device shown in Figure 37A taken along section line 37-37;
Figures 38 and 39 are partial cross-sectional views of a semiconductor structure and illustrate an embodiment of the method of the present invention for forming the memory device shown in Figures 37A and 37B.

Detailed description

The illustrations presented herein are not intended to be actual views of any particular device or system, but are merely idealized representations used to describe the present invention. In addition, elements that are common to the various figures may be labeled the same. It will be appreciated that, for simplicity and clarity of illustration, the reference numbers of elements that are common between the various figures are not necessarily shown in each of the figures.

FIGS. 1A and 1B show an embodiment of a portion of a memory device 100 of the present invention. FIG. 1B is a cross-sectional view of the memory device 100 shown in FIG. 1A taken along section line 1-1 thereof.
Memory device 100 includes at least one memory string 102 in which a plurality of diodes 104 are electrically coupled to an electrode 110 and spaced along the length of the electrode 110. Memory device 100 can be disposed on a conventional multi-gate memory device (e.g., conventional MOSFET array 106) formed on substrate 108. As used herein, the term "substrate" means and includes a structure comprising a semiconductor-type material including, for example, silicon, germanium, gallium arsenide, indium phosphide, and other III-V or II-VI type semiconductor materials. The term includes not only conventional substrates but also other bulk semiconductor substrates such as, by way of non-limiting example, silicon-on-insulator (SOI) type substrates, silicon-on-sapphire (SOS) type substrates, and epitaxial silicon supported by a base material. The semiconductor-type material may be doped or undoped. When reference is made to a "substrate" in the following description, elements or components of an integrated circuit or device may have been at least partially formed in or over a surface of substrate 108 using previous process actions.

The active elements of memory device 100 (i.e., the elements through which charge carriers travel) or the materials used to form such active elements are drawn with cross-hatching to simplify the various figures herein. Each memory string 102 can be isolated from adjacent memory strings 102 by a dielectric inter-pillar 196. The dielectric inter-pillars 196 can include, for example, an oxide material or a nitride material. Diodes 104 can be arranged in a predetermined number of rows 114 and columns 116, with rows 114 and columns 116 aligned in two intersecting planes. In each of the columns 116, the diodes 104 may be vertically stacked in a first direction Y that is substantially perpendicular to the major plane of the substrate 108. Each of the diodes 104 in a column 116 may be separated from adjacent diodes 104 by an interlayer dielectric material 118. By way of non-limiting example, the interlayer dielectric material 118 can comprise an oxide material, such as silicon dioxide. The diodes 104 in each of the rows 114 can be aligned with the diodes 104 of laterally adjacent rows 114 in a second direction X that is substantially parallel to the major plane of the substrate 108. Although the memory device 100 is shown with three (3) rows 114, any number (n) of rows 114 can be formed.

Each electrode 110 effectively forms a node at a point of contact between the diodes 104 of each of the memory strings 102. In some embodiments, contact pins 132 can be electrically coupled to underlying MOSFET array 106 via cell pins 158 (shown in phantom) to provide electrical communication between diodes 104 of memory string 102 and MOSFET array 106, as described in further detail below.

Memory device 100 can include contact structure 134 positioned over isolation region 136 in substrate 108. Contact structure 134 can provide electrical communication between memory device 100 and a conductive element (e.g., a circuit board) of a higher-level substrate (not shown). Contact structure 134 can include semiconductive paths 138, contacts 140, wire interconnects 142, and metal lines 144.
Each of the semiconducting paths 138 extends between at least one of the electrodes 110 and one of the contacts 140 of the contact structure 134. Optionally, a first doped region 122 and a barrier material 128 may be disposed between each of the semiconducting paths 138 and the electrode 110. Each of the wire interconnects 142 can extend between one of the contacts 140 and a metal line 144 to electrically couple each of the memory strings 102 to the metal lines 144. As shown in FIG. 1A, portions of the metal lines 144 may be exposed via an optional passivation material 146.

Each of the diodes 104 can include an intrinsic region 120 disposed between a first doped region 122 and a second doped region 124, a silicide region 126 adjacent the first doped region 122, and a barrier material 128 adjacent the second doped region 124. The intrinsic region 120, the first doped region 122, the second doped region 124, the silicide region 126, and the barrier material 128 may extend in a direction perpendicular to the major plane of the substrate 108. The intrinsic region 120 can comprise a semiconductor material such as polysilicon, germanium, gallium arsenide or silicon germanium. Each of the first doped region 122 and the second doped region 124 may comprise polysilicon doped with an n-type dopant (e.g., phosphorus or arsenic) (i.e., n-type polysilicon) or polysilicon doped with a p-type dopant (e.g., boron or aluminum) (i.e., p-type polysilicon). The first doped region 122 and the second doped region 124 may be oppositely doped. As used herein, the term "oppositely doped" means that one of the first doped region 122 and the second doped region 124 includes excess negative charge carriers (n-type) while the other includes excess positive charge carriers (p-type). For example, the first doped region 122 (i.e., the doped region 122 adjacent the silicide region 126) can include p-type polysilicon and the second doped region 124 (i.e., the doped region 124 adjacent the barrier material 128) can include n-type polysilicon. Barrier material 128 can include, for example, a dielectric material, such as an oxide material or a nitride material. By way of non-limiting example, silicide region 126 may be a transition metal silicide such as cobalt silicide (CoSi2) (which is often referred to as "CoSix"), titanium silicide (TiSi2), tungsten silicide (WSi2), or nickel silicide (NiSi2). The barrier material 128 can be in contact with the electrode 110. The diodes 104 on opposite sides of the electrode 110 may be mirror images of each other, wherein the positions of the intrinsic region 120, the first doped region 122, the second doped region 124, the silicide region 126, and the barrier material 128 are reversed relative to the electrode 110. Each of the electrodes 110 can include, for example, sidewall spacers 130 and a contact pin 132. Each of the sidewall spacers 130 can be disposed over the sidewalls of the interlayer dielectric material 118 and the barrier material 128.

FIG. 2 is a circuit diagram illustrating the memory device 100 illustrated in FIGS. 1A and 1B. Memory device 100 includes a plurality of memory strings 102, each of which is electrically coupled to access lines, for example, a first word line 161 and a second word line 163. The first word line 161 and the second word line 163 can be electrically coupled to a data/sensing line (e.g., bit line 162) and another access line (e.g., another word line 160). The first word line 161 and the second word line 163 provide two different addresses for each of the memory strings 102 such that the memory device 100 can include, for example, two bits per node.
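For context (an editorial illustration, not part of the original disclosure), each diode 104 behaves as a rectifying element, which is what allows a selected memory string to be driven while limiting sneak currents through unselected strings in an arrangement like that of FIG. 2. Its forward conduction can be approximated by the Shockley diode equation

    $I = I_S\left(e^{V/(nV_T)} - 1\right)$,

where $I_S$ is the saturation current, $n$ is the ideality factor (typically between about 1 and 2), and $V_T = kT/q \approx 26\ \mathrm{mV}$ at room temperature. A diode biased below its turn-on voltage conducts orders of magnitude less current than one biased above it, which is what makes diode-based selection practical.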
FIG. 3A is a bottom plan view of semiconductor structure 101 having the conventional MOSFET array 106 shown in FIGS. 1A and 1B formed on substrate 108. To form the semiconductor structure 101, a plurality of cell pins 158 can be aligned in the second direction X and in a third direction Z, the upper surfaces of the cell pins 158 being exposed and laterally adjacent to an electrically insulating material 164. FIG. 3B is a cross-sectional view of the semiconductor structure 101 shown in FIG. 3A taken along section line 3-3. As shown in FIG. 3B, each of the cell pins 158 is positioned laterally adjacent to the electrically insulating material 164 and a nitride cap 166. MOSFET array 106 can include a plurality of transistors, such as MOSFETs 148, each of which includes a gate region 150 overlying oxide region 151 and semiconductor material 153 and positioned between insulating spacers 152. MOSFET devices 148 are each disposed between a source region 154 and a drain region 156 in substrate 108. The cell pins 158 are disposed between the MOSFET devices 148. Each gate region 150 of a MOSFET device 148 can be electrically coupled to a word line 160, and each source region 154 can be electrically coupled to a bit line 162 by polysilicon pins 152. An electrically insulating material 164 (e.g., an oxide material) overlies the portions of the gate regions 150 that are exposed via the cell pins 158 and the surface of substrate 108. Each bit line 162 is covered by, for example, a nitride cap 166. MOSFET array 106 can be formed using conventional methods known in the art and, thus, is not described in detail herein.

As shown in FIG. 4, a base material 168 can be formed over the MOSFET array 106. As a non-limiting example, base material 168 can include a doped or undoped polysilicon material or an oxide material. The base material 168 can be deposited using, for example, a conventional CVD process. A cell stack 170 including a plurality of alternating first regions 172a, 172b, and 172c and second regions 174a, 174b, and 174c may be formed over the base material 168. The cell stack 170 shown in FIG. 4 includes three (3) first regions 172a, 172b, and 172c and three (3) second regions 174a, 174b, and 174c. However, cell stack 170 can be formed to include any number of first regions 172a, 172b, and 172c and second regions 174a, 174b, and 174c based on the desired number of diodes 104 to be formed in each memory string 102. Each of the first regions 172a, 172b, and 172c and the second regions 174a, 174b, and 174c of the cell stack 170 is formed of a material that is selectively etchable with respect to the underlying material. In some embodiments, first regions 172a, 172b, and 172c and second regions 174a, 174b, and 174c can each be formed of a material that is selectively etchable relative to the underlying material. By way of non-limiting example, each of the first regions 172a, 172b, and 172c of cell stack 170 may be formed of a doped or undoped polysilicon material (or germanium, gallium arsenide, or silicon germanium) and may have a thickness of from about 20 nm to about 40 nm. For example, the second regions 174a, 174b, and 174c may be formed of a doped or undoped polysilicon material, and the first regions 172a, 172b, and 172c may be formed of a doped or undoped polysilicon material that is capable of being selectively etched relative to the second regions 174a, 174b, and 174c.
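Using the layer thicknesses given above, a quick editorial calculation (not part of the original disclosure) bounds the height of the cell stack 170; the sketch below, in Python purely for the arithmetic, assumes three first regions of 20 to 40 nm and three second regions of 20 to 50 nm, per the ranges stated in this description:

    def stack_height_nm(levels, first_nm, second_nm):
        # Total height of a cell stack built from `levels` alternating
        # pairs of first and second regions of the given thicknesses (nm).
        return levels * (first_nm + second_nm)

    # Bounds for three pairs, per the thickness ranges stated above
    print(stack_height_nm(3, 20, 20))  # 120 (nm), thinnest case
    print(stack_height_nm(3, 40, 50))  # 270 (nm), thickest case

So even a three-level stack adds only roughly 120 nm to 270 nm of height, suggesting that many diode levels can be stacked without a large vertical penalty.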
As another non-limiting example, the second regions 174a, 174b, and 174c may be formed of a semiconducting material (e.g., a polysilicon material), and the first regions 172a, 172b, and 172c may be formed of a dielectric material (e.g., an oxide material). Each of the second regions 174a, 174b, and 174c of the cell stack 170 can be formed, for example, of doped or undoped polysilicon, an oxide material, a nitride material, or an oxynitride material, and can have a thickness of from about 20 nm to about 50 nm. In some embodiments, the first regions 172a, 172b, and 172c and the second regions 174a, 174b, and 174c may be deposited one above another using a conventional chemical vapor deposition (CVD) process.

In another embodiment, the second regions 174a, 174b, and 174c may be formed of an oxide material, and the first regions 172a, 172b, and 172c may be formed of crystalline silicon. In this embodiment, each of the first regions 172a, 172b, and 172c formed of crystalline silicon can be placed on a dielectric material, such as the base material 168 or one of the second regions 174a, 174b, and 174c, by a process described herein using a modification of the so-called SMART CUT® technique. Such processes are described in detail in, for example, U.S. Patent No. RE 39,484 to Bruel, U.S. Patent No. 6,303,468 to Aspar et al., U.S. Patent No. 6,335,258 to Aspar et al., U.S. Patent No. 6,756,286 to Moriceau et al., U.S. Patent No. 6,809,044 to Aspar et al., and U.S. Patent Application Publication No. 2006/0099776. However, other processes suitable for fabricating semiconductor materials on the surface of logic devices can also be used if sufficiently low process temperatures are maintained. In a conventional embodiment of the SMART CUT® technology, the donor wafer is bonded to the acceptor wafer using a high temperature anneal of about 1000° C. to about 1300° C., which exceeds the process temperatures that the underlying logic devices can tolerate. However, additional plasma activation actions can be integrated into the conventional SMART CUT® fabrication process to reduce the required bonding temperature, as described in detail below.

As shown in FIG. 5, a plurality of ions (e.g., hydrogen or inert gas ions) can be implanted into a donor wafer 600 formed of crystalline silicon to form an implanted region 602. As indicated by directional arrow 604, the plurality of ions can be implanted into donor wafer 600 in an orientation substantially perpendicular to main surface 606 of donor wafer 600 using an ion source (not shown) to form implanted region 602. Implanted region 602 can also be characterized as a transfer region, with an internal boundary 608 of the transfer region being shown in dashed lines in donor wafer 600. As is known in the art, the depth to which the ions are implanted is at least in part a function of the energy with which the ions are implanted into the donor wafer 600. In general, ions implanted with less energy will be implanted at relatively shallow depths, while ions implanted at higher energies will be implanted at relatively deep depths. As is known to those skilled in the art, the internal boundary 608 of the implanted region 602 is substantially parallel to the major surface 606 of the donor wafer 600 and is at a predetermined depth that depends on selected parameters of the ion implantation process.
As a non-limiting example, the ion species and energy may be selected to implant the ions into the donor wafer 600 at an energy that forms the internal boundary 608 at a depth D2 of between about 20 nm and about 50 nm, and more specifically about 30 nm. The internal boundary 608 includes a microbubble or microcavity layer (not shown) comprising the implanted ionic species and provides a weakened structure within the donor wafer 600. Next, the donor wafer 600 can be heat treated at a temperature higher than the temperature at which the implantation was performed, in accordance with the disclosures of the patent documents in the preceding paragraph, to effect crystalline rearrangement in the wafer and coalescence of the microbubbles or microcavities.

An attachment surface (not shown) can be formed by exposing the major surface 606 of the donor wafer 600 to a reactive ion etch (RIE) plasma comprising an inert gas (e.g., argon, oxygen, or nitrogen) to form a plasma activated material. The plasma activated material undergoes an oxidation reaction with the adjacent material of one of the base material 168 or the second regions 174a and 174b; the increased mobility of an ionic species (e.g., hydrogen) formed on the major surface 606 thereof increases the kinetics of the subsequent bonding action. By utilizing a plasma activated material, the wafer bonding process can be performed at a temperature of less than about 400° C. One embodiment of a plasma activated bond is described in U.S. Patent No. 6,180,496.

As shown in FIG. 6, donor wafer 600 is disposed on the surface of base material 168 and can be bonded to base material 168 using an annealing process as described above. The depth of the hydrogen or other ions implanted to the internal boundary 608 of the ion implanted region 602 causes the silicon in the donor wafer 600, after heat treatment, to break along the internal boundary 608 when a shear force is applied substantially parallel to the major surface of the donor wafer 600. After the donor wafer 600 is attached to the semiconductor structure 101, the donor wafer 600 can be cleaved or split at the internal boundary 608 by applying a shear force to the portion of the donor wafer 600 on the side opposing the major surface of the base material 168 (the portion of the donor wafer 600 that is farthest from the semiconductor structure 101). A portion of the donor wafer 600 below the internal boundary 608, having a thickness of, for example, between about 20 nm and about 50 nm, is separated from the donor wafer 600 and remains bonded to the semiconductor structure 101 to form the first region 172a including crystalline silicon, as shown in FIG. 7.

Still referring to FIG. 7, after the implanted region 602 is separated from the donor wafer 600 and bonded over the base material 168 to form the first region 172a, its exposed surface may be undesirably rough. To remedy this defect, the exposed surface of the first region 172a can be smoothed to the desired degree according to techniques known in the art, such as one or more of grinding, wet etching, and chemical mechanical polishing (CMP), in order to facilitate further processing as described below. After the first region 172a is placed on the base material 168, the second region 174a can be formed on the first region 172a as described above. Alternating first regions 172b, 172c and second regions 174b, 174c may then be formed to produce the cell stack 170 as shown in Figure 5, wherein each of the first regions 172b, 172c and second regions 174b, 174c is formed as described above.
Thus, the cell stack 170 can include polysilicon, germanium, gallium arsenide, silicon germanium, or crystalline silicon as the first regions 172a, 172b, and 172c. Referring to FIG. 8, a masking material 176 can be formed over the semiconductor structure 101. The masking material 176 can include a material having an etch selectivity different from that of the second regions 174a, 174b, and 174c and the first regions 172a, 172b, and 172c of the cell stack 170. By way of non-limiting example, the masking material 176 can comprise a nitride material, such as silicon nitride (Si3N4), having a thickness of between about 80 nm and about 200 nm, and more specifically about 120 nm. The mask material 176 can be formed over the cell stack 170 using a conventional chemical vapor deposition (CVD) process or any other process known in the art of semiconductor fabrication. As another non-limiting example, the masking material 176 can comprise a hard mask material, such as amorphous carbon or a metal, and can be formed by a conventional chemical vapor deposition (CVD) process, a conventional physical vapor deposition (PVD) (i.e., sputtering) process, or an electroless plating deposition process. Referring to FIGS. 9A and 9B, portions of each of the mask material 176, the second regions 174a, 174b, and 174c, and the first regions 172a, 172b, and 172c can be removed from the peripheral region 181 of the semiconductor structure 101 to form a plurality of layers 182a, 182b, and 182c. Each of the layers 182a, 182b, and 182c may expose respective portions of the first regions 172a, 172b, and 172c that will later be used to form a semiconducting path (FIG. 1). Referring to FIG. 9B, which is a cross-sectional view of the semiconductor structure 101 shown in FIG. 9A taken along section line 9-9 therein, the semiconductor structure 101 may include a lower layer 182a, an intermediate layer 182b, and an upper layer 182c. As a non-limiting example, the layers 182a, 182b, and 182c can be formed as shown in FIG. 9B1. A first photoresist material 184 can be deposited over the mask material 176 and patterned to expose the surface of the mask material 176 overlying the region of the semiconductor structure 101 where the lower layer 182a (FIG. 9B) will be formed. After the exposed portion of the masking material 176 is removed, the portions of the second region 174c exposed through the masking material 176 can be selectively removed relative to the masking material 176. For example, if the second region 174c includes silicon dioxide, a reactive ion etching (RIE) process using a nitrogen trifluoride (NF3)-based gas, a chlorine (Cl)-based gas, or a bromine (Br)-based gas may be performed to selectively remove the silicon dioxide relative to the mask material 176. As another non-limiting example, if the second region 174c includes a doped or undoped polysilicon material, a wet etch process can be used to selectively remove the doped or undoped polysilicon material with respect to the mask material 176 and the underlying first region 172c. After the portion of the second region 174c is removed, portions of the first region 172c may be exposed through the mask material 176 and the second region 174c. The exposed portion of the first region 172c can then be selectively removed relative to the mask material 176. As a non-limiting example, if the first region 172c includes polysilicon, a reactive ion etching (RIE) process using a tetrafluoromethane (CF4)-based plasma, a hydrogen bromide (HBr)-based plasma, or a hydrogen chloride (HCl)-based plasma may be performed to remove the polysilicon selectively relative to the mask material 176.
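Before continuing with the alternative wet-etch example below, the etch chemistries named in this passage can be collected into a small lookup table. This is an illustrative aid, not part of the patent: the material names and chemistry lists restate the text above, while the `pick_etch` helper and its structure are hypothetical.

```python
# Illustrative lookup of the etch chemistries named in the text above.
# The mapping restates the document; the helper itself is hypothetical.
ETCH_CHEMISTRIES = {
    # material to remove: (process, example chemistries)
    "silicon dioxide": ("RIE", ["NF3-based gas", "Cl-based gas", "Br-based gas"]),
    "polysilicon": ("RIE", ["CF4-based plasma", "HBr-based plasma", "HCl-based plasma"]),
    "doped/undoped polysilicon": ("wet etch", ["chemistry selective to mask and underlying region"]),
}

def pick_etch(material: str) -> str:
    """Return a one-line description of a suitable etch for `material`."""
    process, options = ETCH_CHEMISTRIES[material]
    return f"{process} using {' or '.join(options)}"

print(pick_etch("silicon dioxide"))
# -> RIE using NF3-based gas or Cl-based gas or Br-based gas
```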
As another non-limiting example, if the first region 172c includes a doped or undoped polysilicon material, a wet etch process can be performed to selectively remove the doped or undoped polysilicon material relative to the mask material 176 and the underlying second region 174b. The second regions 174b and 174a and the first region 172b exposed through the mask material 176 can then be removed in the same manner, as indicated by the dashed lines, to form the lowermost layer 182a defined by the surface of the first region 172a. As indicated by the dashed line labeled 186, an additional portion of the first photoresist material 184 can be removed to expose another portion of the mask material 176 overlying the region of the cell stack 170 where the intermediate layer 182b will be formed. Thereafter, the mask material 176, the second regions 174c and 174b, and the portions of the first region 172c exposed through the first photoresist material 184 may be removed, as indicated by the dashed lines, to form the intermediate layer 182b defined by the surface of the first region 172b. Another portion of the first photoresist material 184 overlying the region of the semiconductor structure 101 where the upper layer 182c will be formed may then be removed, as indicated by the dashed line labeled 188. A portion of the second region 174c exposed through the first photoresist material 184 can be removed to form the upper layer 182c defined by the surface of the first region 172c. Alternatively, the plurality of layers 182a, 182b, and 182c may be formed as shown in FIG. 9B2. A first photoresist material 184 can be deposited over the mask material 176 and patterned to expose the surface of the mask material 176 overlying the region of the semiconductor structure 101 where the upper layer 182c will be formed (FIGS. 10A and 10B). A portion of the second region 174c exposed through the masking material 176 may be removed to define the upper layer 182c, using methods such as those described with respect to FIG. 9B1, as indicated by the dashed line labeled 191. A second photoresist material 192 can then be deposited and patterned to expose the remaining portion of the mask material 176 overlying the region of the semiconductor structure 101 where the intermediate layer 182b will be formed. A portion of each of the first region 172c and the second region 174b exposed through the second photoresist material 192 may be removed, as indicated by the dashed line labeled 193, to form the intermediate layer 182b. A third photoresist material 194 can then be deposited and patterned to expose the remaining portion of the cell stack 170 that will form the lower layer 182a. A portion of each of the first region 172b and the second region 174a exposed through the third photoresist material 194 can be removed, as indicated by the dashed line labeled 195, to form the lower layer 182a. Referring to FIGS. 10A and 10B, contacts 140 may be formed on the semiconductor structure 101 after the plurality of layers 182a, 182b, and 182c have been formed. The exposed portions of the first regions 172a, 172b, and 172c on the layers 182a, 182b, and 182c may be doped with a desired concentration of dopant to form the doped material 121, and portions of the doped material may then be converted to silicide to form the contacts 140. The exposed portions of the first regions 172a, 172b, and 172c can be doped using conventional methods, such as an ion implantation process or a high-temperature diffusion process.
By way of non-limiting example, if the first regions 172a, 172b, and 172c include polysilicon, the semiconductor structure 101 can be exposed to a boron- or aluminum-containing plasma to form p-type polysilicon. As another example, a thin film of p-type material (not shown) may be deposited over the surface of the semiconductor structure 101 and thermally annealed, during which anneal the p-type dopant migrates into the first regions 172a, 172b, and 172c to form p-type polysilicon. Portions of the doped material of the first regions 172a, 172b, and 172c may then be converted to silicide to form the contacts 140. In some embodiments, a transition metal (e.g., cobalt, titanium, tungsten, or nickel) can be deposited over the doped material and heated to a temperature sufficient to cause the metal to react with the doped material 121, thereby forming the silicide regions 126 and the contacts 140. The remaining portion of the doped material 121 can be disposed between the silicide regions 126 and the first regions 172a, 172b, and 172c. For example, if the doped material comprises p-type polysilicon, cobalt can be deposited over the p-type polysilicon and annealed at a temperature of between about 400°C and about 600°C to form cobalt silicide. FIG. 11A is a bottom plan view of the semiconductor structure 101 after the first slots 178 have been formed between the cell pins 158, each of which is shown in phantom for purposes of illustrating the underlying structure. FIG. 11B is a cross-sectional view of the semiconductor structure 101 shown in FIG. 11A taken along section line 11-11 therein. After the contacts 140 are formed, a first fill material can be deposited over the peripheral region 181 of the semiconductor structure 101 to form the dielectric rim 197. For example, the first fill material can be an oxide material deposited over the semiconductor structure 101 using conventional processes, such as a chemical vapor deposition (CVD) process or a physical vapor deposition (PVD) process. Next, a chemical mechanical polishing (CMP) process can be used to remove portions of the oxide material overlying the mask material 176 such that the dielectric rim 197 is substantially coplanar with the exposed surface of the mask material 176. As shown in FIG. 11B, a portion of each of the mask material 176 and the cell stack 170 can be removed to form the first slots 178. As a non-limiting example, a photoresist material (not shown) may be provided over the mask material 176 (FIG. 9) and patterned to expose the regions of the mask material 176 overlying the portions of the cell stack 170 that are to be removed (e.g., the portions of the cell stack 170 that do not overlie the cell pins 158). An anisotropic etch process (e.g., a dry reactive ion or plasma etch process) can then be used to etch the regions of the mask material 176 (FIG. 9) exposed through the photoresist material to form openings (not shown) that expose regions of the second region 174c. The photoresist material can then be removed using a conventional ashing process. After removing these portions of the masking material 176, the portions of the second region 174c exposed through the masking material 176 can be selectively removed relative to the masking material 176 using methods such as those described with respect to FIG. 9B1.
For example, if the second region 174c includes silicon dioxide, a reactive ion etching (RIE) process using a nitrogen trifluoride (NF3)-based gas, a chlorine (Cl)-based gas, or a bromine (Br)-based gas may be performed to selectively remove the silicon dioxide relative to the mask material 176. After the portion of the second region 174c is removed, portions of the first region 172c may be exposed through the mask material 176 and the second region 174c. The exposed portion of the first region 172c can then be selectively removed relative to the mask material 176. As a non-limiting example, if the first region 172c includes polysilicon, a reactive ion etching (RIE) process using a tetrafluoromethane (CF4)-based plasma, a hydrogen bromide (HBr)-based plasma, or a hydrogen chloride (HCl)-based plasma may be performed to remove the polysilicon selectively relative to the mask material 176. The second regions 174b and 174a and the first regions 172b and 172a may be alternately removed as described above to form the first slots 178 extending through the cell stack 170. The base material 168 underlying the cell stack 170 can be used as an etch stop during the removal of the overlying first region 172a. For example, the remaining portion of the base material 168 that is exposed during formation of the first slots 178 can have a thickness of about 30 nm. The first slots 178 can be defined by sidewalls 180 that are perpendicular to the substrate 108, or, alternatively, the first slots 178 can taper inward as they extend toward the substrate 108. Optionally, during removal of the first region 172a, only a selected amount, if any, of the region of the base material 168 overlying the MOSFET array 106 is removed. As shown in FIGS. 12A and 12B, the exposed portions of the first regions 172a, 172b, and 172c (FIGS. 11A and 11B) within the first slots 178 can be doped with a dopant of the desired concentration to form the first doped regions 122. FIG. 12B is a cross-sectional view of the semiconductor structure 101 shown in FIG. 12A taken along section line 12-12 therein. The exposed portions of the first regions 172a, 172b, and 172c may be doped using conventional methods, such as an ion implantation process or a high-temperature diffusion process. For example, if the first regions 172a, 172b, and 172c include polysilicon, the semiconductor structure 101 can be exposed to a boron- or aluminum-containing plasma to form p-type polysilicon. As another example, a thin layer of p-type material (not shown) can be deposited over the surface of the semiconductor structure 101 and thermally annealed, during which anneal the p-type dopant migrates into the first regions 172a, 172b, and 172c such that the first doped regions 122 comprise p-type polysilicon. Referring to FIGS. 13A and 13B, the exposed portions of the first doped regions 122 can be converted to silicide to form the silicide regions 126. FIG. 13B is a cross-sectional view of the semiconductor structure 101 shown in FIG. 13A taken along section line 13-13 therein. In some embodiments, a transition metal (e.g., cobalt, titanium, tungsten, or nickel) can be deposited over the first doped regions 122 (FIGS. 12A and 12B) and heated to a temperature sufficient to cause the metal to react with the first doped regions 122, thereby forming the silicide regions 126 and the contacts 140.
For example, if the first doped regions 122 include p-type polysilicon, cobalt may be deposited over the p-type polysilicon and annealed at a temperature of between about 400°C and about 600°C to form cobalt silicide. Alternatively, the first doped regions 122 can be undercut using a conventional etchant, such as a mixture of chloropentafluoroethane (C2ClF5) and sulfur hexafluoride (SF6). Metal can then be deposited in the undercut regions (not shown) and annealed using conventional methods to form the silicide regions 126 and the contacts 140. As shown in FIGS. 14A and 14B, another fill material can be deposited in the first slots 178 to form the dielectric pillars 196. FIG. 14B is a cross-sectional view of the semiconductor structure 101 shown in FIG. 14A taken along section line 14-14 therein. For example, the fill material can be an oxide material deposited over the semiconductor structure 101 using conventional processes, such as a chemical vapor deposition (CVD) process or a physical vapor deposition (PVD) process. Next, a portion of the oxide material overlying the mask material 176 may be removed using a chemical mechanical polishing (CMP) process such that the upper surface of the semiconductor structure 101 is substantially planar. As shown in FIGS. 15A and 15B, the mask material 176 can be removed from the semiconductor structure 101 to expose the surface of the remaining portion of the second region 174c. The mask material 176 can be removed using a conventional wet etch process that selectively removes the mask material 176 relative to the dielectric pillars 196, the dielectric rim 197, and the remainder of the second region 174c. For example, if the masking material 176 is a nitride material, a phosphoric acid (H3PO4) etchant having a temperature of between about 140°C and about 150°C can be applied to remove the nitride material from the semiconductor structure 101. FIG. 15B is a cross-sectional view of the semiconductor structure 101 shown in FIG. 15A taken along section line 15-15 therein. As shown in FIG. 15B, after removal of the mask material 176, portions of the dielectric pillars 196 and the dielectric rim 197 protrude upward from the exposed surface 198 of the second region 174c, to a height approximately equal to the thickness of the mask material 176 (i.e., between about 80 nm and about 200 nm, and more specifically about 120 nm). Referring to FIGS. 16A and 16B, spacers 200 can be formed on the exposed sidewalls of the dielectric pillars 196 and the sidewalls of the dielectric rim 197 using a conventional spacer etch process. FIG. 16B is a cross-sectional view of the semiconductor structure 101 shown in FIG. 16A taken along section line 16-16 therein. The base of each of the spacers 200 adjoining the second region 174c can have a width W sufficient to substantially cover the underlying first doped region 122, for example, from about 35 nm to about 100 nm. The spacers 200 may be formed of a hard mask material, such as tungsten, titanium nitride (TiN), or tungsten silicide. For example, tungsten having a substantially conformal thickness can be formed over the exposed regions of the second region 174c, the dielectric pillars 196, and the dielectric rim 197, and a reactive ion etching (RIE) process using chlorine (Cl2) as an etchant can be used to form the spacers 200. As shown in FIG. 17, a first liner 202 can be deposited over the surface of the semiconductor structure 101, after which a sacrificial material 204 is deposited over the first liner 202.
The first liner 202 may be formed of a material that is resistant to etchants capable of etching the sacrificial material 204 and the spacers 200 (e.g., silicon nitride or tantalum nitride (TaN)). The first liner 202 can be deposited to a thickness of between about 5 nm and about 10 nm using a conventional chemical vapor deposition (CVD) process. The sacrificial material 204 can be an oxide material, such as silicon dioxide. The sacrificial material 204 can be deposited to a thickness sufficient to provide gaps 206 of about 50 nm between the portions of the sacrificial material 204 that cover the dielectric pillars 196 and those that cover the dielectric rim 197. For example, the sacrificial material 204 can be deposited to a thickness of from about 25 nm to about 35 nm using a conventional chemical vapor deposition (CVD) process or an atomic layer deposition (ALD) process. As shown in FIGS. 18A and 18B, another mask material 208 can be formed over the sacrificial material 204 and can be patterned to include openings 210, each of which exposes a portion of the sacrificial material 204 between two of the dielectric pillars 196 and overlying one of the cell pins 158. For purposes of illustrating the underlying structure, the dielectric pillars 196 and the cell pins 158 are shown in dashed lines in FIG. 18A. The masking material 208 can include, for example, a photoresist material, transparent carbon, or amorphous carbon, and methods of forming the masking material 208 are known in the art. FIG. 18B is a cross-sectional view of the semiconductor structure 101 shown in FIG. 18A taken along section line 18-18 therein. As shown in FIG. 18B, the masking material 208 has been formed over the portions of the sacrificial material 204 overlying the dielectric pillars 196, while the openings 210 expose the recessed portions of the sacrificial material 204 positioned over the cell pins 158. FIG. 19A is a bottom plan view of the semiconductor structure 101 after a plurality of second slots 212 have been formed therein. For purposes of illustrating the underlying structure, the underlying cell pins 158 and the underlying dielectric pillars 196 are shown in phantom in FIG. 19A. FIG. 19B is a cross-sectional view of the semiconductor structure 101 shown in FIG. 19A taken along section line 19-19 therein. Each of the second slots 212 can circumscribe one of the dielectric pillars 196 and can be positioned over one of the cell pins 158. To form the second slots 212, the portions of the sacrificial material 204, the first liner 202, the second regions 174c, 174b, and 174a, and the first regions 172c, 172b, and 172a overlying the cell pins 158 may be removed through the openings 210 in the masking material 208. For example, the sacrificial material 204 and the first liner 202 can be selectively removed relative to the masking material 208 using a dry (i.e., plasma) etch process. The processing parameters for such a dry etch process will depend on the compositions of the sacrificial material 204 and the first liner 202 to be etched selectively relative to the mask material 208, and various anisotropic plasma etch processes for etching many dielectric materials are known in the art. The second regions 174c, 174b, and 174a and the first regions 172c, 172b, and 172a may then be removed relative to the mask material 208, using methods such as those described with respect to FIGS. 9B1 and 9B2, to expose a portion of the base material 168.
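Several deposition-thickness windows are quoted in the passage above (a 5-10 nm liner, a 25-35 nm sacrificial oxide). As an illustrative bookkeeping aid only, these can be collected and range-checked; the dictionary layout and helper below are hypothetical, while the ranges are the ones named in the text.

```python
# Collects deposition-thickness windows quoted in the passage above and
# range-checks a candidate recipe value. Illustrative bookkeeping only.
DEPOSITION_WINDOWS_NM = {
    "first_liner_202": (5.0, 10.0),    # CVD liner
    "sacrificial_204": (25.0, 35.0),   # CVD or ALD oxide
}

def within_window(layer: str, thickness_nm: float) -> bool:
    """True if the proposed thickness falls inside the quoted window."""
    lo, hi = DEPOSITION_WINDOWS_NM[layer]
    return lo <= thickness_nm <= hi

for layer, t in (("first_liner_202", 8.0), ("sacrificial_204", 40.0)):
    print(layer, t, "nm ->", "OK" if within_window(layer, t) else "out of window")
```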
The mask material 208 can be removed after the second slots 212 are formed. FIG. 20 shows the semiconductor structure 101 after formation of the second doped regions 124. The second doped regions 124 can be formed by doping the portions of the first regions 172a, 172b, and 172c exposed within the second slots 212 with a dopant of the desired concentration. The second doped regions 124 can be formed by a conventional process, such as an ion implantation process or a high-temperature diffusion process. For example, the second doped regions 124 can be formed by a plasma doping process, often referred to as a PLAD process, during which the desired dopant is ionized in an ion source, the resulting ions are accelerated to form an ion beam of specified energy, and the ion beam is then directed at the surface of a material, such as polysilicon, such that the ions penetrate into the material. As a non-limiting example, if the first regions 172a, 172b, and 172c are polysilicon, phosphorus or arsenic may be implanted using a PLAD process such that the second doped regions 124 comprise n-type polysilicon. As another example, a thin layer of n-type material can be deposited over the surface of the semiconductor structure 101 and thermally annealed, during which anneal the n-type dopant migrates into the first regions 172a, 172b, and 172c such that the second doped regions 124 comprise n-type polysilicon. After forming the second doped regions 124, a second liner 214 can be formed over the exposed sidewalls of the second slots 212, and the remaining portions of the second slots 212 can be filled with a polysilicon material to form the sacrificial pins 216, as shown in FIGS. 21A and 21B. FIG. 21B is a cross-sectional view of the semiconductor structure 101 shown in FIG. 21A taken along section line 21-21 therein. The second liner 214 may be formed of a material (e.g., a nitride material or an oxide material) that prevents dopant migration from the second doped regions 124 into the sacrificial pins 216, and may be deposited using a conventional chemical vapor deposition (CVD) process. As a non-limiting example, the second liner 214 can have a thickness of between about 3 nm and about 8 nm. The sacrificial pins 216 can be formed by depositing polysilicon using, for example, a conventional chemical vapor deposition (CVD) process or a physical vapor deposition (PVD) process, followed by an anisotropic dry reactive ion (i.e., plasma) etch process, such that the sacrificial pins 216 are recessed into the second slots 212. During the deposition of the polysilicon to form the sacrificial pins 216, the sacrificial material 204 and the first liner 202 may prevent material from being deposited outside of the second slots 212, such as on the surface of the second region 174c. As shown in FIG. 22, the sacrificial material 204 can be removed using a conventional etching process that selectively removes the sacrificial material 204 to expose the first liner 202. As a non-limiting example, if the sacrificial material 204 is an oxide material and the first liner 202 and the second liner 214 are nitride materials, a wet etchant comprising an aqueous hydrofluoric acid solution can be introduced to the semiconductor structure 101 to selectively remove the oxide material relative to the nitride materials. Referring to FIGS.
23A and 23B, after the sacrificial material 204 is removed, a third masking material 218 can be deposited and patterned to cover a portion of the peripheral region 181 of the semiconductor structure 101 and the portions of the sacrificial pins 216 overlying the cell pins 158 (shown in dashed lines). For purposes of illustrating the underlying structure, the spacers 200, the sacrificial pins 216, and the portions of the cell pins 158 underlying the mask material 218 are shown in dashed lines in FIG. 23A. FIG. 23B is a cross-sectional view of the semiconductor structure 101 shown in FIG. 23A taken along section line 23-23 therein. FIG. 24A is a bottom plan view of the semiconductor structure 101 after the material exposed through the masking material 218 and the spacers 200 has been removed. For purposes of illustrating the underlying structure, the cell pins 158 are shown in dashed lines in FIG. 24A. FIG. 24B is a cross-sectional view of the semiconductor structure 101 shown in FIG. 24A taken along section line 24-24 therein. For example, the first liner 202 can be silicon nitride, and its exposed portions can be removed using an anisotropic reactive ion (i.e., plasma) etch process. The sacrificial pins 216 can be removed using, for example, a reactive ion etching (RIE) process. After the sacrificial pins 216 are removed, the second liner 214 can be removed using an anisotropic reactive ion (i.e., plasma) etch process. The second regions 174a, 174b, and 174c and the second doped regions 124 may be alternately removed, using methods such as those described with respect to FIGS. 9B1 and 9B2, to expose the underlying portions of the base material 168. At least a portion of the dielectric pillars 196 can also be removed to form a plurality of posts 220. In FIG. 24A, the cell pins 158, the remainder of the spacers 200, and the remainder of the sacrificial pins 216 (i.e., the portions of the sacrificial pins 216 overlying the cell pins 158) are shown in dashed lines for purposes of illustrating the underlying structure. The portions of the second doped regions (not shown) between the cell pins 158 have been removed to singulate the second doped regions over each of the cell pins 158. Referring to FIGS. 25A and 25B, of which FIG. 25B is a cross-sectional view of the semiconductor structure 101 shown in FIG. 25A taken along section line 25-25 therein, the mask material 218 can be removed to expose the portions of the spacers 200 and the first liner 202 overlying the remainder of the peripheral region 181. For example, the mask material 218 can be removed using a conventional ashing process or a wet etch process using a mixture of ammonium hydroxide (NH4OH), hydrogen peroxide (H2O2), and water (H2O). FIGS. 26A and 26B show the semiconductor structure 101 after the spacers 200 and the remainder of the first liner 202 have been removed. FIG. 26B is a cross-sectional view of the semiconductor structure 101 shown in FIG. 26A, with one portion 203a taken along section line 26'-26', which bisects a cell pin 158, and another portion 203b taken along section line 26"-26", which extends between the cell pins 158. The second region 174c has been omitted from the portion 203a for purposes of illustrating the underlying structure. As a non-limiting example, if the spacers 200 include tungsten, the tungsten can be removed using a conventional wet etch process using a mixture of hydrofluoric acid (HF) and nitric acid (HNO3). As shown in FIG.
26A, the sacrificial pins 216 are positioned above the cell pins 158, each of which is shown in dashed lines. The posts 220 are disposed between the remainder of the dielectric pillars 196 and the base material 168. The second doped regions 124 extend along the sacrificial pins 216. The first doped regions 122 and the silicide regions 126 circumscribe the fill material 196 in the first slots 178. The intrinsic regions 120 are each disposed between one of the first doped regions 122 and one of the second doped regions 124 to form a diode 104 over each of the underlying cell pins 158 (dashed lines). FIGS. 27A and 27B show the semiconductor structure 101 after the upper surface of the semiconductor structure 101 has been planarized and a lower dielectric material 222, a third liner 224, and an intermediate dielectric material 226 have been deposited thereon. FIG. 27B is a cross-sectional view of the semiconductor structure shown in FIG. 27A taken along section line 27-27 therein. The upper surface of the semiconductor structure 101 can be planarized using a chemical mechanical polishing (CMP) process. The lower dielectric material 222 can include, for example, an oxide material. The lower dielectric material 222 can be deposited over the semiconductor structure 101 to fill the recesses therein, and then planarized using, for example, a chemical mechanical polishing (CMP) process such that the upper surface of the lower dielectric material 222 is substantially planar. The third liner 224 can include a material that can be selectively etched relative to the lower dielectric material 222 and can have a thickness of between about 10 nm and about 30 nm. For example, if the lower dielectric material 222 is silicon dioxide, the third liner 224 can include silicon nitride. The intermediate dielectric material 226 can then be formed over the third liner 224. As a non-limiting example, the intermediate dielectric material 226 can include an oxide material. The lower dielectric material 222, the third liner 224, and the intermediate dielectric material 226 can each be formed using a conventional deposition process, such as a chemical vapor deposition (CVD) process. As shown in FIGS. 28A and 28B, line interconnects 142 can be formed, each of which extends through the intermediate dielectric material 226, the third liner 224, and the lower dielectric material 222 to one of the contacts 140. FIG. 28B is a cross-sectional view of the semiconductor structure shown in FIG. 28A taken along section line 28-28 therein. The sacrificial pins 216 and the cell pins 158 are shown in dashed lines in FIG. 28A for purposes of illustrating the underlying structure. Each of the line interconnects 142 can be formed using conventional photolithography processes. For example, a photoresist (not shown) can be formed over the semiconductor structure 101 and patterned to include apertures (not shown) therein, each of which overlies a position at which one of the line interconnects 142 is to be formed. The intermediate dielectric material 226, the third liner 224, and portions of the fill material 196 may be removed through each of the apertures using, for example, a conventional anisotropic reactive ion (i.e., plasma) etch process to expose the surface of the underlying contact 140.
The photoresist can then be removed, a conductive material (e.g., tungsten, aluminum, or copper) can be deposited using a conventional chemical vapor deposition (CVD) process, and the conductive material can be etched back using a conventional chemical mechanical polishing (CMP) process to form the line interconnects 142. As shown in FIGS. 29A and 29B, metal lines 144 may be formed over and in contact with the line interconnects 142. FIG. 29B is a cross-sectional view of the semiconductor structure 101 shown in FIG. 29A taken along section line 29-29 therein. The sacrificial pins 216 and the cell pins 158 are shown in dashed lines in FIG. 29A for purposes of illustrating the underlying structure. To form the metal lines 144, an upper dielectric material 228 can be formed over the surface of the intermediate dielectric material 226 and over the exposed surfaces of the line interconnects 142. The metal lines 144, each of which is electrically coupled to one of the line interconnects 142, can be formed using conventional photolithography processes. For example, a photoresist (not shown) can be formed over the upper dielectric material 228 and a plurality of apertures (not shown) can be formed therein, each aperture overlying one of the line interconnects 142. Portions of the upper dielectric material 228 can be removed through the apertures to form a plurality of openings, each opening exposing a surface of the underlying line interconnect 142. The openings can be filled with metal using a conventional chemical vapor deposition (CVD) process, and the metal can be etched back using a conventional chemical mechanical polishing (CMP) process to form the metal lines 144. A passivation material 146 can optionally be formed over the semiconductor structure 101, and portions of the passivation material 146, the upper dielectric material 228, the intermediate dielectric material 226, and the third liner 224 can be removed, as shown in FIGS. 30A and 30B. FIG. 30B is a cross-sectional view of the semiconductor structure 101 shown in FIG. 30A taken along section line 30-30 therein. The cell pins 158 are shown in dashed lines in FIG. 30A for purposes of illustrating the underlying structure. The passivation material 146 can be formed and patterned to cover the peripheral region 181 of the semiconductor structure 101 using conventional photolithographic processes. Portions of the upper dielectric material 228 and the intermediate dielectric material 226 may then be removed relative to the passivation material 146 and the third liner 224 using a conventional anisotropic reactive ion (i.e., plasma) etch process. The portions of the third liner 224 exposed through the passivation material 146 can then be removed to expose the surfaces of the fill material 196, the lower dielectric material 222, and the sacrificial pins 216. Referring to FIGS. 31A and 31B, the sacrificial pins 216 and the second liner 214 may be removed from the semiconductor structure 101 to expose the sidewalls of the second slots 212. FIG. 31B is a cross-sectional view of the semiconductor structure 101 shown in FIG. 31A taken along section line 31-31 therein. The cell pins 158 are shown in dashed lines in FIG. 31A for purposes of illustrating the underlying structure. As a non-limiting example, the sacrificial pins 216 can be selectively removed relative to the second liner 214 using an anisotropic etch process (e.g., a dry reactive ion or plasma etch process).
As a non-limiting example, if the sacrificial pins 216 are polysilicon and the second liner 214 is a nitride material, a tetramethylammonium hydroxide (TMAH) etchant can be introduced to the semiconductor structure 101 to selectively remove the polysilicon without removing the nitride material. Next, the second liner 214 can be removed using a conventional anisotropic etch process (e.g., a dry reactive ion or plasma etch process) to expose the portions of the second doped regions 124, the second regions 174a, 174b, and 174c, and the base material 168 that define the second slots 212. FIGS. 32A and 32B show the semiconductor structure 101 after a barrier material 128 and sidewall spacers 130 are formed. FIG. 32B is a cross-sectional view of the semiconductor structure shown in FIG. 32A taken along section line 32-32, which bisects a cell pin 158. As a non-limiting example, the barrier material 128 can include a dielectric, such as an oxide material or a nitride material. The barrier material 128 can be formed over the exposed portions of the second doped regions 124 using a conventional chemical vapor deposition (CVD) process or an oxidation process. The sidewall spacers 130 may be formed of, for example, a metal or ceramic material, such as titanium nitride (TiN), titanium (Ti), or platinum (Pt). The sidewall spacers 130 may be formed over the sidewalls of the second slots 212 using a conventional chemical vapor deposition (CVD) process. The sidewall spacers 130 can include a material that substantially reduces diffusion of metal into the second doped regions 124. After the sidewall spacers 130 are formed, an etch process (e.g., an anisotropic dry reactive ion or plasma etch process) may be used to etch through the base material 168 to expose the underlying portions of the cell pins 158. A conductive material (e.g., tungsten) may then be deposited using a conventional chemical vapor deposition (CVD) process to fill the remainder of each second slot and etched back to form the electrodes 110, each of which is electrically coupled to one of the cell pins 158, as shown in FIGS. 1A and 1B. Two memory strings 102 can be electrically coupled to each of the nodes at the intersections of the bit lines 162 and the column lines 160. As shown in FIG. 1A, each of the electrodes 110 effectively forms a dual node having a diode 104 on each of its opposite sides. Another embodiment of a memory device 300 is shown in FIGS. 33A and 33B. The memory device 300 can be, for example, a resistive random access memory (RRAM) device that includes another silicide region 302 and a memory element 304 disposed between the electrode 110 and the diode 104 of each memory string 102. Embodiments of methods that may be used to form the memory device 300 are described below with reference to FIGS. 34-36. As shown in FIG. 34, a semiconductor structure 301 comprising a plurality of vertically stacked diodes 104 overlying a conventional MOSFET array 106 disposed on a substrate 108 may be formed in the manner previously described with reference to FIGS. 4A through 31B. A conventional anisotropic reactive ion etching process can then be performed to remove exposed portions of each of the second doped regions 124 to form a plurality of recesses 306 therein. Referring to FIG. 35, another silicide region 302 can be deposited in the recess 306 (FIG. 34) in each of the second doped regions 124.
As a non-limiting example, the other silicide region 302 may include cobalt silicide (CoSi2), titanium silicide (TiSi2), tungsten silicide (WSi2), or nickel silicide (NiSi2), and may be deposited using a conventional chemical vapor deposition (CVD) process. As shown in FIG. 36, a memory element 304 and sidewall spacers 130 may be formed over each of the sidewalls of the second slots 212. The memory element 304 can comprise a dielectric material, such as an oxide material or a nitride material. The memory element 304 can be formed using, for example, a conventional chemical vapor deposition (CVD) process. The sidewall spacers 130 may include titanium oxide (TiO2), copper oxide (CuxOy), tungsten oxide (WxOy), nickel oxide (NixOy), or gallium oxide (Ga2O3), and may be formed using, for example, methods such as those described with respect to FIGS. 32A and 32B. After the memory element 304 and the sidewall spacers 130 are formed over each of the sidewalls, the exposed regions of the base material 168 can be removed using an etchant that etches the base material 168 selectively relative to the memory element 304 and the sidewall spacers 130. For example, the base material 168 can be removed using a conventional anisotropic dry reactive ion or plasma etch process to expose a portion of each of the underlying cell pins 158. Next, contact pins 132 comprising, for example, tungsten may be formed over the sidewall spacers 130 in the remaining portions of the second slots 212, using a conventional chemical vapor deposition (CVD) process, to form the memory device 300 shown in FIGS. 33A and 33B. Another embodiment of a memory device 400 is shown in FIGS. 37A and 37B. FIG. 37B is a cross-sectional view of the memory device 400 shown in FIG. 37A taken along section line 37-37 therein. The memory device 400 can be, for example, a phase change random access memory (PCRAM) device including another silicide region 302 and a phase change material 402 disposed between the diode 104 and the contact pin 132 of at least one memory string 102. Embodiments of methods that may be used to form the memory device 400 are described below with reference to FIGS. 38 and 39. As shown in FIG. 38, a semiconductor structure 401 can be formed in the manner previously described with reference to FIGS. 4A through 31B, and a conventional anisotropic reactive ion etching process can be performed to form the recesses 306 in the second doped regions 124. Referring to FIG. 39, a phase change material 402 can be formed over the exposed sidewalls in each of the second slots 212. The phase change material 402 may also be deposited on the sidewalls of the passivation material 146, the upper dielectric material 228, and the intermediate dielectric material 226. As a non-limiting example, the phase change material 402 can include a chalcogenide, such as Ge2Sb2Te5 (GST). The phase change material 402 can be deposited using, for example, a conventional sputtering process. After the phase change material 402 is formed over each of the sidewalls, the exposed regions of the base material 168 can be removed using an etchant that etches the base material 168 selectively relative to the phase change material 402. For example, the base material 168 can be removed using a conventional anisotropic dry reactive ion or plasma etch process to expose a portion of each of the underlying cell pins 158. Contact pins 132 can then be formed in the remainder of the second slots 212 to form the memory device 400 as shown in FIGS. 37A and 37B.
As a non-limiting example, the contact pins 132 can comprise a conductive material, such as tungsten. The contact pins 132 can be formed by depositing the conductive material using a conventional chemical vapor deposition (CVD) process and then etching back the conductive material. Although the vertically stacked diodes 104 are shown herein as part of a vertical non-volatile memory device, those skilled in the art will recognize that they can be used in a variety of other memory applications as well. The memory strings 102 each include two diodes 104 per cell, and thus provide a higher-density arrangement of diodes 104 that can be easily integrated with existing devices. A memory device including the vertically stacked diodes 104 provides a substantially increased Ioff by using a conventional MOSFET device as an address selector. The method of forming the vertically stacked diodes 104 described above enables temperature-sensitive elements (e.g., phase change materials) to be formed as a final act, after completion of the processes requiring increased temperatures. The arrangement of diodes 104 described herein provides increased memory density at reduced cost. In view of the above description, some embodiments of the present invention include a non-volatile memory device including an electrode disposed over a substrate and at least one memory string comprising a plurality of diodes located at different positions along a length of the electrode. The electrode can include, for example, a metal contact pin having at least one of a metal or ceramic material, or alternatively a phase change material, disposed on opposite sides thereof. Each of the plurality of diodes can include an intrinsic region disposed between a p-type doped region and an n-type doped region. The p-type doped region, the n-type doped region, and the intrinsic region may extend along a length of the substrate that is substantially perpendicular to the length of the electrode. The at least one memory string is disposed over a memory device including a plurality of memory cells and is electrically coupled to the memory device by the electrode. An additional embodiment of the present invention includes an intermediate semiconductor structure including a plurality of diodes overlying a substrate and vertically stacked one above another in a plurality of columns, the diodes being spaced apart from one another by a dielectric material. The plurality of diodes can be disposed over a plurality of memory cells, each of the memory cells being electrically coupled to a cell pin. At least one of the plurality of columns may at least partially overlie a portion of one of the cell pins. Other embodiments include methods of fabricating semiconductor structures. A plurality of alternating first and second regions may be deposited over a base material overlying a transistor array to form a cell stack, the transistor array comprising a plurality of transistors, each of the transistors being electrically coupled to a cell pin. The portions of the cell stack exposed through a mask can be removed to form a plurality of first slots, each of the slots at least partially overlying the cell pins. A dopant may be introduced into the exposed portions of the first regions to form a plurality of p-type doped regions, and a silicide material may be deposited over each of the plurality of p-type doped regions. A fill material may be deposited over the semiconductor structure to at least fill the plurality of first slots.
The portions of the cell stack exposed through another mask may then be removed to form a plurality of second slots laterally spaced apart from each of the plurality of first slots. A dopant can be introduced into the exposed portions of the first regions to form a plurality of n-type doped regions, with intrinsic regions disposed between the plurality of n-type doped regions and the plurality of p-type doped regions. Although the present invention has been described in terms of the illustrated embodiments and variations thereof, those skilled in the art will understand and appreciate that the invention is not limited thereto. Rather, additions, deletions, and modifications to the illustrated embodiments can be made without departing from the scope of the invention as defined by the appended claims. |
In a method for attaching a semiconductor chip (10) to a chip carrier (12), thereby producing electrically conducting connections between contact areas arranged on a bottom surface of the semiconductor chip (10) and contact areas (26, 28) on the chip carrier (12), an anisotropically conducting paste or film (16) is applied to the chip carrier surface on which the contact areas (26, 28) are provided; the chip (10) is placed on the paste or film (16) so that its contact areas (22, 24) come to rest exactly over the contact areas (26, 28) of the chip carrier; heat sufficient to cause hardening of the paste or film is applied, and at the same time pressure is applied to the top side of the semiconductor chip (10) via an elastic body (20), causing all interstices between the contact areas (22, 24) on the bottom surface of the chip to be completely filled with the material of the paste or film (16) and causing said elastic body (20) to fold around the edges of the chip (10) to create an accumulation (32) of the material of the paste or film (16) around the entire chip (10). |
Method for attaching a semiconductor chip to a chip carrier, thereby producing electrically conducting connections between contact areas arranged on a bottom surface of the semiconductor chip and contact areas on the chip carrier, wherein: an anisotropically conducting paste or film is applied to the chip carrier surface on which the contact areas are provided; the semiconductor chip is placed on the paste or film so that its contact areas come to rest exactly over the contact areas of the chip carrier; heat sufficient to cause hardening of the anisotropically conducting paste or film is applied, and at the same time the semiconductor chip is pressed against the chip carrier via an elastic body, causing all interstices between the contact areas on the bottom surface of the semiconductor chip to be completely filled with the material of the paste or film and causing said elastic body to fold around the edges of the semiconductor chip to create an accumulation of the material of the paste or film around the entire chip. Device for carrying out the method of claim 1, said device comprising a pressure die (18) for the application of pressure to the chip (10) with an adjustable pressing force against the chip carrier (12), a counter-pressure support (14) for receiving the chip carrier (12) with the semiconductor chip (10) arranged on it with the interposition of the anisotropically conducting film (16) or the anisotropically conducting paste (16), and an elastic body (20; 34; 36) arranged between the pressure die (18) and the semiconductor chip (10). Device according to claim 2, wherein the elastic body (20) is fitted to the face of the pressure die (18). Device according to claim 2, wherein the elastic body (34) is a band extending parallel to and between the face of the pressure die (18) and the surface of the semiconductor chip (10). Device according to any one of the previous claims 2 to 4, wherein the counter-pressure support (14) is heated. Device according to any one of the previous claims 2 to 5, wherein the elastic body consists of heat-resistant silicone. Device for attaching a semiconductor chip to a chip carrier, thereby producing an electrically conducting connection between contact areas arranged on a surface of the semiconductor chip and contact areas on the chip carrier by means of an anisotropically conducting film (16) or an anisotropically conducting paste (16), with a pressure die (18) for the application of pressure to the chip (10) with an adjustable pressing force against the chip carrier (12), a counter-pressure support (14) for receiving the chip carrier (12) with the semiconductor chip (10) arranged on it with the interposition of the anisotropically conducting film (16) or the anisotropically conducting paste (16), and an elastic body (20; 34; 36) arranged between the pressure die (18) and the semiconductor chip (10) for creating an accumulation of material of the film or paste around the entire chip. |
The invention relates to a method for attaching a semiconductor chip to a chip carrier, thereby producing an electrically conducting connection between contact areas arranged on a surface of the semiconductor chip and contact areas on the chip carrier, and to a device for carrying out said method. In the production of electronic parts, increasing use is made of a process for producing the electrical connection between a semiconductor chip and the contact areas on a carrier, which are connected by conductor tracks, whereby the contact areas of the semiconductor chip are brought into direct contact with the contact areas of the carrier. The previously used package housing the semiconductor chip, which was provided with its own contact areas for contact bonding, is hereby dispensed with. To produce the electrical connection between the contact areas of the semiconductor chip and the contact areas on the carrier, use is made of an anisotropically conducting film or an anisotropically conducting paste, that is, a material which offers a very low electric resistance in only one direction, whilst it is practically non-conducting in the direction perpendicular to it. A problem when using such a film or paste to produce the electrical connection between the contact areas of the semiconductor chip and the contact areas of the carrier is that very narrow tolerances must be adhered to as regards the tools used to press the semiconductor chip against the film or paste and the carrier, since a reliable electrical connection between the different contact areas of the semiconductor chip and the corresponding contact areas on the carrier can only be achieved when, on the one hand, the applied pressure is evenly distributed and, on the other, the thickness of the film or paste between the areas in contact with each other is made as uniform as possible. The uniformity of the layer thickness is of great importance for the following reason. The anisotropic conduction behaviour of the film or paste used is achieved by embedding, in a carrier material such as epoxy resin, electrically conductive particles which are not in contact with each other. In the direction of the surface extension of the film or paste, this material therefore offers a very high electric resistance. It assumes a low-resistance state when, as a result of applied pressure, it becomes so thin between two contact areas that the particles embedded in the epoxy resin come into contact both with the contact areas on the semiconductor chip and with the contact areas on the carrier. These particles then produce a conducting connection between the contact areas. If, however, because of excessive tolerances, the semiconductor chip is pressed against the carrier in even a slightly slanting position, the conducting particles may fail to produce a conducting connection between the contact areas because they are not in contact with each other, and the film or paste material may be squeezed out at some contact areas to such an extent that no conducting particles remain available between the contact areas to be connected. The desired electrical connection between the contact areas is therefore not realised at these points, so that the unit to be produced has to be scrapped.
Especially in the case of semiconductor chips having a plurality of contact areas which are to be connected to the corresponding contact areas of the carrier, this requirement for a uniform contact pressure constitutes a problem that is difficult to solve. The invention is based on the requirement to provide a method and a device of the type described in the foregoing which allow the desired reliable electrical connections between the contact areas on the semiconductor chip and the corresponding contact areas on the chip carrier to be realised in a highly reliable way, without imposing stringent demands on the tolerances of the parts involved. This requirement is satisfied according to the invention by the method as defined in claim 1. The device for carrying out the method comprises a pressure die for the application of pressure to the chip with an adjustable pressing force against the chip carrier, a counter-pressure support for receiving the chip carrier with the semiconductor chip arranged on it with the interposition of the anisotropically conducting film or the anisotropically conducting paste, and an elastic body arranged between the pressure die and the semiconductor chip. The elastic body used in the device according to the invention takes care of the compensation of tolerances and ensures that the semiconductor chip is pressed in precisely plane-parallel alignment against the chip carrier, so that equal distances are obtained between the contact areas on the semiconductor chip and the corresponding contact areas on the chip carrier, which is essential for the establishment of reliable electrical connections between these contact areas. Moreover, by applying the pressure via the elastic body, the interstices between the contact areas of the semiconductor chip are completely filled with the material of the paste or film, and an accumulation of this material is created around the entire semiconductor chip, which contributes to the protection of the chip against external influences and to the reliability of the electrical connections between the contact areas concerned. In an advantageous embodiment, the elastic body is attached to the face of the pressure die. The elastic body can advantageously also be an elastic strip extending parallel to and between the face of the pressure die and the surface of the chip. The elastic body may advantageously be made of heat-resistant silicone. The invention shall now be described in exemplified form with reference to the drawing, where Fig. 1 shows a first embodiment of the device according to the invention, Fig. 2 is an enlarged section of the device of Fig. 1 before the semiconductor chip is pressed against the chip carrier by means of the pressure die, Fig. 3 is an enlarged section of Fig. 2 after pressure-bonding the semiconductor chip, and Fig. 4 shows the device according to the invention in a second embodiment. By reference to the device schematically represented in Fig. 1, a semiconductor chip 10 is to be attached to a chip carrier 12 in such a way that the contact areas located on the lower surface of the semiconductor chip, facing the chip carrier 12, are brought into electrical contact with the contact areas located on the upper surface of the chip carrier 12, facing the semiconductor chip 10. The chip carrier 12 can have printed circuit paths on its upper surface, whereby specific areas of these circuit paths constitute the contact areas which are to be connected to the corresponding contact areas of the semiconductor chip.
The chip carrier 12 can, for example, be a ceramic substrate, a conventional circuit board, or even a foil printed with circuit paths. In the example described, it is assumed that the chip carrier is such a foil with printed circuit paths. The chip carrier 12 is placed on a counter-pressure support 14, and a piece of an anisotropically conducting film 16 is placed on its top side, in the area where the semiconductor chip 10 is to be attached. This film, also known as ACF (anisotropic conductive film), consists of epoxy resin in which electrically conducting particles are embedded. Such films are commercially available from Toshiba and Hitachi. The semiconductor chip 10 is then placed on the film 16 in such a way that the contact areas at its bottom surface come to rest exactly over the chip carrier contact areas with which electric contact is to be made. A pressure die 18 is subsequently lowered onto the top side of the semiconductor chip 10 with a pre-determined, precisely defined force. On the face of this pressure die 18 there is an elastic body 20 of silicone which ensures that the semiconductor chip 10 is pressed against the chip carrier 12 in precise plane-parallel alignment with respect to the surface of the counter-pressure support 14. Because of the elasticity of the elastic body 20, any incorrect alignment or any tolerances of the parts moving together, in relation to each other, are thus compensated. Fig. 2 shows in an enlarged sectional view, not to scale, how the contact areas 22, 24 at the bottom surface of the semiconductor chip 10 are positioned in relation to the contact areas 26, 28 on the chip carrier 12 when the semiconductor chip 10 is in place, with the interposition of the film 16 and before the application of pressure by the pressure die 18. As can be seen, the film 16 contains electrically conducting particles 30 which are not in contact with each other, so that the film initially has a very high electric resistance. Only when, upon application of pressure, the film becomes as thin as shown in Fig. 3 do the electrically conducting particles establish a connection between the contact areas 22, 26 and 24, 28, respectively. The electric resistance in the direction parallel to the surface extension of the film, however, still remains high, so that no short circuit can occur between the contact areas separated from each other in this direction. As shown in Fig. 3, the application of pressure to the top side of the semiconductor chip 10 causes all interstices between the contact areas to be completely filled with the material of the film. Since the elastic body 20, on application of pressure, also folds around the edges of the semiconductor chip 10, an accumulation 32 of the material of the film 16 is produced around the entire semiconductor chip 10, which not only protects the chip effectively against external influences but also guards it against separation, even when the chip carrier 12 is subjected to bending stresses.
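The conduction condition illustrated in Figs. 2 and 3 can be summarized with a toy model: a pad pair conducts vertically once the compressed film is thin enough that a particle bridges both contact areas, while laterally separated pads stay isolated because the particles do not touch one another. The sketch below is a simplified illustration under that assumption only; the threshold rule and all parameter names are hypothetical, not taken from the patent.

```python
# Toy model of anisotropic conductive film (ACF) behaviour, per the mechanism
# described above: a pad pair conducts vertically when the compressed film
# thickness is no greater than the conductive particle diameter, so a particle
# can touch both contact areas at once. All values are illustrative.
from dataclasses import dataclass

@dataclass
class PadPair:
    name: str
    compressed_thickness_um: float  # film thickness between the two pads after bonding

PARTICLE_DIAMETER_UM = 5.0  # assumed particle size

def conducts(pad: PadPair) -> bool:
    """Vertical conduction iff a particle can bridge both contact areas."""
    return pad.compressed_thickness_um <= PARTICLE_DIAMETER_UM

pads = [PadPair("pad A", 4.0), PadPair("pad B", 9.0)]  # pad B: chip tilted, film too thick
for pad in pads:
    state = "conducting" if conducts(pad) else "OPEN - no reliable joint"
    print(f"{pad.name}: {state}")
```

In this simplified picture, a tilted chip shows up as a spread of compressed thicknesses across the pads, which is exactly the failure mode the elastic body is introduced to avoid.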
When the pressure die 18 is lowered in the direction of the semiconductor chip 10, the elastic strip 34 is applied, in the same manner as the elastic body 20 in the embodiment shown in Figure 1, to and around the semiconductor chip 10, thus producing the same effect as the elastic body 20.

The anisotropically conducting film 16 in the embodiments described can also be replaced by an anisotropically conducting paste which, like the film, consists of an epoxy resin in which conducting particles are embedded. Such pastes are commercially available, for example from the companies previously mentioned. They are also known by their shortened designation of ACP (for Anisotropic Conductive Paste).

For the purpose of accelerating the hardening process of the film or the paste, all embodiments described can be provided with a heated counter pressure support 14. The consequence of this is that a durable connection between the semiconductor chip 10 and the chip carrier 12 is achieved in only a short time.

By the use of the device described, very reliable connections between the contact areas of the semiconductor chip and the corresponding contact areas on the chip carrier 12 can be achieved, which also applies when a large number of contact areas that are to be connected to the corresponding contact areas of the chip carrier 12 is present on the underside of the semiconductor chip 10. |
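The anisotropic conduction mechanism described above can be summarized in two illustrative conditions. This is only a sketch; the symbols (t for the compressed film thickness, d for the conductive particle diameter, s for the mean in-plane particle spacing) are introduced here for explanation and do not appear in the source:

```latex
% Vertical conduction: the compressed film is thin enough that particles
% bridge each chip contact area to the carrier contact area below it.
t \lesssim d
% Lateral isolation: the in-plane particle spacing is large enough that no
% conductive path forms between neighbouring contact areas.
s \gg d
```

In other words, the pressure die thins the film 16 locally until the particles 30 are trapped between opposing contact areas, while the film's in-plane resistance remains high because the particles stay out of contact with each other.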
An electrostatic discharge (ESD) protection structure for protecting against ESD events between signal terminals is disclosed. ESD protection is provided in a first polarity by a bipolar transistor (4C) formed in an n-well (64; 164), having a collector contact (72; 172) to one signal terminal (PIN1) and its emitter region (68; 168) and base (66; 166) connected to a second signal terminal (PIN2). For reverse polarity ESD protection, a diode (25) is formed in the same n-well (64; 164) by a p+ region (78; 178) connected to the second signal terminal (PIN2), serving as the anode. The cathode can correspond to the n-well (64; 164) itself, as contacted by the collector contact (72; 172). By using the same n-well (64; 164) for both devices, the integrated circuit chip area required to implement this pin-to-pin protection is much reduced. |
1. A structure in an integrated circuit, for conducting energy from an electrostatic discharge (ESD) event between first and second signal terminals, comprising:
a transistor (4C), connected between the first and second signal terminals (PIN1, PIN2) and formed into a well (64, 164) of a first conductivity type, for conducting ESD energy from the first signal terminal to the second signal terminal; and
a diode (25), having an anode connected to the second signal terminal and a cathode connected to the first signal terminal, the diode formed at a junction between the well and a diffused region of a second conductivity type formed into the well, characterised in that the diode further comprises:
adjacent diffused regions (180) of the first and second conductivity types formed within the well, and connected to the first signal terminal.
2. The structure of claim 1, wherein the cathode of the diode comprises the well.
3. The structure of claim 1 or 2, wherein the anode of the diode comprises the diffused region.
4. The structure of any of claims 1-3, wherein the transistor is a bipolar transistor, and comprises:
a base region (166) of the second conductivity type, formed within the well, the base region connected to the second signal terminal;
an emitter region (168) of the first conductivity type, formed within the base region, and connected to the second signal terminal; and
a collector contact structure, connecting the first signal terminal to the well.
5. The structure of claim 4, wherein the collector contact structure comprises:
a diffused region of the first conductivity type formed into a surface of the well.
6. The structure of claim 5, wherein the bipolar transistor further comprises:
a buried layer (162) of the first conductivity type, underlying the well;
and wherein the collector contact structure further comprises:
a buried contact plug (174), in contact with the diffused region of the first conductivity type and with the buried layer.
7. The structure of any of claims 1-6, further comprising:
a trigger element connected to the first signal terminal, for defining the turn-on conditions of the bipolar transistor.
8. The structure of claim 7, wherein the trigger element comprises:
a diffused region (175) of the first conductivity type formed within the base region, and connected to the first signal terminal.
9. The structure of any preceding claim, wherein the anode of the diode comprises:
a plurality of diffused anode regions (178) of the second conductivity type, each connected to the second signal terminal;
and wherein the adjacent diffused regions of the first and second conductivity types are arranged as a plurality of chains of adjacent regions, one of the plurality of chains disposed between a first diffused anode region and a second diffused anode region of said plurality of diffused anode regions.
10. An integrated circuit including a structure as claimed in any preceding claim.
11. The integrated circuit of claim 10, wherein electrostatic discharge protection is provided to ground. |
This invention is in the field of semiconductor integrated circuits, and is more specifically directed to integrated structures for protecting such circuits from electrostatic discharge events.

Modern high-density integrated circuits are known to be vulnerable to damage from the electrostatic discharge (ESD) of a charged body (human or otherwise) as it physically contacts an integrated circuit. ESD damage occurs when the amount of charge exceeds the capability of the conduction path through the integrated circuit. The typical ESD failure mechanisms include thermal runaway resulting in junction shorting, and dielectric breakdown resulting in gate-junction shorting (e.g., in the metal-oxide-semiconductor, or MOS, context).

To avoid damage from ESD, modern integrated circuits incorporate ESD protection devices, or structures, at each external terminal. ESD protection devices generally operate by providing a high capacity conduction path, so that the brief but massive ESD charge may be safely conducted away from circuitry that is not capable of handling the event. In some cases, ESD protection is inherent to the particular terminal, as in the case of a power supply terminal, which connects to an extremely large p-n junction capable of absorbing the ESD charge. Inputs and outputs, on the other hand, typically have a separate ESD protection device added in parallel to the functional terminal. The ideal ESD protection device turns on quickly in response to an ESD event to safely and rapidly conduct the ESD charge, but remains off and presents no load during normal operation.

Examples of ESD protection devices are well known in the art. In the case of MOS technology, an early ESD protection device was provided by a parasitic thick-field-oxide MOS transistor that was turned on by, and conducted, the ESD current, as described in U.S. Patent No. 4,692,781 and in U.S. Patent No. 4,855,620, both assigned to Texas Instruments Incorporated and incorporated herein by this reference. As the feature sizes of MOS integrated circuits became smaller, and with the advent of complementary MOS (CMOS) technology, the most popular ESD protection devices utilized a parasitic bipolar device to conduct the ESD current, triggered by way of a silicon-controlled-rectifier (SCR) structure, as described in Rountree et al., "A Process-Tolerant Input Protection Circuit for Advanced CMOS Processes", 1988 EOS/ESD Symposium, pp. 201-205, incorporated herein by this reference, and in U.S. Patent No. 5,012,317 and U.S. Patent No. 5,307,462, both assigned to Texas Instruments Incorporated and also incorporated herein by this reference.

Figure 1 illustrates an integrated circuit including conventional ESD protection circuits and structures, in which external terminals are protected from damage due to electrostatic discharge relative to device substrate ground. As shown in Figure 1, external terminals PIN1, PIN2 serve as inputs, outputs, or both, for functional circuitry 10. External terminal GND is typically connected to the substrate of the integrated circuit, which serves as device ground. Those skilled in the art will understand that external terminals PIN1, PIN2, GND may be physically realized in various ways.
Typically, these external terminals include a bond pad on the surface of the integrated circuit chip itself, which is connected by way of a bond wire or lead frame to an external terminal of the device package (such as a package pin, a package pad for surface mount packages, or a solder bump) or which is soldered directly to a land of a circuit board or multichip substrate. In any event, terminals PIN1, PIN2, GND are electrically connected outside of the integrated circuit to communicate signals or to receive a bias voltage, and as such are capable of receiving an electrostatic discharge.

In this conventional arrangement, the electrostatic discharge (ESD) from terminals PIN1, PIN2 to device ground GND is safely conducted by way of n-p-n transistors 4A, 4B, respectively. Referring to the example of the protection circuit for terminal PIN1, n-p-n transistor 4A has its collector connected to terminal PIN1 and its emitter connected to substrate ground GND, effectively in parallel with functional circuitry 10. Trigger 6A and resistor 7A are connected in series between terminal PIN1 and substrate ground GND, and the base of transistor 4A is connected to the node between trigger circuit 6A and resistor 7A. Typically, trigger 6A corresponds to a device or element that defines the turn-on of transistor 4A. In some cases, trigger 6A is not a particular component (i.e., simply a connection), in which case transistor 4A turns on when its base-collector junction breaks down (at a voltage BVcbo) in response to a positive polarity ESD event. In another example, trigger 6A may be a capacitor, or an element such as a Zener diode that breaks down at a voltage that is exceeded by a significant positive polarity ESD event, with the voltage drop across resistor 7A due to this current forward-biasing the base-emitter junction of transistor 4A. Alternatively, this ESD protection scheme may instead involve a field effect device as transistor 4A, for example an n-channel MOSFET, as known in the art. In any case, transistor 4A safely conducts the ESD energy through a low-impedance path to substrate ground GND, ensuring that damaging densities of energy are not conducted through functional circuitry 10. During normal device operation, assuming a sufficiently high trigger voltage, transistors 4A, 4B remain off, and thus do not affect the operation of the integrated circuit.

Protection for negative polarity ESD events at terminals PIN1, PIN2 is provided by diodes 5A, 5B, respectively. Typically, diodes 5A, 5B are simply the parasitic diodes between the n-type region serving as the collector of transistors 4A, 4B and the p-type substrate. Diodes 5A, 5B are each forward-biased by negative ESD events at terminals PIN1, PIN2, respectively, so that the ESD energy is safely conducted through this low-impedance path. In normal operation, substrate ground GND is at a sufficiently low voltage relative to the specified voltages at terminals PIN1, PIN2 that these diodes 5A, 5B remain reverse-biased, and do not affect the voltage levels at terminals PIN1, PIN2 nor the operation of functional circuitry 10.

Some types of modern integrated circuits require ESD protection not only between terminals PIN1, PIN2 and substrate ground GND, but also require protection for ESD events between any given pair of its signal terminals (e.g., between terminals PIN1 and PIN2), not involving substrate ground GND. These types of circuits include so-called mixed signal integrated circuits, which include both digital and analog functions.
Examples of such mixed signal devices include charge-pump circuits, voltage regulator circuits, boot-strap or "flying" gate drivers, and the like. Figure 2 illustrates such an integrated circuit having a conventional ESD protection circuit between terminals PIN1, PIN2.

In this example, n-p-n transistor 4C has its collector connected to terminal PIN1 and its emitter connected to terminal PIN2. Trigger 6C and resistor 7C are also connected in series between terminals PIN1, PIN2, and the base of transistor 4C is connected to the node between trigger circuit 6C and resistor 7C. These devices protect functional circuitry 10 from damage due to ESD events of positive polarity at terminal PIN1 relative to terminal PIN2.

However, parasitic diode 5C at the collector of transistor 4C is not coupled to terminal PIN2, but instead is connected to the substrate, at substrate ground GND. As such, in the event of a negative polarity ESD event at terminal PIN1 relative to terminal PIN2, the voltage at which terminal PIN1 is clamped by either the series combination of structures 5C and 4B, or the structure of transistor 4C, will be higher than desirable for effective ESD protection performance. Instead, protection for negative polarity pin-to-pin ESD events is provided by isolated diode 15C, having its cathode at terminal PIN1 and its anode at terminal PIN2. Again, as in the case of Figure 1, a negative polarity ESD event at terminal PIN1 relative to terminal PIN2 will forward bias isolated diode 15C, so that a low-impedance path for this energy will be provided, preventing damage to functional circuitry 10.

Those skilled in the art having reference to this specification will realize that there is no need to provide a mirror-image ESD structure between terminals PIN2, PIN1 (i.e., having an n-p-n transistor with its collector at terminal PIN2 and its emitter at PIN1). Rather, the circuit of Figure 2, including isolated diode 15C, is capable of protecting both terminals PIN1, PIN2 in either direction.

The orientation of the ESD structure (specifically isolated diode 15C) between signal terminals PIN1, PIN2 should take into account situations in which functional circuitry 10 may permit the voltage on one signal terminal (e.g., PIN1) to exceed the voltage on another signal terminal (e.g., PIN2) in normal operation. In addition, as conventional in the art, similar ESD protection circuits are provided between each pair of terminals that are required to have such protection.

While the arrangement of Figure 2 provides excellent ESD protection for all combinations of ESD events, conventional implementations of the pin-to-pin protection, particularly in providing the additional isolated diode 15C as shown in Figure 2, have been inefficient in practice. Figure 3 illustrates the conventional physical implementation of the pin-to-pin ESD protection circuit illustrated in Figure 2, in a cross-sectional view.

In the conventional example illustrated in Figure 3, the integrated circuit is formed into lightly-doped p-type substrate 30. N-type buried layer 32 is a heavily doped n-type region that underlies a portion of the surface of substrate 30, and provides a subcollector for n-p-n transistor 4C. The collector of transistor 4C is provided by n-well 34, disposed above n-type buried layer 32, and the base of transistor 4C is p-type region 36 that is diffused into n-well 34 from the surface.
The emitter of transistor 4C is implemented by n+ region 38 diffused into p-region 36; n+ region 38 is connected to signal terminal PIN2 by a metal conductor (not shown). P+ region 40 is also disposed within p-region 36, and is connected to signal terminal PIN2 by way of resistor 7C, typically a polysilicon or a diffused resistor, and a corresponding metal conductor (not shown). The subcollector at n-type buried layer 32 is connected to signal terminal PIN1 by way of buried contact 44 (typically a heavily doped buried region), the overlying n+ region 42, and a corresponding metal conductor (not shown).

In this example, trigger 6C is simply the connection to collector region 42 and collector region 42 itself. A positive ESD event of sufficient energy between signal terminals PIN1, PIN2 will break down the collector-base junction of transistor 4C. The breakdown current will flow into the base of transistor 4C, and to signal terminal PIN2 through resistor 7C, forward biasing the emitter-base junction and initiating bipolar conduction. Once transistor 4C is turned on, collector-emitter current will be safely conducted from signal terminal PIN1 through n+ region 42, buried contact 44, n-type buried layer 32, n-well 34, p-type region 36, and n+ region 38.

In this conventional arrangement, negative polarity ESD events are handled by isolated diode 15C. Isolated diode 15C has an anode formed by p+ region 48 that is disposed within n-well 46, and a cathode formed by n+ region 50, also within n-well 46. P+ region 48 and n+ region 50 are connected to signal terminals PIN2, PIN1, respectively, by conventional metal conductors (not shown). Parasitic diode 5C is provided between n+ region 50 and n-well 46, on one hand, and p-type substrate 30, on the other. In this arrangement, a negative polarity ESD event at signal terminal PIN1 relative to signal terminal PIN2 will forward bias isolated diode 15C, which safely conducts the ESD energy between these signal terminals.

However, in this conventional arrangement as shown in Figure 3, the second instance of n-well 46 that is provided for isolated diode 15C occupies a large amount of silicon area. In particular, conventional integrated circuits typically have a design rule that specifies the minimum acceptable spacing between adjacent n-wells, primarily to avoid punch-through. In the example of Figure 3, this well-to-well spacing between adjacent n-wells 34, 46 is illustrated by distance WW. A typical specification for distance WW in a conventional mixed signal device having high voltage capability is 15 to 20 µm. Especially considering that a corresponding isolated diode 15C is required between each pair of signal terminals in the device, the area required for the diode and the well-to-well spacing can become significant.

A further example is presented in Japanese Patent Application JP2252261, which shows a transistor and a diode connected between first and second terminals, both being formed in wells for ESD protection.

The present invention provides an ESD structure as set forth in the claims.

The present invention may be implemented by forming an electrostatic discharge protection structure, connected between two signal terminals of an integrated circuit. The structure includes both a transistor and a reverse-polarity protection diode within a common well. In the example of a bipolar protection transistor, the common well has the same conductivity type as that of the collector of the bipolar transistor.
The first signal terminal is connected to the collector of the bipolar transistor, while the second signal terminal is connected to the emitter of the bipolar transistor and is resistively connected to the base of that transistor. The first and second signal terminals are connected to the cathode and anode of the diode, respectively. The bipolar transistor conducts ESD energy of a first polarity and the diode conducts ESD energy of the reverse polarity. The diode is constructed to have a reverse breakdown voltage greater than the triggering voltage of the bipolar transistor, so that the diode does not affect normal operation of the integrated circuit, and is not damaged by ESD stress of the first polarity.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING

Figure 1 is an electrical diagram, in schematic form, of a conventional electrostatic discharge (ESD) protection circuit.
Figure 2 is an electrical diagram, in schematic form, of a conventional electrostatic discharge (ESD) protection circuit that provides protection between signal pins.
Figure 3 is a cross-sectional diagram of the conventional ESD structure of Figure 2.
Figure 4 is an electrical diagram, in schematic form, of an electrostatic discharge (ESD) protection circuit according to the preferred embodiments of the invention.
Figures 5a and 5b are cross-sectional and plan views, respectively, of an ESD protection structure according to a first preferred embodiment of the invention.
Figures 6a and 6b are cross-sectional and plan views, respectively, of an ESD protection structure according to a second preferred embodiment of the invention.

DETAILED DESCRIPTION OF THE INVENTION

The present invention will be described in connection with its preferred embodiments, and specifically in connection with an example of this preferred embodiment of the invention involving an integrated circuit constructed according to a conventional bipolar and complementary metal-oxide-semiconductor (BiCMOS) technology. It is to be understood that this description is provided by way of example only, and is not to unduly limit the true scope of this invention as claimed.

Figure 4 illustrates, by way of an electrical schematic, an example of an integrated circuit incorporating an ESD protection circuit according to the preferred embodiment of the invention. It is contemplated that the integrated circuit of Figure 4 is a single-chip integrated circuit, in which the elements shown in Figure 4 are all realized on the same integrated circuit device. This integrated circuit thus has a plurality of terminals for making connection to circuitry external to the integrated circuit; it is at these terminals that protection against electrostatic discharge (ESD) events is to be provided. More specifically, this invention is directed to providing ESD protection between signal pins, safely conducting ESD energy and current between the two signal pins so that the functional circuitry is not damaged by the ESD event.

Conventional ESD protection circuits were described above, in the Background of the Invention, relative to Figures 1 and 2. As evident from those Figures and from Figure 4, some elements in the conventional ESD protection circuits are also present in the ESD circuitry according to the preferred embodiments of this invention.
For the sake of clarity, the same reference numerals are used in Figure 4 to refer to those circuit elements that are the same as those in Figures 1 and 2.

In this example, external terminals PIN1, PIN2 serve as signal terminals (inputs, outputs, or common I/O terminals) connected to functional circuitry 10. External terminal GND is typically connected to the substrate of the integrated circuit, and as such can absorb a great deal of transient charge at its p-n junctions; accordingly, the substrate typically serves as device ground. Those skilled in the art will understand that external terminals PIN1, PIN2, GND may be physically realized in various ways. These external terminals include at least a so-called bond pad on the integrated circuit, to which connection may readily be made to an external pin or pad of an integrated circuit package, a substrate in a multi-chip module, or a circuit board. These connections may be made by way of a conventional wire bond to a package header or lead frame; by way of a solder bump to a package header, lead frame, or circuit board; or by way of a tape or beam lead in other types of packages. In any event, external signal terminals PIN1, PIN2 are electrically connected outside of the integrated circuit to communicate signals to or from the functional circuitry, and external terminal GND receives a reference voltage. Of course, other terminals, including other signal terminals and power supply terminals, are also provided within the integrated circuit; only signal terminals PIN1, PIN2 and reference voltage terminal GND are illustrated in Figure 4, for the sake of clarity.

Each of these external terminals is exposed to electrostatic discharge (ESD) events. Typically, an ESD event is in the form of an extremely high voltage with a finite, but large, amount of charge that is discharged through the integrated circuit. The function of the ESD protection circuit in the preferred embodiments of the invention, for example as shown in Figure 4, is to provide a sufficiently low impedance path for this transient current, so that this high transient current is not conducted through the sensitive functional circuitry 10 of the device.

In the integrated circuit of Figure 4, ESD energy from either of terminals PIN1, PIN2 to device ground GND will be safely conducted through the respective one of n-p-n transistors 4A, 4B. Signal terminal PIN1 is connected to the collector of n-p-n transistor 4A and to functional circuitry 10; the emitter of transistor 4A is connected to substrate ground GND (as is functional circuitry 10). Trigger 6A is connected between signal terminal PIN1 and the base of transistor 4A, and resistor 7A is connected between this node at the base of transistor 4A and device ground GND. As before, trigger 6A is any conventional element that defines the turn-on of transistor 4A. One example for trigger circuit 6A is simply a direct connection to the collector of transistor 4A, in the case where transistor 4A is to be turned on by collector-base junction breakdown resulting from an ESD event. Trigger 6A may also be an additional component, such as a Zener diode that conducts current from signal terminal PIN1 into the base of transistor 4A and resistor 7A when the Zener diode is in reverse-bias breakdown. Resistor 7A is preferably implemented as a polysilicon resistor.
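The trigger arrangement described above admits a compact summary. The following relation is an illustrative sketch: the trigger-current symbol and the typical silicon base-emitter turn-on value are assumptions introduced here, not figures from the source:

```latex
% Transistor 4A enters bipolar conduction once the current delivered by
% trigger 6A through resistor 7A forward-biases the base-emitter junction:
I_{\mathrm{trig}} \cdot R_{7A} \;\ge\; V_{BE(\mathrm{on})} \approx 0.6\text{--}0.7\,\mathrm{V}
```

The same relation governs transistor 4B and, as described later, the pin-to-pin transistor 4C with trigger 6C and resistor 7C.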
Transistor 4B is similarly configured between signal terminal PIN2 and external terminal GND.

Each of transistors 4A, 4B conducts ESD energy of positive polarity at external terminals PIN1, PIN2, respectively, relative to substrate ground GND. In such an event, the corresponding one of bipolar transistors 4A, 4B will safely conduct the ESD current as collector-emitter current. This ESD current thus is shunted from functional circuitry 10, protecting it from overcurrent damage from the ESD event.

As in the conventional structures described above relative to Figures 1 and 2, diodes 5A, 5B protect external terminals PIN1, PIN2 from damage due to negative polarity ESD events, relative to substrate ground GND. Typically, diodes 5A, 5B are simply the parasitic junction diodes between the n-type region serving as the collector of transistors 4A, 4B and the p-type substrate. A negative polarity ESD event at terminals PIN1, PIN2 will forward-bias diodes 5A, 5B, respectively, providing a low-impedance path. In normal operation, the low voltage at substrate ground GND relative to the terminals PIN1, PIN2 keeps diodes 5A, 5B reverse-biased, and transparent to functional circuitry 10.

ESD protection is provided between external terminal PIN1 and external terminal PIN2, without regard to substrate ground GND, and in both polarities, according to the preferred embodiments of the invention. This pin-to-pin ESD protection is especially important, and often required, for certain types of integrated circuits. Charge-pump circuits, voltage regulators, and other mixed signal integrated circuits, which have both analog and digital functions, typically require such protection. In addition, the voltage on one signal terminal (e.g., PIN1) may exceed the voltage on another signal terminal (e.g., PIN2) in the normal operation of mixed-signal functional circuitry 10; this operation must be considered in constructing the ESD protection structures between signal pins.

In the preferred embodiments of the invention, as in the conventional case of Figure 2, n-p-n transistor 4C provides this protection in one polarity (terminal PIN1 to terminal PIN2, in the example of Figure 4). External terminal PIN1 is connected to the collector of transistor 4C, and external terminal PIN2 is connected to the emitter of this device. Trigger 6C and resistor 7C are also connected in series between terminals PIN1, PIN2, with the base of transistor 4C connected to the node between trigger circuit 6C and resistor 7C. Transistor 4C will turn on in response to an ESD event of positive polarity at terminal PIN1 relative to terminal PIN2, providing a low-impedance path for this ESD energy, as described above.

According to the preferred embodiments of the invention, diode 25 is provided between signal terminal PIN1 and signal terminal PIN2, to protect functional circuitry 10 against ESD events of the opposite polarity, in this case with signal terminal PIN1 negative relative to signal terminal PIN2. According to the preferred embodiments of the invention, diode 25 is a junction diode, with its anode connected to signal terminal PIN2 and its cathode connected to signal terminal PIN1 via the collector of transistor 4C.
As will be evident from the following description, diode 25 is implemented in an extremely space-efficient manner according to the preferred embodiments of the invention, particularly as compared against the conventional approach of Figure 3 that includes an isolated diode 15C.

Figures 5a and 5b illustrate, in cross-section and plan views, respectively, the construction of an ESD protection structure corresponding to the circuit of Figure 4, according to a first preferred embodiment of the invention. As mentioned above, this protection structure is provided to protect against damage to functional circuitry 10 due to an ESD event of either polarity between signal terminals, for example between signal terminals PIN1, PIN2.

As shown in Figures 5a and 5b, the structure is formed at a surface of p-type substrate 60, which in this case is directly or indirectly biased from substrate ground terminal GND in normal operation. At the selected location of substrate 60, n-type buried layer 62 is disposed, and serves as a subcollector for transistor 4C in this example. N-type buried layer 62 is formed in the conventional manner, for example as described in U.S. Patent No. 4,958,213, commonly assigned herewith and incorporated herein by this reference. N-type well 64 is formed over n-type buried layer 62 in the conventional manner, for example as an implanted region within an epitaxial layer formed over buried layer 62, as also described in U.S. Patent No. 4,958,213, and is substantially coincident with buried layer 62. In the plan view of Figure 5b, therefore, buried layer 62 is not visible, as it substantially underlies n-well 64.

Transistor 4C has its base region formed within p-type well 66, formed in the conventional manner within n-well 64. The emitter of transistor 4C is formed by way of ion implanted n+ region 68 formed within p-well 66, for example by way of the same ion implantation process or processes used to form an n-type source/drain region for MOS transistors elsewhere within the integrated circuit. According to this embodiment of the invention, p+ region 70 is also formed within p-well 66, for example also by the same p-type implant used to form p-type source/drain regions for MOS devices elsewhere in the integrated circuit. This p+ region 70 is connected to signal terminal PIN2 by way of resistor 7C (preferably formed of polysilicon; not shown); n+ region 68 is connected directly to signal terminal PIN2, for example by way of a metal conductor (not shown).

In this embodiment of the invention, the collector contact to signal terminal PIN1 is made by way of n+ region 72 and buried contact plug 74, which directly contacts (or, in some cases, only approaches) buried layer 62. Buried contact plug 74 is a conductive contact to n-type buried layer 62, for example in the form of a heavily doped buried region formed by conventional techniques. N+ region 72 may then be formed into an epitaxial layer overlying plug 74, for example in the case where the remainder of the surface of substrate 60 is also formed in an epitaxial layer. Connection of n+ region 72 to signal terminal PIN1 is then made by way of a conventional metal conductor (not shown).

In this example, referring back to the circuit schematic of Figure 4, trigger 6C is embodied simply by the connection of signal terminal PIN1 to the collector of transistor 4C, such that transistor 4C turns on in response to a positive polarity ESD event between signal terminals PIN1 and PIN2 that is sufficient to break down the collector-base junction.
Referring to Figure 5a, this breakdown will likely occur between n-type region 64 and p-well 66. Once this junction breaks down, current from signal terminal PIN1 will flow to signal terminal PIN2 via n+ region 68, and via p+ region 70 and resistor 7C. This current forward-biases the base-emitter junction at n+ region 68, initiating bipolar conduction through transistor 4C and thus providing a low-impedance collector-emitter current path for the ESD energy.

According to this embodiment of the invention, diode 25 is formed by the placement of p+ region 78 at a location within n-well 64. As evident from Figures 5a and 5b, p+ region 78 is located within the same n-well 64 in which transistor 4C is disposed, preferably on the other side of the collector contact of n+ region 72 from the transistor base. This p+ region 78 is connected to signal terminal PIN2 by way of a metal conductor (not shown).

The dopant concentration and junction depth of p+ region 78 are preferably selected to ensure proper characteristics for diode 25. Referring back to the circuit schematic of Figure 4, it is important that the reverse-bias breakdown voltage of diode 25 is greater than the turn-on voltage of transistor 4C, so that transistor 4C (rather than diode 25) conducts positive polarity ESD energy. As described above, in this example, transistor 4C turns on by breakdown of its collector-base junction. Accordingly, the reverse-bias breakdown voltage of diode 25 must be higher than the breakdown voltage of the collector-base junction of transistor 4C. This may be ensured by forming p+ region 78 to a relatively deep junction depth, and perhaps with a relatively lower doping concentration than that of n+ region 72. For example, p+ region 78 may be formed within a region that receives the p-well implant. It is contemplated that such characteristics for p+ region 78 are available in the feature set for the integrated circuit being formed into substrate 60 in this example.

In some implementations, p+ region 78 within n-well 64 may introduce some latchup vulnerability to the structure. However, it is contemplated that the presence of n-type buried layer 62 and plug 74 will generally prevent parasitic thyristor conduction in this embodiment of the invention, so latchup is likely to be of minimal concern in this implementation.

Diode 25, at the junction between n-well 64 and p+ region 78, thus provides protection for negative polarity (signal terminal PIN1 to PIN2) ESD events, by providing a low-impedance path for conduction in this direction. Should signal terminal PIN2 receive ESD energy of positive polarity relative to signal terminal PIN1, the p-n junction at p+ region 78 will forward bias relative to n-well 64. Current can then be safely conducted from p+ region 78 through n-well 64, to buried plug 74 and n+ region 72, to signal terminal PIN1. Functional circuitry 10 will thus be protected by diode 25 in this implementation.

As evident from a comparison of Figures 5a and 5b to Figure 3, the provision of p+ region 78, and thus diode 25, within n-well 64 provides important efficiencies in the fabrication of the integrated circuit. Because diode 25 is not isolated in its own well, as was isolated diode 15C in n-well 46 of Figure 3, there is no need for the well-to-well spacing WW. This saves significant chip area in the integrated circuit, especially considering that typical well-to-well spacing requirements are on the order of 15 to 20 µm in modern technology.
Considering that this spacing would be required for each implementation of the ESD structure, between each pair of signal terminals of the device, the chip area saved according to the present invention is substantial. In addition, the parasitic resistance necessitated by making connection over the well-to-well spacing is also eliminated, rendering improved device performance in response to ESD events.

Other configurations of the ESD protection structure, for protection between signal pins, are also contemplated according to this invention. These various configurations can include additional components, as desired for a particular manufacturing technology or to attain certain performance objectives. Figures 6a and 6b illustrate, in cross-section and plan views, respectively, an example of such an alternative configuration. Metal levels are not illustrated in the plan view of Figure 6b, for the sake of clarity.

As evident from Figures 6a and 6b, the ESD structure according to this embodiment of the invention incorporates bipolar transistor 4C and diode 25 (Figure 4) within a single n-well 164; in this example, n-well 164 overlies n-type buried layer 162, both formed as described above. P-well 166 is disposed within n-well 164, serving as the base of transistor 4C, and contains multiple n+ regions 168 serving as the emitter of transistor 4C. N+ regions 168 are connected to signal terminal PIN2 by way of a metal conductor (not shown). Signal terminal PIN2 is also connected to p+ regions 170, at the periphery of but within p-well 166, via a pair of polysilicon resistors 107C. The collector of transistor 4C at n-type buried layer 162 is contacted by buried plug 174, which is connected to signal terminal PIN1 as before.

In this embodiment of the invention, trigger 6C is implemented by way of a Zener diode, formed by way of n+ regions 175 formed into p-well 166 and connected to signal terminal PIN1. In the conventional manner, the Zener diode formed at the junction between n+ regions 175 and p-well 166 will break down at approximately a specified voltage, in response to a positive polarity ESD event at signal terminal PIN1 relative to signal terminal PIN2. Once this breakdown occurs, current will flow into p-well 166 from signal terminal PIN1 to signal terminal PIN2, via p+ regions 170 and resistors 107C, and via n+ regions 168. The emitter-base junction at n+ regions 168 and p-well 166 will become forward-biased, enabling collector-emitter current from signal terminal PIN1 via plug 174 and n-type buried layer 162, through the base in p-well 166 and out of the emitter at n+ regions 168.

For negative polarity ESD events (signal terminal PIN2 at a higher potential than signal terminal PIN1), diode 25 is provided in this embodiment of the invention. Specifically, the anode of diode 25 is formed by p+ regions 178 within n-well 164, connected to signal terminal PIN2 (by metal conductors, not shown). The cathode of diode 25 is provided by n-well 164 itself, to which contact is made from signal terminal PIN1 via buried plug contacts 174 and n-type buried layer 162. In this manner, a positive potential due to an ESD event at signal terminal PIN2, relative to signal terminal PIN1, will forward bias the junctions between p+ regions 178 and n-well 164, providing a safe conduction path to signal terminal PIN1 via n-type buried layer 162 and plug contacts 174.

If desired, and if available from the technology, p+ regions 178 may be formed within a p-type base implant region disposed into n-well 164.
This can provide a different characteristic for diode 25, particularly by increasing its reverse breakdown voltage.

Also in this embodiment of the invention, p+ regions 178 are surrounded by n+/p+ chains 180. These chains 180 are implemented by adjacent implanted n+ and p+ regions, alternating with one another along the horizontal direction (in Figure 6b). Both conductivity-type regions in chains 180 are connected to signal terminal PIN1 by way of a metal conductor (not shown). N+/p+ chains 180 provide added protection against latchup, by negating any parasitic p-n-p transistor action that may otherwise initiate laterally between p+ regions 178 and p-well 166, through n-well 164.

According to this embodiment of the invention, the ESD protection structure is implemented in a significantly smaller region of the integrated circuit than would be possible if the reverse-bias diode were to be isolated in its own well, as in conventional devices. The construction according to this invention eliminates the need for well-to-well spacing between the reverse polarity ESD protection diode and the forward polarity ESD protection bipolar transistor. This reduced chip area is especially important considering that such protection structures are to be implemented between each pair of signal terminals in the overall device. In addition, the smaller structure area also reduces the parasitic resistance of the conductors making contact to the protection elements, further improving device performance.

It will be apparent to those skilled in the art that other alternative implementations and modifications may also be used in connection with this invention. Specifically, different forward polarity structures may alternatively be used, including such devices as thyristors ("SCRs"), MOS transistors, and the like. In addition, it is contemplated that this invention will be useful in connection with a wide range of device types, including mixed-signal devices as noted above, as well as in pure digital and analog integrated circuits, fabricated by MOS, bipolar, BiCMOS, and other technologies. |
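Two quantitative relations from the foregoing description can be collected here. The first restates the diode 25 design constraint; the second is an illustrative estimate of the spacing eliminated, where the pair count for N mutually protected signal pins is elementary combinatorics rather than a figure from the source:

```latex
% Design constraint: diode 25 must not break down before transistor 4C triggers.
BV_{\mathrm{rev}}(\text{diode 25}) \;>\; BV_{CBO}(\text{transistor 4C})

% Area saving: one well-to-well spacing WW (15--20 um) is eliminated per
% protection structure, and one structure is needed per pair of signal pins:
\text{spacing eliminated} \;\approx\; \binom{N}{2} \cdot WW \;=\; \frac{N(N-1)}{2} \cdot WW
```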
Methods, apparatus, systems and articles of manufacture are described for an example event processor (308a-g) to retrieve an input event and an input event timestamp corresponding to the input event, and to generate an output event based on the input event and the input event timestamp in response to a determination that an input event threshold is exceeded within a threshold of time, and for an anomaly detector (304) to retrieve the output event, determine whether the output event indicates a threat to functional safety of a system on a chip (302), and, in response to determining the output event indicates a threat to functional safety of the system on a chip (302), adapt a process for the system on a chip (302) to preserve functional safety. |
CLAIMS
What Is Claimed Is:
1. An apparatus for a system on a chip (SoC) comprising: a set of hardware accelerators (HWAs); a set of event processors coupled to the HWAs such that each of the event processors is coupled to a respective HWA of the set of HWAs; and an anomaly detector coupled to the set of event processors.
2. The apparatus of claim 1, wherein an event processor in the set of event processors includes at least one input terminal, an input counter, programmable operators, an output counter, a statistics generator, and an output event generator.
3. The apparatus of claim 2, wherein the programmable operators include at least one of bitwise AND, bitwise OR, bitwise XOR, bitwise NOT, bitwise NAND, or bitwise NOR.
4. The apparatus of claim 1, the SoC including: a first bus to transmit data between the set of HWAs; and a second bus to transmit data between the set of event processors and the anomaly detector.
5. The apparatus of claim 1, the SoC including a processor coupled to the anomaly detector.
6. The apparatus of claim 1, the set of event processors to: retrieve an input event; and generate an output event based on the input event, in response to a determination that an input event threshold is exceeded within a threshold of time.
7. The apparatus of claim 1, wherein the set of event processors receives an input event, the input event being at least one of an external input to the SoC, an output from the SoC, an input to one of the HWAs in the set of HWAs, or an output from one of the HWAs in the set of HWAs.
8. The apparatus of claim 1, the anomaly detector to: retrieve an output event from an event processor in the set of event processors; and determine whether the output event indicates a threat to functional safety of the SoC.
9. The apparatus of claim 8, the anomaly detector to, in response to determining the output event indicates a threat to functional safety of the SoC, adapt a process for the SoC to preserve functional safety.
10. The apparatus of claim 9, the anomaly detector to: determine whether utilization of one of the HWAs in the set of HWAs is greater than a high risk threshold;
in response to the determination that the utilization of the HWA is greater than the high risk threshold, decrease the utilization of the HWA; determine whether the utilization of the HWA is less than a low risk threshold; in response to determining that the utilization of the HWA is less than the low risk threshold, increase the utilization of the HWA; and in response to determining that the utilization of the HWA is greater than the low risk threshold and less than the high risk threshold, perform no alteration to the utilization of the HWA.
11. A method for an event processor comprising: receiving a set of signals associated with processing by a hardware accelerator (HWA) on a system on a chip (SoC); performing a set of operations on the set of signals to determine whether an event occurred in the processing by the HWA; and providing a result of the set of operations that indicates whether the event occurred to an anomaly detector.
12. The method of claim 11, wherein the set of signals includes at least one of an external input to the SoC, an output from the SoC, an input to the HWA, or an output from the HWA.
13. The method of claim 11, further comprising performing a plurality of instances of the set of operations on respective sets of signals in parallel using a plurality of event processors, each associated with a respective HWA.
14. The method of claim 11, wherein the set of operations is performed using programmable operators that include at least one of bitwise AND, bitwise OR, bitwise XOR, bitwise NOT, bitwise NAND, or bitwise NOR.
15. The method of claim 11, including generating statistics based on the signals and the results of the operations, the statistics including a maximum, a minimum, an average, and a relationship between the set of signals and the result of the operations.
16. The method of claim 11, including a method for the anomaly detector including: retrieving the result of the operations; and determining whether the result of the operations indicates a threat to the SoC.
17. The method of claim 16, including, in response to determining the result of the operations indicates a threat to functional safety of the SoC, adapting a process for the SoC to preserve functional safety.
18. The method of claim 16, including: determining whether utilization of the HWA is greater than a high risk threshold;
in response to the determination that the utilization of the HWA is greater than the high risk threshold, decreasing the utilization of the HWA; determining whether the utilization of the HWA is less than a low risk threshold; in response to determining that the utilization of the HWA is less than the low risk threshold, increasing the utilization of the HWA; and in response to determining that the utilization of the HWA is greater than the low risk threshold and less than the high risk threshold, performing no alteration to the utilization of the HWA.
19. The method of claim 11, wherein the event processor can be programmably configured to detect events.
20. The method of claim 11, further including transmitting the results of the operations to the anomaly detector via a data bus. |
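Before the detailed description, the control flow recited in the claims above can be sketched in software. The following C fragment is a minimal illustration under stated assumptions, not the claimed hardware: the threshold values, window length, type names, and function names are invented here for explanation, and a bitwise AND stands in for whichever programmable operator of claims 3 and 14 is configured.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative values; the patent does not specify numeric thresholds. */
#define HIGH_RISK_THRESHOLD  0.90  /* fraction of HWA capacity */
#define LOW_RISK_THRESHOLD   0.30
#define EVENT_COUNT_LIMIT    8     /* input events allowed per window */
#define WINDOW_TICKS         1000  /* "threshold of time" in timer ticks */

/* Event processor state: a programmable operand plus counters, loosely
 * mirroring the input counter and operators of claim 2. */
typedef struct {
    uint32_t match_mask;   /* operand for the programmable bitwise op */
    uint32_t count;        /* input events seen in the current window */
    uint32_t window_start; /* timestamp opening the current window */
} event_processor_t;

/* Returns true (an "output event") when the input-event threshold is
 * exceeded within the time window, per the abstract and claim 6. */
bool ep_process(event_processor_t *ep, uint32_t signals, uint32_t now)
{
    if ((signals & ep->match_mask) == 0) /* bitwise AND as the operator */
        return false;                    /* not an event of interest */
    if (now - ep->window_start > WINDOW_TICKS) {
        ep->window_start = now;          /* start a new window */
        ep->count = 0;
    }
    return ++ep->count > EVENT_COUNT_LIMIT;
}

/* Anomaly detector: the utilization management of claims 10 and 18. */
void ad_manage_utilization(double utilization,
                           void (*decrease)(void), void (*increase)(void))
{
    if (utilization > HIGH_RISK_THRESHOLD)
        decrease();  /* above the high risk threshold: back off the HWA */
    else if (utilization < LOW_RISK_THRESHOLD)
        increase();  /* below the low risk threshold: raise utilization */
    /* otherwise, between the thresholds: no alteration */
}
```

In the claimed apparatus this logic runs per HWA in hardware, with output events carried to the anomaly detector over the dedicated event bus rather than returned from a function call.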
METHOD AND APPARATUS TO FACILITATE LOW LATENCY FAULT MITIGATION

[0001] This relates generally to processor management, and, more particularly, to a method and apparatus to facilitate low latency fault mitigation, quality of service (QoS) management and debug of a processing pipeline.

BACKGROUND

[0002] Oversight of a process operating within a processor has been conducted with software using interrupts that execute at specified times. At the interrupt points, the oversight determines whether the process has achieved the necessary tasks since the most recent interrupt. The software collects processor data and then determines whether any errors or system-wide failures have occurred based on the collected data.

BRIEF DESCRIPTION OF THE DRAWINGS

[0003] FIG. 1 is a block diagram of an example sensor data flow.
[0004] FIG. 2 is a block diagram of an example system on a chip utilizing the sensor processing data flow of FIG. 1.
[0005] FIG. 3 is a block diagram of a system on a chip including example event processors.
[0006] FIG. 4 is an example of a color conversion process utilizing the system on a chip of FIG. 2.
[0007] FIG. 5 is an example of a state machine reset process utilizing the system on a chip of FIG. 2.
[0008] FIG. 6 is an example signal diagram of various components of the system on a chip of FIG. 3.
[0009] FIG. 7 is a block diagram of an example event processor.
[0010] FIG. 8 is a diagram showing example operations of the anomaly detection of FIGS. 3 and 7.
[0011] FIG. 9 is a block diagram of an example of a system on a chip including the example of the event processor hardware.
[0012] FIG. 10 is an example timing diagram of a component of a system on a chip.
[0013] FIG. 11 is a diagram showing an example input stream from a register to the event processor.
[0014] FIG. 12 is a flowchart representative of machine readable instructions which may be executed to implement the event processor.
[0015] FIG. 13 is a block diagram of an example processing platform structured to execute the instructions of FIG. 12 to implement an example event processor.
[0016] FIG. 14 is a block diagram of an example software distribution platform to distribute software (e.g., software corresponding to the example computer readable instructions of FIG. 12) to client devices such as consumers (e.g., for license, sale and/or use), retailers (e.g., for sale, re-sale, license, and/or sub-license), and/or original equipment manufacturers (OEMs) (e.g., for inclusion in products to be distributed to, for example, retailers and/or to direct buy customers).
[0017] The figures are not to scale. Instead, the thickness of the layers or regions may be enlarged in the drawings. Although the figures show layers and regions with clean lines and boundaries, some or all of these lines and/or boundaries may be idealized. In reality, the boundaries and/or lines may be unobservable, blended, and/or irregular. In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

[0018] Sensor processing includes a wide array of processors, sensors, and controllers that interact to provide solutions in automotive, industrial, medical, consumer, and other applications. In an example, a processor takes in sensor data, manipulates the data, and takes various actions in response to the data. In some applications deemed safety critical, the processor may be required to perform the processing within a fixed amount of time.

[0019] However, when a processor component fails to complete a task in time or at all, prior approaches have had challenges measuring, tracking, and reporting the time taken for the steps of the task. For example, software-based timing approaches may run at the system level and, as such, may not have visibility into the various lower-level processing resources, such as individual hardware accelerators (HWAs). Thus, software-based timing approaches may not be able to locate a specific point of failure. Furthermore, software-based timing may be inaccurate, may have a long latency, and may overburden the processor. Hardware-based timing may be inflexible and limited to only a few signals within a processing path.

[0020] To address these issues and others, a system is provided that includes configurable event processors. Signals of interest can be routed to the event processors and used to control various configurable counters and timers. In this way, transitions in the signals can be timed and reported.

[0021] Unless specifically stated otherwise, descriptors such as "first," "second," "third," etc. are used herein without imputing or otherwise indicating any meaning of priority, physical
order, arrangement in a list, and/or ordering in any way, but are merely used as labels and/or arbitrary names to distinguish elements for ease of understanding the described examples. In some examples, the descriptor "first" may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor such as "second" or "third." In such instances, such descriptors are used merely for identifying those elements distinctly that might, for example, otherwise share a same name.

[0022] As used herein, “substantially real time” refers to occurrence in a near instantaneous manner recognizing there may be real world delays for computing time, transmission, etc. Thus, unless otherwise specified, “substantially real time” refers to real time +/- 1 second.

[0023] FIG. 1 is a block diagram of an example vision data flow on a system on a chip. As shown in FIG. 1, the data flow of a vision pipeline in a system on a chip begins with capture of data by, in some instances, a camera interface 104. In these examples, the captured data could consist of an image, figure, or any other data type.

[0024] The captured data is sent for analysis and processing by a set of HWAs such as an imaging unit 108, a vision primitive unit 112, a deep learning unit 116, and a computer vision unit 120.

[0025] After analysis and processing, the data is sent to a display controller 124 and to a host CPU interface 128. The display controller 124 could interface to a monitor, television, or other graphical display interface. The host CPU interface 128 provides an interface to facilitate a user action, CPU action, or other action affecting the analyzed data.

[0026] FIG. 2 is a block diagram of an example sensor processing data flow 201, represented by a line, that may be carried out on a system on a chip (SoC) 204. The data flow 201 may be a process that incorporates at least one of the components on the SoC 204. In the approach of FIG. 2, an overseer 205 is implemented on a processor 206. The processor 206 can transmit software events to various components of the SoC 204 via a data bus 208. The software events may be data requests, interrupts, and/or timestamps. The overseer 205 utilizes the software events to determine whether an error or system-wide failure has occurred in the data flow 201.

[0027] In this example, the data flow 201 begins with collection of input data by the camera interface 104. In some examples, the input data may be stored in on-chip memory 210 or external memory 212. Various components on the SoC 204 conduct vision processing and analysis on the input data. These components may include, but are not limited to, imaging unit 108, vision primitive unit 112, deep learning unit 116, computer vision unit 120, and display controller 124. In some examples, the various components may save the output data from the analysis or processing to on-chip memory 210 or external memory 212. The overseer 205 may
collect information from the various components on the SoC via the processor 206 and data bus 208 through the utilization of data requests and system interrupts. The overseer 205 determines whether an error threshold has been surpassed based on the collected information. The approach detailed above may have a high latency and may require a long time to determine whether an error threshold has been surpassed.

[0028] FIG. 3 is a block diagram of an example SoC 302 to facilitate low latency fault mitigation, quality of service (QoS) management and debug of a processing pipeline. The example SoC 302 of FIG. 3 includes a set of HWAs (e.g., the example camera interface 104, the example imaging unit 108, the example vision primitive unit 112, the example deep learning unit 116, the example computer vision unit 120, the example display controller 124, the example data bus 208, and the example on-chip memory 210, similar to those described above). The example SoC 302 of FIG. 3 further includes an example anomaly detector 304, an example processor 306, an example set of event processors 308 (e.g., event processors 308a-g), an example event bus 312, and an example set of Input/Output (IO) pins 316.

[0029] In contrast to other examples, such as the SoC 204 of FIG. 2 where events and corresponding errors are detected by a processor 206 monitoring all data flow to and from the HWAs across the bus 208, the SoC 302 provides localized, configurable event detection. In that regard, each of the HWAs includes a corresponding event processor 308b-308g coupled to the respective HWA. The corresponding event processor 308b-308g can be programmably configured to detect events as well as relationships between events in real time by monitoring the inputs and outputs of the HWA and other signals (such as IO signals received by IO pins 316). This relieves the processor 306 from the burden of actively monitoring the bus 208. When one of the event processors 308b-308g detects an event, it provides a corresponding signal over a dedicated event bus 312, which further relieves the pressure on bus 208. The signal is received by the anomaly detector 304 via its own respective event processor 308, and the anomaly detector 304 determines a response to the event. As the anomaly detector 304 may be a separate resource, this may further reduce the load on processor 306.

[0030] In some examples, the anomaly detector 304 receives an output event from the event processor 308a-g via the processor 306 and the example event bus 312 and determines, based on the output event, whether the SoC 302 is acting within the confines of functional safety or an error and/or system-wide failure has occurred. In response to the analysis of the output event, the anomaly detector 304 may alter the usage of at least one of the various components on the SoC 302 to limit future errors from occurring. The anomaly detector 304 may utilize statistics from previous output events to determine the alteration. In these examples, the anomaly detector 304
may determine whether at least one of the components on the SoC 302 is operating above a high risk threshold. In response to determining the at least one component is operating above the high risk threshold, the anomaly detector 304 may reduce the usage of the at least one component to prevent more errors. In some examples, the anomaly detector 304 may determine whether at least one component on the SoC 302 is operating below a low risk threshold. In response to determining the at least one component is operating below the low risk threshold, the anomaly detector 304 may increase the usage of the at least one component.[0031] The various components of the SoC 302 transmit data to and retrieve data from the data bus 208 via the at least one event processor 308a-g. The at least one event processor 308a-g collects a copy of the data that passes through it, forwards the original data to its intended destination, and analyzes the copy. The data can include inputs to the components, outputs from the components, external inputs to the SoC 302, outputs from the SoC 302, event statistics, event timestamps, and processing events. The data is not limited to just these examples. An event processor 308a-g may analyze the sampled data and determine whether a threshold value has been met. In response to the threshold value being met, the event processor 308a-g generates an output event and transmits the output event to the anomaly detector 304 via the event bus 312.
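For illustration, the following is a minimal behavioral sketch of the sample-and-threshold behavior of [0031]. The class names, the per-source sample counting, and the threshold value are hypothetical assumptions of the sketch; the event processors 308a-g are hardware blocks, not software objects.

```python
# A minimal sketch, assuming events are modeled as (source, data) samples.
from dataclasses import dataclass, field

@dataclass
class OutputEvent:
    source: str      # which monitored signal met its threshold
    count: int       # number of samples observed for that source
    timestamp: int   # clock cycle at which the threshold was met

@dataclass
class EventProcessorModel:
    threshold: int                          # pre-determined threshold value
    counts: dict = field(default_factory=dict)

    def observe(self, source: str, data, destination: list, timestamp: int):
        """Copy the data, forward the original, and check the threshold."""
        destination.append(data)            # original continues to its destination
        self.counts[source] = self.counts.get(source, 0) + 1
        if self.counts[source] >= self.threshold:
            self.counts[source] = 0
            return OutputEvent(source, self.threshold, timestamp)  # to event bus 312
        return None
```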
[0032] The transmission of software events between the event processors 308a-g and the anomaly detector 304 occurs on the event bus 312, which is separate from the data bus 208 of FIG. 3. Inclusion of the anomaly detector 304, the event processors 308a-g, and the event bus 312 allows for lower latency analysis of anomalies and a quicker reaction time to anomaly detection in the SoC 302 because the anomaly detector 304 can monitor all input and output events from the event processors 308a-g at a regular interval via the dedicated event bus 312. The event bus 312 is dedicated to the transmission of data between the event processors 308a-g and the anomaly detector 304. The event bus 312 allows for the transmission of data between the anomaly detector 304 and the event processors 308a-g to occur much faster than in an example SoC not including the event bus 312. A faster transmission results in a reduced latency and a faster response to possible errors in the SoC 302. In comparison to the example SoC 204 of FIG. 2, the regular interval is much shorter than the interval between interrupts that the SoC 204 utilizes.[0033] The SoC 302 includes IO pins 316. The IO pins 316 may transmit data to the various components on the SoC 302 via the event bus 312. The IO pins 316 may retrieve data from the various components on the SoC 302 via the event bus 312.[0034] In operation, the at least one event processor 308a-g collects a sample of data transmitted through the at least one event processor 308a-g. The at least one event processor 308a-g may create an output event based on the sample of data. The at least one event processor 308a-g transmits the output event to the anomaly detector 304 via the event bus 312 and the processor 306. The anomaly detector 304 retrieves the output event. The anomaly detector 304 determines whether the output event indicates an error or system-wide failure. In response to determining that the output event indicates an error or system-wide failure, the anomaly detector 304 may alter the usage of at least one of the components on the SoC 302.[0035] The components on the SoC 302 that the anomaly detector 304 may alter include, but are not limited to, the camera interface 104, the imaging unit 108, the vision primitive unit 112, the deep learning unit 116, the computer vision unit 120, the display controller 124, the on-chip memory 210, and the processor 306. The anomaly detector 304 may utilize statistics from previous output events to determine how to alter the usage of at least one component on the SoC 302. The statistics provide the anomaly detector 304 with information including the usage of the at least one component on the SoC 302.[0036] FIG. 4 is an example process that may utilize the SoC 204 or the SoC 302. The example of FIG. 4 details a process for a pixel processing color conversion pipeline which processes line buffers. A gamma table 404 is used to configure the final gamma after a color conversion process. In this example, the gamma table 404 writes the output to an IPIPE 408. In an ideal instance, the gamma table 404 writes to a location in the IPIPE 408 with a valid write command 412. In an error instance, the gamma table 404 writes a hang command 416 to a region outside the IPIPE 408 in a line buffer zone 420. In these examples, the hang command 416 is a command that causes the system to hang, or malfunction.[0037] In the example of FIG. 2, the overseer 205 may be unable to acquire a clear view during the valid write command 412 and the hang command 416 due to timing margins. In some examples, the overseer 205 is unable to determine that the hang command 416 has occurred before the hang command 416 finalizes or disrupts other processes. In contrast, in the example of FIG. 4, the anomaly detector 304 and the event processors 308a-g are able to accurately monitor the hang command 416 because, in an example SoC including the anomaly detector 304 and event processors 308, the gamma table 404 transmits the hang command 416 through the example event processors 308a-g. Thus, the event processors 308a-g recognize an error within the hang command 416 and transmit an output to the anomaly detector 304 to be analyzed. Accordingly, while the arrangement of FIG. 2 is limited in its ability to detect such errors, the arrangement of FIG. 3 is not limited in the same manner. In response to the information from the event processors 308a-g, the anomaly detector 304 determines the error related to the hang
command 416 and halts the hang command 416.[0038] Debugging such issues may require careful review of source code, re-structuring of processes, and iterative, long-duration testing, causing production to stall. In a live environment, source code access may not be available. This increases the difficulty of debugging and may increase resolution time by multitudes for an SoC such as the SoC 204 of FIG. 2. The SoC 302 of FIG. 3 allows for statistic generation during runtime, which shortens debugging and resolution time.[0039] In the example of FIG. 5, a receiver 504 uses an LP00 to LP11 transition 508 on a differential input/output pair to trigger an internal state machine 512 to reset. To avoid dependency on the external LP00 to LP11 trigger, the SoC 204 implemented logic on the input/outputs. In the example of FIG. 5, the logic carried out by the SoC 204 was broken and, thus, the LP11 state transition became a function of an external driver. The timing at which sensors drive LP00 and the timing at which an internal software trigger is scheduled become interdependent because of the LP11 transition fault. As a result, a race condition is introduced between the LP00 transition and the internal software trigger. In a condition where the LP00 transition wins the race, the internal software trigger is unable to notify the overseer 205. In these instances, the LP00 transition occurs and a fault occurs. Only with changes to the SoC 204 itself can the race condition be prevented from occurring at all.[0040] In contrast, on the example SoC 302 of FIG. 3, which includes the anomaly detector 304 and the event processors 308a-g, the race condition could be prevented. The event processors 308a-g allow for I/O observability and identification of a fault in the LP00 to LP11 transition 508 by the anomaly detector 304. The anomaly detector 304 may halt the process in response to the identification of the fault.[0041] FIG. 6 is an example signal diagram. A processor internal clock (PICLK) 602 signal represents the clock of the processor 306 on the SoC 302. The PICLK 602 provides a clock rate for the SoC 302 and dictates the frequency at which components on the SoC 302 may operate. Components on the SoC 302 may alter, output, or operate at the change from high to low, or low to high, on the PICLK 602.[0042] The signal diagram of FIG. 6 includes signal charts for four example events (e.g., Event[0] 604a, Event[1] 604b, Event[2] 604c, Event[3] 604d). Event[3:0] 604a-d may be inputs, outputs, triggers, or operations performed by a component on the SoC 302. When an event occurs, for instance an event associated with signal Event[0] 604a, the respective signal changes from a value of zero to a value of one for some pre-determined value of time. This pre-determined value of time can be how long the event is active, a certain number of clock
cycles, or a certain amount of time.[0043] G[3:0] 606 and LTS[7:0] 607 work together to count the number of clock cycles 608a-c between event occurrences in Event[3:0] 604a-d. LTS[7:0] 607 counts in binary to a max of 255. If an event occurrence in one of Event[3:0] 604a-d does not occur by the time LTS[7:0] 607 reaches 255 (e.g., 256 clock cycles have transpired by PICLK 602), LTS[7:0] 607 resets to 128 and G[3:0] 606 increases by one. If an event occurs in one of Event[3:0] 604a-d, the values of LTS[7:0] 607 and G[3:0] 606 reset to zero. In some examples, the value of LTS[7:0] 607 takes one plus the current value of G[3:0] 606 clock cycles to increase by one. For instance, if the current value of G[3:0] 606 is one and the current value of LTS[7:0] 607 is 140, two clock cycles must pass before the value of LTS[7:0] 607 increases to 141. The LTS[7:0] 607 takes one plus the current value of G[3:0] 606 clock cycles to increase by one to prevent G[3:0] 606 from overflowing. The time required for G[3:0] 606 to overflow increases dramatically when compared to an example where LTS[7:0] 607 takes one clock cycle to increase without checking G[3:0] 606.[0044] When an event occurs, an event processor 308 acquires knowledge of the event occurring, as the component on the SoC 302 must transmit the event through the event processors 308a-g. The event processors 308a-g acquire statistics pertaining to the event, such as a timestamp, the event input, the event output, the event runtime, and the clock cycles since a last event. The event processor 308 includes knowledge about when a possible error could have occurred. For example, the event processor 308 may know that when Event[0] 604a triggers, at least three hundred clock cycles must pass before Event[1] 604b triggers. In the example of FIG. 6, Event[1] 604b triggers 259 clock cycles after Event[0] 604a. In this example, the event processor 308 identifies that a possible error has occurred and generates an output to transmit to the anomaly detector 304 to analyze. The anomaly detector 304 may alter the process on the SoC 302 based on the output.
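The counting scheme of [0043] and the inter-event check of [0044] can be sketched in software as follows. This is a behavioral model only; the 300-cycle minimum gap comes from the example above, while the class name and the internal stall counter are assumptions of the sketch.

```python
# Behavioral sketch of the LTS[7:0]/G[3:0] counters and the inter-event check.
class LogTimestampCounter:
    def __init__(self):
        self.lts = 0     # LTS[7:0]: counts in binary to a max of 255
        self.g = 0       # G[3:0]: increases when LTS wraps without an event
        self.stall = 0   # tracks the 1 + G cycles needed per LTS increment

    def tick(self):
        """Advance one PICLK 602 cycle."""
        self.stall += 1
        if self.stall < 1 + self.g:   # LTS takes 1 + G cycles to increase by one
            return
        self.stall = 0
        if self.lts == 255:           # no event within the current span
            self.lts = 128            # reset to 128 and bump G[3:0]
            self.g += 1
        else:
            self.lts += 1

    def event(self):
        """An occurrence in Event[3:0] resets both counters to zero."""
        self.lts = self.g = self.stall = 0

MIN_GAP_CYCLES = 300   # from the example: Event[1] must trail Event[0] by 300 cycles

def gap_violated(cycles_since_event0: int) -> bool:
    """True for the FIG. 6 case, where Event[1] fires only 259 cycles later."""
    return cycles_since_event0 < MIN_GAP_CYCLES
```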
[0045] FIG. 7 is a block diagram of an example of the event processor 308 hardware. The event processor 308 receives input from various components of the SoC 302, including signals to and from the respective HWA, and determines whether an anomaly has occurred in the SoC 302 process based on the inputs. The event processor 308 generates an output event when an anomaly threshold has been met and transmits the output event to an example anomaly detector 304. The event processor 308 includes example input terminals 704a-c, an example input counter 708, example programmable operators 712, an example output counter 716, an example statistics generator 720, and an example output event generator 728.[0046] The event processor 308 includes at least one input terminal 704a-c. Input terminals 704a-c can retrieve external inputs to the SoC 302, outputs from the SoC 302, inputs to various components on the SoC 302, or outputs from various components on the SoC 302. The event processor 308 retrieves the at least one input via the input terminals 704a-c.[0047] The example input counter 708 of the event processor 308 retrieves information via the input terminals 704a-c. The input counter 708 may be programmed to count transitions, duration, and/or other aspects of the respective signals received by the input terminals 704a-c. The input counter 708 may also be programmed to perform thresholding of each of the respective counts, and in response to an input count threshold being met, the input counter 708 may transmit a corresponding count signal to a set of programmable operators 712 and an example statistics generator 720. Also, in these examples, the input counter 708 transmits the inputs to an example anomaly detector 304.[0048] The programmable operators 712 of the event processor 308 receive the first set of count signals produced by the input counter 708. The programmable operators 712 conduct operations on the first set of count signals to detect relationships between the different count signals. In this way, the event processor 308 can provide a configurable event detection algorithm specific to its HWA without processor involvement. In some examples, the programmable operators 712 include programmable logic such as ANDs, NORs, NANDs, and ORs. In other examples, the programmable operators 712 include or do not include at least one of the ANDs, NORs, NANDs, and/or ORs. The programmable operators 712 conduct the operations on the first set of count signals and transmit a set of event detection signals to an output counter 716.[0049] The example output counter 716 of the event processor 308 retrieves the event detection signals that result from the operations of the programmable operators 712. In some examples, the output counter 716 may be programmed to count transitions, durations, and/or other aspects of the event detection signals and determine whether an output threshold has been satisfied. In these examples, the output counter 716 transmits a resulting second set of count signals to an example output event generator 728. Also in these examples, the output counter 716 transmits the second set of count signals to the statistics generator 720.
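One way to picture the FIG. 7 pipeline in software is the sketch below: input counters feed programmable operators, whose event detection signals are counted against an output threshold. The transition-counting criterion, the threshold values, and the two-input AND configuration are hypothetical choices for the sketch, not the register-level behavior of the described hardware.

```python
# Structural sketch of the input counter 708 -> programmable operators 712 ->
# output counter 716 chain. All names and parameters are illustrative.
from typing import Callable, List

class TransitionCounter:
    """Counts signal transitions and reports when a programmed threshold is met."""
    def __init__(self, threshold: int):
        self.threshold = threshold
        self.count = 0
        self.last = 0

    def update(self, signal: int) -> bool:
        if signal != self.last:       # count each 0->1 or 1->0 transition
            self.count += 1
            self.last = signal
        return self.count >= self.threshold

class EventProcessorPipeline:
    def __init__(self, input_thresholds: List[int], output_threshold: int,
                 operator: Callable[[List[bool]], bool]):
        self.input_counters = [TransitionCounter(t) for t in input_thresholds]
        self.operator = operator                       # e.g., AND/OR/NAND/NOR
        self.output_counter = TransitionCounter(output_threshold)

    def step(self, signals: List[int]):
        counts = [c.update(s) for c, s in zip(self.input_counters, signals)]
        detected = self.operator(counts)               # relationship between counts
        if self.output_counter.update(int(detected)):  # output threshold check
            return {"count_signals": counts, "detected": detected}
        return None

# Hypothetical configuration: flag when both monitored signals have each seen
# four transitions, combined with a logical AND.
pipeline = EventProcessorPipeline([4, 4], 1, operator=lambda c: all(c))
```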
[0050] The statistics generator 720 of the event processor 308 retrieves the inputs, various intermediate signals (such as the first set of count signals from the input counter 708 and/or the second set of count signals from the output counter 716), and a current timestamp from an example global timestamp counter 724. The statistics generator 720 generates statistics based on the retrieved data. In some examples, the statistics generator 720 creates data detailing when inputs are received, when thresholds are met, and what the event processor 308 has output. In these examples, the statistics generator 720 includes the timestamp of the events occurring by retrieving the current timestamp from the global timestamp counter 724. In some examples, the statistics include, but are not limited to, time relations between input events, output events, and all events. The statistics may also include a minimum, a maximum, an average, and a histogram generation for selected events.[0051] The example global timestamp counter 724 contains a counter indicating the current timestamp for the SoC 302. In response to the statistics generator 720 requesting the current timestamp, the global timestamp counter 724 transmits the current time in a timestamp to the statistics generator 720.[0052] The example output event generator 728 of the event processor 308 generates output events based on the results of the operations of the programmable operators 712 in response to receiving a notification from the output counter 716 that an output threshold has been satisfied. The output event generated by the output event generator 728 can include information indicating the inputs that triggered the output threshold, the current timestamp, and the operations that were conducted. The output event generator 728 transmits the output event to the example anomaly detector 304.[0053] The anomaly detector 304 retrieves the inputs from the input counter 708 and the output events from the output event generator 728. The anomaly detector 304 analyzes the inputs and the output event to determine if an anomaly has occurred and, in response to determining an anomaly has occurred, to determine if the anomaly is an error. In response to determining that the anomaly is an error, the anomaly detector 304 can alter the process taking place in the SoC 302 to improve performance of the process in the SoC 302. In some examples, the anomaly detector 304 is a state machine. In other examples, the anomaly detector 304 is an auto-encoder.[0054] FIG. 8 is a diagram of possible embodiments of the anomaly detector 304 of FIG. 7. In some examples, the anomaly detector 304 is a state machine 804. In other examples, the anomaly detector 304 is an auto-encoder 808. The state machine 804 consists of a plurality of possible states that the anomaly detector 304 can be in. Based on inputs to the anomaly detector 304, the state machine 804 can change states. In some examples, the inputs cause the state machine 804 to move to a state that indicates an error has occurred. In these examples, the anomaly detector 304 outputs that an error has occurred and determines a way to alter the process in the SoC 302 to prevent the error from occurring further.[0055] In other examples, the anomaly detector 304 is an auto-encoder 808. The auto-encoder 808 is an artificial neural network that learns a compressed representation of a set of data. In these examples, the set of data is input to the anomaly detector 304. The auto-encoder 808 can identify when an error has occurred based on the learned representations of the input. In response to the auto-encoder 808 determining an error has occurred, the anomaly detector 304 outputs a statement showing that an error has occurred and alters the process in the SoC 302 to prevent the error from occurring further.
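As a loose illustration of the auto-encoder variant, the sketch below flags an input vector whose reconstruction error is large. The random weights, network shape, and error limit are placeholders; in a real system the weights would be trained on nominal event statistics, and the source specifies none of these values.

```python
# Hedged sketch of auto-encoder-based anomaly detection (auto-encoder 808).
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(8, 3))   # placeholder encoder: 8 statistics -> 3 latent
W_dec = rng.normal(size=(3, 8))   # placeholder decoder: 3 latent -> 8 outputs
ERROR_LIMIT = 5.0                 # hypothetical reconstruction-error threshold

def is_anomalous(stats: np.ndarray) -> bool:
    """A large reconstruction error marks the event statistics as anomalous."""
    latent = np.tanh(stats @ W_enc)        # compressed representation of the data
    reconstruction = latent @ W_dec
    error = float(np.sum((stats - reconstruction) ** 2))
    return error > ERROR_LIMIT
```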
[0056] FIG. 9 is a block diagram of an example SoC 904 including the example event processor 308 hardware. In the provided example, the SoC 904 includes the example camera interface 104, the example imaging unit 108, the example vision primitive unit 112, the example display controller 124, example event processors 308c, 308d, 308f for some of the components, and example IO pins 316. The example event processors 308c, 308d, 308f capture inputs from various components on the SoC 904 via the input terminals 704a-c and determine whether an error has occurred. The captured inputs transmit through the event processors 308c, 308d, 308f before reaching an intended target (e.g., the imaging unit 108, the vision primitive unit 112, the display controller 124, etc.). The SoC 904 is an example system on a chip. Other SoCs may include or not include any of the components shown in previous figures.[0057] FIG. 10 is an example timing diagram of a hardware accelerator (HWA). In the timing diagram, an example HWA is one of the components on the SoC 302 (e.g., the camera interface 104, the imaging unit 108, the vision primitive unit 112, the deep learning unit 116, the computer vision unit 120, the display controller 124, etc.). The anomaly detector 304 utilizes information concerning when the HWA is active to determine how to optimize the process on the SoC 302. The anomaly detector 304 determines when the HWA was active 1004, 1008 or inactive 1012. In some instances, an HWA is active 1004 for an extended period. In other instances, an HWA alternates between active 1008 and inactive 1012 states. The anomaly detector 304 determines whether the utilization of the HWA is greater than a high risk threshold. If the anomaly detector 304 determines that an HWA utilization is greater than the high risk threshold, the anomaly detector 304 reduces the usage of the specified HWA. The anomaly detector 304 also determines whether the utilization of an HWA is less than a low risk threshold. In response to determining that the utilization of an HWA is less than the low risk threshold, the anomaly detector 304 increases the usage of the HWA.[0058] In response to determining that the HWA usage is less than the high risk threshold and more than the low risk threshold, the anomaly detector 304 does not alter the utilization of the HWA. If the anomaly detector 304 determines that an alteration to the usage of the HWA is necessary, the anomaly detector 304 alters the usage of the HWA based on the threshold being satisfied and statistics generated from previous anomaly events.
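The utilization policy of [0057] and [0058] reduces to a three-way comparison, sketched below. The specific threshold values are hypothetical; the source describes only the high risk and low risk comparisons, not their magnitudes.

```python
# Sketch of the HWA utilization policy: throttle above the high risk threshold,
# boost below the low risk threshold, otherwise leave the HWA unchanged.
HIGH_RISK = 0.90   # hypothetical fraction of cycles spent active
LOW_RISK = 0.20    # hypothetical

def adjust_utilization(active_cycles: int, total_cycles: int) -> str:
    utilization = active_cycles / total_cycles
    if utilization > HIGH_RISK:
        return "decrease"    # reduce usage of the HWA to prevent more errors
    if utilization < LOW_RISK:
        return "increase"    # the HWA has headroom; increase its usage
    return "no_change"       # within the acceptable band; no alteration
```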
[0059] FIG. 11 is a diagram showing an example system flow between registers. In some examples, task events 1104a-e are transmitted via a memory-mapped register (MMR) 1108. The MMR 1108 may be an example of the on-chip memory 210 or the external memory 212 of FIGS. 2 and 3. The task events 1104a-e include, but are not limited to, tasks beginning 1104a, 1104b, and 1104d, tasks completing 1104c and 1104e, and tasks failing. The MMR 1108 then transmits the task events to an example system event trace module (CTSET) 1112. In some examples, the example SoC 302 of FIG. 3 includes a CTSET to determine the origins of events. The example CTSET 1112 accesses the task events 1104a-e and determines the origins of the task events 1104a-e. The CTSET 1112 provides a mechanism to enable software debug on multi-core and distributed systems. The CTSET 1112 also allows for real-time profiling of specific tasks or transactions across cores with a singular timestamp. Transmitting the task events 1104a-e via the MMR 1108 allows the CTSET to retrieve the task events 1104a-e. The CTSET 1112 then transmits the task events 1104a-e to an example event processor 308 for quality analysis. FIG. 11 is an example of a system that can be monitored by an event processor 308. The system of FIG. 11 benefits from the improvements of the SoC 302 due to the addition of the event processors 308a-g.[0060] While an example manner of implementing the event processor 308 of FIG. 3 is illustrated in FIG. 7, one or more of the elements, processes and/or devices illustrated in FIG. 7 may be combined, divided, re-arranged, omitted, eliminated and/or implemented in any other way. Further, the example input terminals 704a-c, the example input counter 708, the example programmable operators 712, the example output counter 716, the example statistics generator 720, the example global timestamp counter 724, the example output event generator 728, the example anomaly detector 304 and/or, more generally, the example event processor 308 of FIG. 3 may be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware. Thus, for example, any of the example input terminals 704a-c, the example input counter 708, the example programmable operators 712, the example output counter 716, the example statistics generator 720, the example global timestamp counter 724, the example output event generator 728, the example anomaly detector 304 and/or, more generally, the example event processor 308 could be implemented by one or more analog or digital circuit(s), logic circuits, programmable processor(s), programmable controller(s), graphics processing unit(s) (GPU(s)), digital signal processor(s) (DSP(s)), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)).[0061] When reading any of the apparatus or system claims of this patent to cover a purely software and/or firmware implementation, at least one of the example input terminals 704a-c,
the example input counter 708, the example programmable operators 712, the example output counter 716, the example statistics generator 720, the example global timestamp counter 724, the example output event generator 728, the example anomaly detector 304 is/are hereby expressly defined to include a non-transitory computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), a Blu-ray disk, etc. including the software and/or firmware. Further still, the example event processor 308 of FIG. 3 may include one or more elements, processes and/or devices in addition to, or instead of, those illustrated in FIG. 7, and/or may include more than one of any or all of the illustrated elements, processes and devices.[0062] As used herein, the phrase “in communication,” including variations thereof, encompasses direct communication and/or indirect communication through one or more intermediary components, and does not require direct physical (e.g., wired) communication and/or constant communication, but rather additionally includes selective communication at periodic intervals, scheduled intervals, aperiodic intervals, and/or one-time events.[0063] A flowchart representative of example hardware logic, machine readable instructions, hardware implemented state machines, and/or any combination thereof for implementing the example event processor 308 of FIG. 3 is shown in FIG. 12. The machine readable instructions may be one or more executable programs or portion(s) of an executable program for execution by a computer processor and/or processor circuitry, such as the processor 1312 shown in the example processor platform 1300 described below in connection with FIG. 13. The program may be embodied in software stored on a non-transitory computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a DVD, a Blu-ray disk, or a memory associated with the processor 1312, but the entire program and/or parts thereof could alternatively be executed by a device other than the processor 1312 and/or embodied in firmware or dedicated hardware.[0064] Further, although the example program is described with reference to the flowchart illustrated in FIG. 12, many other methods of implementing the example event processor 308 may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined. Additionally or alternatively, any or all of the blocks may be implemented by one or more hardware circuits (e.g., discrete and/or integrated analog and/or digital circuitry, an FPGA, an ASIC, a comparator, an operational-amplifier (op-amp), a logic circuit, etc.) structured to perform the corresponding operation without executing software or firmware. The processor circuitry may be distributed in different network locations and/or local to one or more devices (e.g., a multi-
core processor in a single machine, multiple processors distributed across a server rack, etc.). [0065] The machine readable instructions described herein may be stored in one or more of a compressed format, an encrypted format, a fragmented format, a compiled format, an executable format, a packaged format, etc. Machine readable instructions as described herein may be stored as data or a data structure (e.g., portions of instructions, code, representations of code, etc.) that may be utilized to create, manufacture, and/or produce machine executable instructions. For example, the machine readable instructions may be fragmented and stored on one or more storage devices and/or computing devices (e.g., servers) located at the same or different locations of a network or collection of networks (e.g., in the cloud, in edge devices, etc.). The machine readable instructions may require one or more of installation, modification, adaptation, updating, combining, supplementing, configuring, decryption, decompression, unpacking, distribution, reassignment, compilation, etc. to be directly readable, interpretable, and/or executable by a computing device and/or other machine. For example, the machine readable instructions may be stored in multiple parts, which are individually compressed, encrypted, and stored on separate computing devices, wherein the parts when decrypted, decompressed, and combined form a set of executable instructions that implement one or more functions that may together form a program such as that described herein.[0066] In another example, the machine readable instructions may be stored in a state in which they may be read by processor circuitry, but require addition of a library (e.g., a dynamic link library (DLL)), a software development kit (SDK), an application programming interface (API), etc. in order to execute the instructions on a particular computing device or other device. In another example, the machine readable instructions may need to be configured (e.g., settings stored, data input, network addresses recorded, etc.) before the machine readable instructions and/or the corresponding program(s) can be executed in whole or in part. Thus, machine readable media, as used herein, may include machine readable instructions and/or program(s) regardless of the particular format or state of the machine readable instructions and/or program(s) when stored or otherwise at rest or in transit.[0067] The machine readable instructions described herein can be represented by any past, present, or future instruction language, scripting language, programming language, etc. For example, the machine readable instructions may be represented using any of the following languages: C, C++, Java, C#, Perl, Python, JavaScript, HyperText Markup Language (HTML), Structured Query Language (SQL), Swift, etc.[0068] As mentioned above, the example processes of FIG. 12 may be implemented using executable instructions (e.g., computer and/or machine readable instructions) stored on a non-
transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media.[0069] “Including” and “comprising” (and all forms and tenses thereof) are used herein to be open ended terms. Thus, whenever a claim employs any form of “include” or “comprise” (e.g., comprises, includes, comprising, including, having, etc.) as a preamble or within a claim recitation of any kind, additional elements, terms, etc. may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase "at least" is used as the transition term in, for example, a preamble of a claim, it is open-ended in the same manner as the term "comprising" and “including” are open ended. The term “and/or” when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, and (7) A with B and with C. As used herein in the context of describing structures, components, items, objects and/or things, the phrase "at least one of A and B" refers to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing structures, components, items, objects and/or things, the phrase "at least one of A or B" refers to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase "at least one of A and B" refers to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase "at least one of A or B" refers to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B.[0070] As used herein, singular references (e.g., “a”, “an”, “first”, “second”, etc.) do not exclude a plurality. The term “a” or “an” entity, as used herein, refers to one or more of that entity. The terms “a” (or “an”), “one or more”, and “at least one” can be used interchangeably herein. Furthermore, although individually listed, a plurality of means, elements or method
actions may be implemented by, e.g., a single unit or processor. Additionally, although individual features may be included in different examples or claims, these may possibly be combined, and the inclusion in different examples or claims does not imply that a combination of features is not feasible and/or advantageous.[0071] FIG. 12 is a flowchart representative of machine readable instructions which may be executed to implement the event processor 308. The process begins at block 1208, where the event processor 308 receives and/or otherwise retrieves at least one input via the input terminals 704a-c. The input counter 708 retrieves the inputs via the input terminals 704a-c. The process proceeds to block 1212.[0072] At block 1212, the input counter 708 determines whether an input threshold has been satisfied. The input threshold can be a pre-determined value based on the process running on the SoC 302. In response to determining that the input threshold has been satisfied, the process proceeds to block 1216. In response to determining that the input threshold has not been satisfied, the process returns to block 1208.[0073] At block 1216, the programmable operators 712 conduct operations on the input events to determine an output value. The operations conducted on the input events include, but are not limited to, ANDs, ORs, NANDs, and/or NORs. Not all of the operations must be included. The programmable operators 712 determine which operations to conduct on the input events based on the source of the input event. For instance, an input event from the imaging unit 108 would have different operations conducted on it than an input event from a source external to the SoC 302. The process proceeds to block 1220.[0074] At block 1220, the output counter 716 counts the output value and determines whether an output threshold has been satisfied. The output threshold is based on a pre-determined value that is based on the processes being conducted on the SoC 302. The output value satisfies the output threshold when a specified amount of output value is reached within a specified amount of time. This threshold and time value are not limited to one value. In response to determining that the output threshold has been satisfied, the process proceeds to block 1224. In response to determining that the output threshold has not been satisfied, the process returns to block 1208. [0075] At block 1224, the output event generator 728 generates an output event. The output event is based on the input and the output value. The output event generator 728 transmits the output event to the anomaly detector 304. The process proceeds to block 1228.[0076] At block 1228, the statistics generator 720 generates statistics based on the output values and the inputs. The statistics generator 720 retrieves the current timestamp from the global timestamp counter 724. The statistics generator 720 utilizes the current timestamp to
define the statistics for the retrieved inputs. The generated statistics may include a minimum, a maximum, an average, and a histogram for selected events. The process proceeds to block 1232.[0077] At block 1232, the anomaly detector 304 determines whether an error has occurred based on the output event and the input. The anomaly detector 304 retrieves the inputs from the input counter 708 and the output events from the output event generator 728. The anomaly detector 304 analyzes the inputs and the output event to determine if an anomaly has occurred and, in response to determining an anomaly has occurred, to determine if the anomaly is an error. The process proceeds to block 1234.[0078] At block 1234, the anomaly detector 304 reports the anomaly. In some examples, the anomaly detector 304 creates a prompt for a user. In other examples, the anomaly detector 304 transmits a message to a user. The process proceeds to block 1236.[0079] At block 1236, in response to determining that the anomaly is an error, the anomaly detector 304 can alter the process taking place in the SoC 302 to improve performance of the process in the SoC 302. The anomaly detector 304 utilizes information concerning when a hardware accelerator (HWA) is active to determine how to optimize the process on the SoC 302. In these examples, the HWA is one of the various components on the SoC 302 (e.g., the camera interface 104, the imaging unit 108, the vision primitive unit 112, the deep learning unit 116, the computer vision unit 120, the display controller 124, etc.). The anomaly detector 304 determines when the HWA was active or inactive. The anomaly detector 304 determines whether the utilization of the HWA is greater than a high risk threshold. If the anomaly detector 304 determines that an HWA utilization is greater than the high risk threshold, the anomaly detector 304 reduces the usage of the specified HWA. The anomaly detector 304 also determines whether the utilization of an HWA is less than a low risk threshold. In response to determining that the utilization of an HWA is less than the low risk threshold, the anomaly detector 304 increases the usage of the HWA.[0080] In response to determining that the HWA usage is less than the high risk threshold and more than the low risk threshold, the anomaly detector 304 does not alter the utilization of the HWA. If the anomaly detector 304 determines that an alteration to the usage of the HWA is necessary, the anomaly detector 304 alters the usage of the HWA based on the threshold being satisfied and statistics generated from previous anomaly events. The process ends.
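Read end to end, blocks 1208 through 1236 amount to the loop sketched below. The helper callables stand in for the blocks of FIG. 7 discussed earlier; their names, and the way a detected error is reported and acted on, are assumptions of the sketch rather than the described implementation.

```python
# End-to-end sketch of the FIG. 12 flow. `threshold_and_ops` stands in for
# blocks 1212-1220 (input threshold, programmable operations, output threshold)
# and `detector` for the anomaly detector 304; both are hypothetical callables.
def event_processor_flow(input_stream, threshold_and_ops, detector, stats_log):
    for timestamp, signals in enumerate(input_stream):
        # Blocks 1208-1220: retrieve inputs and apply thresholds/operations.
        output_event = threshold_and_ops(signals)
        if output_event is None:
            continue                       # thresholds not satisfied; keep looping
        # Block 1224: generate the output event for the anomaly detector.
        output_event["timestamp"] = timestamp
        # Block 1228: record statistics with the current timestamp.
        stats_log.append(output_event)
        # Blocks 1232-1236: detect, report, and alter the process on an error.
        if detector(output_event):
            print(f"anomaly reported at cycle {timestamp}")   # block 1234
            return "alter_process"                            # block 1236
    return "no_error"
```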
[0081] FIG. 13 is a block diagram of an example processor platform 1300 structured to execute the instructions of FIG. 12 to implement the event processor 308 of FIG. 7. The processor platform 1300 can be, for example, a server, a personal computer, a workstation, a self-learning machine (e.g., a neural network), a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad™), a personal digital assistant (PDA), an Internet appliance, a DVD player, a CD player, a digital video recorder, a Blu-ray player, a gaming console, a personal video recorder, a set top box, a headset or other wearable device, or any other type of computing device.[0082] The processor platform 1300 of the illustrated example includes a processor 1312. The processor 1312 of the illustrated example is hardware. For example, the processor 1312 can be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, or controllers from any desired family or manufacturer. The hardware processor may be a semiconductor based (e.g., silicon based) device. In this example, the processor implements an example input counter 708, example programmable operators 712, an example output counter 716, an example statistics generator 720, an example global timestamp counter 724, an example output event generator 728, and an example anomaly detector 304.[0083] The processor 1312 of the illustrated example includes a local memory 1313 (e.g., a cache). The processor 1312 of the illustrated example is in communication with a main memory including a volatile memory 1314 and a non-volatile memory 1316 via a bus 1318. The volatile memory 1314 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®) and/or any other type of random access memory device. The non-volatile memory 1316 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 1314, 1316 is controlled by a memory controller.[0084] The processor platform 1300 of the illustrated example also includes an interface circuit 1320. The interface circuit 1320 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), a Bluetooth® interface, a near field communication (NFC) interface, and/or a PCI express interface.[0085] In the illustrated example, one or more input devices 1322 are connected to the interface circuit 1320. The input device(s) 1322 permit(s) a user to enter data and/or commands into the processor 1312. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint and/or a voice recognition system.[0086] One or more output devices 1324 are also connected to the interface circuit 1320 of the illustrated example. The output devices 1324 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid
crystal display (LCD), a cathode ray tube display (CRT), an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer and/or speaker. The interface circuit 1320 of the illustrated example, thus, may include a graphics driver card, a graphics driver chip and/or a graphics driver processor.[0087] The interface circuit 1320 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 1326. The communication can be via, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-sight wireless system, a cellular telephone system, etc.[0088] The processor platform 1300 of the illustrated example also includes one or more mass storage devices 1328 for storing software and/or data. Examples of such mass storage devices 1328 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, redundant array of independent disks (RAID) systems, and digital versatile disk (DVD) drives.[0089] The machine executable instructions 1332 of FIG. 12 may be stored in the mass storage device 1328, in the volatile memory 1314, in the non-volatile memory 1316, and/or on a removable non-transitory computer readable storage medium such as a CD or DVD.[0090] A block diagram illustrating an example software distribution platform 1405 to distribute software such as the example computer readable instructions 1332 of FIG. 13 to third parties is illustrated in FIG. 14. The example software distribution platform 1405 may be implemented by any computer server, data facility, cloud service, etc., capable of storing and transmitting software to other computing devices. The third parties may be customers of the entity owning and/or operating the software distribution platform. For example, the entity that owns and/or operates the software distribution platform may be a developer, a seller, and/or a licensor of software such as the example computer readable instructions 1332 of FIG. 13. The third parties may be consumers, users, retailers, OEMs, etc., who purchase and/or license the software for use and/or re-sale and/or sub-licensing. In the illustrated example, the software distribution platform 1405 includes one or more servers and one or more storage devices. The storage devices store the computer readable instructions 1332, which may correspond to the example computer readable instructions of FIG. 12, as described above. The one or more servers of the example software distribution platform 1405 are in communication with a network 1410, which may correspond to any one or more of the Internet and/or any of the
example networks 1326 described above. In some examples, the one or more servers are responsive to requests to transmit the software to a requesting party as part of a commercial transaction. Payment for the delivery, sale and/or license of the software may be handled by the one or more servers of the software distribution platform and/or via a third party payment entity. The servers enable purchasers and/or licensors to download the computer readable instructions 1332 from the software distribution platform 1405. For example, the software, which may correspond to the example computer readable instructions of FIG. 12, may be downloaded to the example processor platform 1300, which is to execute the computer readable instructions 1332 to implement the anomaly detector 304. In some examples, one or more servers of the software distribution platform 1405 periodically offer, transmit, and/or force updates to the software (e.g., the example computer readable instructions 1332 of FIG. 13) to ensure improvements, patches, updates, etc. are distributed and applied to the software at the end user devices.[0091] From the foregoing, it will be appreciated that example methods, apparatus and articles of manufacture have been described that provide low latency fault mitigation, quality of service management, and debugging of complex processing pipeline issues. The described methods, apparatus and articles of manufacture improve the efficiency of using a computing device by lowering the latency of error detection, lowering the CPU overhead committed to error detection, and mitigating conditions that could lead to a system-wide failure. The described methods, apparatus and articles of manufacture are accordingly directed to one or more improvement(s) in the functioning of a computer.[0092] Example methods, apparatus, systems, and articles of manufacture to provide low latency fault mitigation, quality of service management, and debug of a processing pipeline are described herein. Further examples and combinations thereof include the following:[0093] Example 1 includes an apparatus for a system on a chip (SoC) comprising a set of hardware accelerators (HWAs), a set of event processors coupled to the HWAs such that each of the event processors is coupled to a respective HWA of the set of HWAs, and an anomaly detector coupled to the set of event processors.[0094] Example 2 includes the apparatus of example 1, wherein an event processor in the set of event processors includes at least one input terminal, an input counter, programmable operators, an output counter, a statistics generator, and an output event generator.[0095] Example 3 includes the apparatus of example 2, wherein the programmable operators include at least one of bitwise AND, bitwise OR, bitwise XOR, bitwise NOT, bitwise NAND, or bitwise NOR.
[0096] Example 4 includes the apparatus of example 1, the SoC including a first bus to transmit data between the set of HWAs, and a second bus to transmit data between the set of event processors and the anomaly detector.[0097] Example 5 includes the apparatus of example 1, the SoC including a processor coupled to the anomaly detector.[0098] Example 6 includes the apparatus of example 1, the set of event processors to retrieve an input event, and generate an output event based on the input event, in response to a determination that an input event threshold is exceeded within a threshold of time.[0099] Example 7 includes the apparatus of example 1, wherein the set of event processors receive an input event, the input event is at least one of an external input to the SoC, an output from the SoC, an input to one of the HWAs in the set of HWAs, or an output from one of the HWAs in the set of HWAs.[0100] Example 8 includes the apparatus of example 1, the anomaly detector to retrieve an output event from an event processor in the set of event processors, and determine whether the output event indicates a threat to functional safety of the SoC.[0101] Example 9 includes the apparatus of example 8, the anomaly detector to, in response to determining the output event indicates a threat to functional safety of the SoC, adapt a process for the SoC to preserve functional safety.[0102] Example 10 includes the apparatus of example 9, the anomaly detector to determine whether utilization of one of the HWAs in the set of HWAs is greater than a high risk threshold, in response to the determination that the utilization of the HWA is greater than the high risk threshold, decrease the utilization of the HWA, determine whether the utilization of the HWA is less than a low risk threshold, in response to determining that the utilization of the HWA is less than the low risk threshold, increase the utilization of the HWA, and in response to determining that the utilization of the HWA is greater than the low risk threshold and less than the high risk threshold, perform no alteration to the utilization of the HWA.[0103] Example 11 includes a method for an event processor comprising receiving a set of signals associated with processing by a hardware accelerator (HWA) on a system on a chip (SoC), performing a set of operations on the set of signals to determine whether an event occurred in the processing by the HWA, and providing a result of the set of operations that indicates whether the event occurred to an anomaly detector.[0104] Example 12 includes the method of example 11, wherein the set of signals include at least one of an external input to the SoC, an output from the SoC, an input to the HWA, or an output from the HWA.
[0105] Example 13 includes the method of example 11 further comprising performing a plurality of instances of the set of operations on respective sets of signals in parallel using a plurality of event processors, each associated with a respective HWA.[0106] Example 14 includes the method of example 11, wherein the set of operations are performed using programmable operators that include at least one of bitwise AND, bitwise OR, bitwise XOR, bitwise NOT, bitwise NAND, or bitwise NOR.[0107] Example 15 includes the method of example 11, including generating statistics based on the signals and the results of the operations, the statistics including a maximum, a minimum, an average, and a relationship between the set of signals and the result of the operations.[0108] Example 16 includes the method of example 11, including a method for the anomaly detector including retrieving the result of the operations, and determining whether the result of the operations indicates a threat to the SoC.[0109] Example 17 includes the method of example 16, including, in response to determining the result of the operations indicates a threat to functional safety of the SoC, adapting a process for the SoC to preserve functional safety.[0110] Example 18 includes the method of example 16, including determining whether utilization of the HWA is greater than a high risk threshold, in response to the determination that the utilization of the HWA is greater than the high risk threshold, decreasing the utilization of the HWA, determining whether the utilization of the HWA is less than a low risk threshold, in response to determining that the utilization of the HWA is less than the low risk threshold, increasing the utilization of the HWA, and in response to determining that the utilization of the HWA is greater than the low risk threshold and less than the high risk threshold, performing no alteration to the utilization of the HWA.[0111] Example 19 includes the method of example 11, wherein the event processor can be programmably configured to detect events.[0112] Example 20 includes the method of example 11, further including transmitting the results of the operations to the anomaly detector via a data bus.[0113] Although certain example methods, apparatus and articles of manufacture have been described herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent.[0114] The following claims are hereby incorporated into this Detailed Description by this reference, with each claim standing on its own as a separate embodiment of this description. |
In some examples, an integrated circuit package (100) is an antenna-on-package (AOP) package that includes a plurality of dielectric layers (102-108), a plurality of conductor layers (110-116) interspersed with the plurality of dielectric layers (102-108), and an integrated circuit die (126) disposed on a first side of the plurality of dielectric layers (102-108). The plurality of conductor layers (110-116) includes a first layer (110) disposed on a second side of the plurality of dielectric layers (102-108) that includes a set of antennas (132). In some such examples, the integrated circuit die (126) includes radar processing circuitry, and the AOP integrated circuit package (100) is configured for radar applications. |
CLAIMS What is claimed is:1. An integrated circuit package comprising:a plurality of dielectric layers;an integrated circuit die disposed on a first side of the plurality of dielectric layers; and a plurality of conductor layers interspersed with the plurality of dielectric layers, wherein: the plurality of conductor layers includes a first layer disposed on a second side of the plurality of dielectric layers opposite the first side; andthe first layer includes antennas electrically coupled to the integrated circuit die.2. The integrated circuit package of claim 1, wherein the antennas include a first antenna having:a center portion;a first side portion partially separated from the center portion by a first recess; and a second side portion partially separated from the center portion by a second recess.3. The integrated circuit package of claim 1, wherein each of the antennas is cavity-backed.4. The integrated circuit package of claim 1, wherein the antennas include a transmitter antenna and a receiver antenna.5. The integrated circuit package of claim 4, further comprising an electromagnetic band gap structure disposed between the transmitter antenna and the receiver antenna.6. The integrated circuit package of claim 5, wherein the electromagnetic band gap structure includes a plurality of isolated features of the first layer.7. The integrated circuit package of claim 1, wherein the antennas include:a plurality of transmitter antennas aligned in a first direction; anda plurality of receiver antennas aligned in a second direction that is perpendicular to the first direction.8. The integrated circuit package of claim 7, wherein:the first direction is at an angle of about 45° relative to side surfaces of the plurality of transmitter antennas; andthe second direction is at an angle of about 45° relative to side surfaces of the plurality of receiver antennas.9. The integrated circuit package of claim 7, wherein:the antennas are configured to produce an electromagnetic wave at a center frequency; the plurality of transmitter antennas are arranged at a pitch that is about half a wavelength of the electromagnetic wave; andthe plurality of receiver antennas are arranged at the pitch that is about half the wavelength of the electromagnetic wave.10. The integrated circuit package of claim 1 further comprising a plurality of package connectors disposed on the first side of the plurality of dielectric layers and electrically coupled to the integrated circuit die.11. An integrated circuit package comprising:an integrated circuit die; anda first conductor layer disposed on the integrated circuit die that includes antennas, wherein each of the antennas includes:a center portion;a first side portion adjacent the center portion;a first recess extending partially through the respective antenna between the center portion and the first side portion;a second side portion adjacent the center portion; anda second recess extending partially through the respective antenna between the center portion and the second side portion.12. The integrated circuit package of claim 11, wherein the first conductor layer further includes a first ground plane disposed around the antennas.13. 
The integrated circuit package of claim 12 further comprising a second conductor layer disposed between the first conductor layer and the integrated circuit die, wherein:the second conductor layer includes a second ground plane directly underneath the first ground plane that is coupled to the first ground plane; anda region directly underneath each of the antennas is free of the second conductor layer.14. The integrated circuit package of claim 13, wherein the second conductor layer further includes a transmission line that electrically couples a first antenna of the antennas to the integrated circuit die.15. The integrated circuit package of claim 14, wherein the transmission line includes an impedance-matching stub.16. An apparatus comprising:a plurality of dielectric layers configured to couple to an integrated circuit die on a first side of the plurality of dielectric layers;a plurality of connectors disposed on the first side of the plurality of dielectric layers; a first set of conductive features disposed within the plurality of dielectric layers and configured to electrically couple the integrated circuit die to the plurality of connectors; anda second set of conductive features disposed within the plurality of dielectric layers that includes a plurality of radar antennas configured to electrically couple to the integrated circuit die.17. The apparatus of claim 16, wherein each antenna of the plurality of radar antennas is cavity-backed.18. The apparatus of claim 17, wherein the plurality of radar antennas include a plurality of transmitter antennas and a plurality of receiver antennas.19. The apparatus of claim 18, further comprising an electromagnetic band gap structure disposed between the plurality of transmitter antennas and the plurality of receiver antennas.20. The apparatus of claim 18, wherein:the plurality of transmitter antennas are aligned in a first direction that is about 45° relative to side surfaces of the plurality of transmitter antennas; andthe plurality of receiver antennas are aligned in a second direction that is perpendicular to the first direction and that is about 45° relative to side surfaces of the plurality of receiver antennas. |
ANTENNA-ON-PACKAGE INTEGRATED CIRCUIT DEVICEBACKGROUND[0001] Large-scale radar systems are used for tracking aircraft, forecasting weather, studying geological formations, observing planets, and other long-range applications. Such systems are often large and powerful. At the same time, rapid advances in signal processing and semiconductor fabrication have allowed radar systems to be miniaturized. These low-power, low-cost radar systems have opened the door to a wide variety of applications including self-driving cars, automated material-handling systems, collision avoidance, and other applications.[0002] A radar system senses distant objects by emitting electromagnetic waves using one or more transmitter antennas and receiving reflections of the electromagnetic waves using one or more receiver antennas. Control of the transmitted signals and processing of the received signals may be performed by a number of active and passive integrated circuit devices on one or more integrated circuit dies. In turn, the dies and devices may be incorporated into one or more semiconductor packages. A semiconductor package surrounds and protects the incorporated integrated circuit dies and/or devices. The package may include layers of rigid insulating material and layers of conductive material that extend through the insulating material to connect the dies and devices to each other and to the remainder of the system.SUMMARY[0003] In some examples, a Monolithic Microwave Integrated Circuit (MMIC) package is provided that includes an integrated circuit die, a set of transmitter antennas, and a set of receiver antennas. Accordingly, the MMIC package may be referred to as an Antenna-On-Package (AOP) radar device.[0004] In some examples, an integrated circuit package includes a plurality of dielectric layers and an integrated circuit die disposed on a first side of the plurality of dielectric layers. A plurality of conductor layers are interspersed with the plurality of dielectric layers, which include a first layer disposed on a second side of the plurality of dielectric layers opposite the first side. The first layer includes a set of antennas electrically coupled to the integrated circuit die. In some such examples, a first antenna of the set of antennas has a center portion, a first side portion partially separated from the center portion by a first recess, and a second side portion partially separated from the center portion by a second recess. In some such examples, the antennas are cavity-backed antennas. In some such examples, the set of antennas include at least one transmitter antenna and at least one receiver antenna. In some such examples, the integrated circuit package includes an electromagnetic band gap structure disposed between the at least one transmitter antenna and the at least one receiver antenna. In some such examples, the electromagnetic band gap structure includes a plurality of electrically isolated features of the first layer. In some such examples, the set of antennas include a plurality of transmitter antennas aligned in a first direction, and a plurality of receiver antennas aligned in a second direction that is perpendicular to the first direction. In some such examples, the first direction is at an angle of about 45° relative to side surfaces of the plurality of transmitter antennas; and the second direction is at an angle of about 45° relative to side surfaces of the plurality of receiver antennas. 
In some such examples, the set of antennas are configured to produce an electromagnetic wave at a center frequency. The plurality of transmitter antennas are arranged at a pitch that is about half a wavelength of the electromagnetic wave, and the plurality of receiver antennas are arranged at the same pitch. In some such examples, a plurality of package connectors are disposed on the first side of the plurality of dielectric layers and electrically coupled to the integrated circuit die.[0005] In further examples, an integrated circuit package includes an integrated circuit die and a first conductor layer disposed on the integrated circuit die that includes a set of antennas. Each antenna of the set of antennas includes a center portion, a first side portion adjacent the center portion, a first recess extending partially through the respective antenna between the center portion and the first side portion, a second side portion adjacent the center portion, and a second recess extending partially through the respective antenna between the center portion and the second side portion.[0006] In yet further examples, an apparatus includes a plurality of dielectric layers configured to couple to an integrated circuit die on a first side of the plurality of dielectric layers, a plurality of connectors disposed on the first side of the plurality of dielectric layers, a first set of conductive features disposed within the plurality of dielectric layers and configured to electrically couple the integrated circuit die to the plurality of connectors, and a second set of conductive features disposed within the plurality of dielectric layers that includes a plurality of radar antennas configured to electrically couple to the integrated circuit die. BRIEF DESCRIPTION OF THE DRAWINGS[0007] Features of the present invention are described in the following detailed description and the accompanying drawings. In that regard:[0008] FIG. 1 is a cross sectional view of a portion of an antenna-on-package integrated circuit package according to some examples.[0009] FIG. 2 is a top view of an antenna-on-package integrated circuit package according to some examples.[0010] FIG. 3 is a perspective view of a package that includes a package-integrated antenna according to some examples.[0011] FIGS. 4A-4E are perspective views of layers of a package that includes a package-integrated antenna according to some examples.[0012] FIGS. 5A-5E are perspective views of layers of a package that includes a package-integrated antenna according to some examples.[0013] FIG. 6 is a perspective view of a package that includes an electromagnetic band gap cell according to some examples.[0014] FIG. 7 is a block diagram of a vehicle radar system according to some examples.DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS[0015] Specific examples are described below in detail with reference to the accompanying figures. These examples are not intended to be limiting, and unless otherwise noted, no feature is required for any particular example. Moreover, the formation of a first feature over or on a second feature in the description that follows may include examples in which the first and second features are formed in direct contact and examples in which additional features are formed between the first and second features, such that the first and second features are not in direct contact.[0016] Relative terms that describe orientation, such as "above," "below," "over," "under," "on," etc., are provided for clarity and are not absolute relationships. 
For example, a first element that is "above" a second element may be just as accurately described as "below" the second element if the orientation of the device is flipped.[0017] This description provides a semiconductor package, such as a Monolithic Microwave Integrated Circuit (MMIC) package. The package includes transmitter and receiver radar antennas and an integrated circuit die with radar-processing circuitry. The circuitry may perform various functions such as driving signals that control the transmitter antennas and processing signals received by the receiver antennas. As the name suggests, this type of package that includes antennas along with the control circuitry may be referred to as an Antenna-On-Package (AOP) device. Incorporating the radar antennas into the same package as the control circuitry may avoid many of the challenges of coupling the antennas to an integrated circuit die through a printed circuit board. It may also reduce power loss by improved coupling of signal paths to transmit and receive antennas, resulting in better radar performance (e.g., improved maximum range). It may also greatly reduce the size of the overall radar system and simplify integration of the radar system into a vehicle, factory, facility, or other environment. It may also reduce system power and cost.[0018] In some examples, the dies are attached to an underside of the package near the package connectors in an undermount configuration, while the antennas are formed on a top side of the package. Because the antennas radiate energy through the thinner package top rather than through the bulk of the package, antenna efficiency may be improved and spurious radiation may be reduced. Furthermore, the configuration may reduce the overall package size by allowing antennas to be formed directly on top of the die. As yet a further advantage, this configuration allows the upper conductor levels to be reserved for routing transmission lines to and from the antennas with good isolation and minimal routing loss. It may also leave the lower routing levels for optimal routing of non-radio frequency I/O signals between the package connectors and the die(s). In sum, this configuration may provide high antenna efficiency, small size, and efficient routing.[0019] In some examples, the antennas are cavity-backed antennas shaped and configured to provide good antenna-to-antenna isolation. In some examples, the antennas may have a slotted E-shaped configuration to improve antenna bandwidth, and the slot position and depth may be tuned based on the desired frequency response. The antennas may be arranged in arrays, and in some examples, antennas are rotated 45° within an array to reduce antenna-to-antenna coupling caused by close spacing.[0020] To isolate the receiver antennas from the transmitter antennas, the package may include Electromagnetic Band Gap (EBG) structures between the receiver antennas and the transmitter antennas and along edges of the package that dampen surface waves and spurious radiation.[0021] These advantages are merely provided as examples, and unless otherwise noted, no particular advantage is required for any particular embodiment.[0022] Examples of an AOP integrated circuit package 100 are described with reference to FIGS. 1 and 2. In that regard, FIG. 1 is a cross sectional view of a portion of the AOP integrated circuit package 100 according to some examples. FIG. 
2 is a top view of the AOP integrated circuit package 100 according to some examples.[0023] The package 100 includes one or more dielectric layers that provide physical support for and isolate a network of interconnecting conductors. Examples of dielectric layers include back-side solder resist layers 102, intermediate dielectric layers 104, a core dielectric layer 106, and a front-side solder resist layer 108 disposed opposite the back-side solder resist layers 102.[0024] As they may form the exterior of the package, the front-side solder resist layer 108 and back-side solder resist layers 102 may include dielectric materials selected to be impervious to air and moisture, to provide good crack resistance, and to control solder flow, in addition to providing electrical isolation. The front-side solder resist layer 108 and back-side solder resist layers 102 may also be referred to as solder mask layers. The front-side solder resist layer 108 and the back-side solder resist layers 102 may be formed to any suitable thickness, and in various examples, the front-side solder resist layer 108 and the back-side solder resist layers 102 are between about 5 µm and about 30 µm thick.[0025] The intermediate dielectric layers 104 may include any suitable dielectric materials, and examples include resin laminates. The intermediate dielectric layers 104 may be formed to any suitable thickness and, in various examples, are between about 10 µm and about 50 µm thick.[0026] The core dielectric layer 106 may provide the bulk of the rigidity and may be configured accordingly. In that regard, the core dielectric layer 106 may be thicker than the back-side solder resist layers 102, the intermediate dielectric layers 104, and the front-side solder resist layer 108. In some examples, the core dielectric layer 106 is between about 150 µm and about 250 µm thick. The core dielectric layer 106 may include any suitable dielectric materials, which may be selected, in part, based on resistance to deformation. In various examples, the core dielectric layer 106 includes resin laminates and ceramics.[0027] Conductive traces extend throughout the dielectric layers 102-108 to carry signals and power between the devices of the integrated circuit package 100. The traces may be divided among conductor layers 110-116 that extend primarily horizontally and conductive vias 118-122 that extend primarily vertically. For ease of reference, horizontal conductor layers 110-116 are referred to as M1 layer 110, M2 layer 112, M3 layer 114, and M4 layer 116; and via layers 118-122 are referred to as V1 layer 118, V2 layer 120, and V3 layer 122. The conductive traces within the layers 110-122 may include any suitable conductive material, such as copper, aluminum, gold, silver, nickel, tungsten, and/or alloys thereof. The integrated circuit package 100 may also include package interconnect connectors 124, such as ball grid array connectors, land grid array connectors, pin grid array connectors, and/or surface-mount leads, to carry signals and power between the devices of the integrated circuit package 100 and the remainder of a radar system.[0028] The integrated circuit package 100 may also include a number of integrated circuit dies 126 coupled to the dielectric layers. In turn, each integrated circuit die 126 may include a number of active circuit elements (e.g., bipolar junction transistors, field effect transistors, etc.) and/or passive circuit elements (e.g., resistors, capacitors, inductors, diodes, transformers, etc.) 
formed on a semiconductor substrate. The circuit elements of the integrated circuit dies 126 may perform operations related to radar sensing such as driving radar transmitter antennas to produce electromagnetic waves and processing signals produced when reflected electromagnetic waves are received by radar receiver antennas.[0029] Within an integrated circuit die 126, the circuit elements are electrically coupled by an electrical interconnect, which may include a number of bond pads 128 for sending and receiving signals off the die 126. To carry these signals beyond the die 126, the bond pads 128 are electrically coupled to the rest of the package 100 during a die attach process by a suitable technique, such as soldering, thermosonic bonding, ultrasonic bonding, epoxy die attach, and/or other suitable techniques.[0030] Many of these techniques also provide a degree of physical coupling as the material (e.g., solder, underfill material) that electrically couples the bond pads 128 also physically couples the top or face of the die 126 to the package. To further secure the die 126 and to prevent intrusion by air and/or moisture, a mold compound 130 may also be applied to the top, sides, and/or bottom of the integrated circuit die 126. A mold compound 130 may include an epoxy resin with one or more fillers, catalysts, flame retardants, adhesion promoters, and/or other additives and may be configured to create a hermetic seal around the die 126. Suitable mold compounds 130 include epoxy cresol novolac (ECN) resins and other types of resins.[0031] The integrated circuit dies 126 may be physically coupled to the remainder of the package 100 in any suitable configuration. For example, the integrated circuit dies 126 may be coupled in an undermount arrangement where the integrated circuit dies 126 are on the same side of the package as the package interconnect connectors 124.[0032] The AOP package 100 may include a number of antennas coupled to the circuitry of the die 126. Examples of transmitter antennas 132 and receiver antennas 134 are shown in the top view of FIG. 2. In this particular view, the front-side solder resist layer 108 is omitted to better illustrate the underlying layers including the M1 layer 110, which is used to form the set of radar transmitter antennas 132 and the set of radar receiver antennas 134. Example structures of the antennas 132 and 134 are described in more detail in subsequent figures.[0033] The integrated circuit package 100 may include any number of radar transmitter antennas 132 and receiver antennas 134 depending on the application. The antennas may be grouped into arrays, and in some examples, the transmitter antennas 132 and receiver antennas 134 are arranged to produce a Multi-Input Multi-Output (MIMO) array. In some such examples, the array of transmitter antennas 132 is aligned in a first direction 136 perpendicular to the set of receiver antennas 134, which is aligned in a second direction 138. This allows beamforming in both the azimuth and elevation planes. Within the array, the transmitter antennas 132 may be spaced apart in the first direction 136 by any suitable amount, and the receiver antennas 134 may be spaced apart in the second direction 138 by any suitable amount. In some examples, the antennas 132 and 134 are configured to emit and receive electromagnetic waves at a set of frequencies and are arranged at a center-to-center pitch 140 that is less than or equal to about half of the wavelength of the electromagnetic waves at the center frequency (e.g., 1.9 mm spacing corresponding to about 79 GHz). This spacing may avoid grating lobes that may otherwise create ambiguity in object detection.
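As a quick arithmetic check of the half-wavelength pitch quoted above (simple free-space math, not part of the original disclosure):

p ≈ λ/2 = c/(2·fc) = (3 × 10^8 m/s) / (2 × 79 × 10^9 Hz) ≈ 1.9 mm

which matches the 1.9 mm example spacing at the roughly 79 GHz center frequency.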
[0034] As can be seen, the transmitter antennas 132 and the receiver antennas 134 may be oriented at a 45° angle so that the first direction 136 and the second direction 138 are at about a 45° angle relative to the side surfaces of the antennas 132 and 134. It has been determined that closely arranged antennas in an array may experience antenna-to-antenna mutual coupling that reduces the accuracy of angle-of-arrival (AoA) estimation for a given MIMO array. However, rotating the antennas 132 and 134 so that the arrays of antennas 132 and 134 extend diagonally (e.g., at about 45°) to the side surfaces of the antennas 132 and 134 has been determined to reduce this coupling and thereby provide greater accuracy. [0035] To better isolate the receiver antennas 134 from direct interference by the transmitter antennas 132, the integrated circuit package 100 may also include an Electromagnetic Band Gap (EBG) structure 142. In some examples, the EBG structure 142 is configured to dampen surface waves along the integrated circuit package 100 and other sources of interference. In various examples, the EBG structure 142 has been demonstrated to improve isolation by 6 dB or more. The EBG structure 142 may also be configured to improve the radiation patterns of the antennas 132 and 134. The EBG structure 142 may include a number of conductive features in the conductor layers 110-122, and example configurations are described in more detail below.[0036] Examples of a package-integrated antenna 302 suitable for use as one of antennas 132 and/or 134 are described with reference to FIGS. 3 and 4A-4E. FIG. 3 is a perspective view of a package 300 that includes the package-integrated antenna 302 according to some examples. FIGS. 4A-4E are perspective views of specific conductor layers of the package 300. FIG. 4A is a perspective view of an M3 layer 114 of the package 300 that includes the package-integrated antenna 302 according to some examples. FIG. 4B is a perspective view of a V2 layer 120 of the package 300 that includes the package-integrated antenna 302 according to some examples. FIG. 4C is a perspective view of an M2 layer 112 of the package 300 that includes the package-integrated antenna 302 according to some examples. FIG. 4D is a perspective view of a V1 layer 118 of the package 300 that includes the package-integrated antenna 302 according to some examples. FIG. 4E is a perspective view of an M1 layer 110 of the package 300 that includes the package-integrated antenna 302 according to some examples.[0037] The package-integrated antenna 302 may be formed by one or more layers of the package in which it is incorporated. In some examples, the package-integrated antenna 302 includes an M1 layer 110, an M2 layer 112, an M3 layer 114, a V1 layer 118, a V2 layer 120, a front-side solder resist layer 108, an intermediate dielectric layer 104, and a core dielectric layer 106, each substantially as described above. In the perspective view of FIG. 3, the dielectric layers are translucent to avoid obscuring the conductive features.
[0038] Referring first to FIGS. 3 and 4A, in the M3 layer 114, the package 300 may include a first ground plane 304 of conductive material that extends underneath and beyond the antenna 302. Referring to FIGS. 3 and 4B, in the V2 layer 120, the package 300 may include one or more vias 306 that couple the first ground plane 304 to a second ground plane 308 in the M2 layer 112. The vias 306 may define sides of a cavity that lies directly underneath a patch 316 of the antenna 302. The cavity may contain dielectric material of the intervening dielectric layers (e.g., core dielectric layer 106, intermediate dielectric layer 104, etc.) while being free of any conductive features (other than possibly a transmission line 310 and via 312A coupled to the antenna 302) between the antenna patch 316 in the M1 layer 110 and the first ground plane 304 in the M3 layer 114. In this way, the resulting antenna 302 may be considered a cavity-backed antenna 302. This configuration may improve isolation of the antenna 302 and/or improve radiation efficiency.[0039] Additionally, the vias 306 may define and surround a cut out for a conductive transmission line 310 in the M2 layer 112. The V2 layer 120 may also include one or more vias 306A that couple the transmission line 310 to lower layers and to a bond pad 128 of the die 126.[0040] Referring to FIGS. 3 and 4C, in the M2 layer 112, the package 300 may include the second ground plane 308 that surrounds the antenna 302 but does not extend directly underneath it. In this way, the M2 layer 112 further defines the cavity underneath the antenna 302.[0041] The M2 layer 112 may also include the conductive transmission line 310 (e.g., a microstrip or stripline) that couples to the antenna 302. In the case of a transmitter antenna 132, the transmission line 310 carries a driving signal from a die 126 to the antenna 302 that causes the antenna 302 to produce an electromagnetic wave. In the case of a receiver antenna 134, the transmission line 310 carries a signal produced by the antenna 302 in response to a reflected electromagnetic wave to a die 126 that processes the signal. The second ground plane 308 in the M2 layer 112 may be cut out so as not to couple to the transmission line 310.[0042] To minimize losses and/or to reduce signal reflection, the antenna may be impedance matched to the circuitry of the die 126. Accordingly, in some of the examples of FIG. 4C, the transmission line 310 includes one or more portions 311 with varying trace width to tune the impedance. In some such examples, the transmission line 310 includes a quarter-wave stub in series, such as a quarter-wave transformer stub, configured to adjust the impedance of the antenna to match the impedance of the circuitry on the die 126.
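For context on the quarter-wave transformer mentioned above (standard transmission-line theory, not taken from this disclosure): a quarter-wave section of characteristic impedance Z_T inserted between a feed of impedance Z_0 and an antenna of impedance Z_A presents an input impedance of Z_T²/Z_A, so the match condition is

Z_T = √(Z_0 · Z_A)

with the section length equal to one quarter of the guided wavelength in the package dielectric at the design frequency. For instance, matching a hypothetical 50 Ω feed to a 100 Ω antenna would call for Z_T ≈ 70.7 Ω.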
[0043] Referring to FIGS. 3 and 4D, in the V1 layer 118, the package 300 may include one or more vias 312 that couple the second ground plane 308 to a third ground plane 314 in the M1 layer 110. The vias 312 may further define the cavity underneath the antenna 302 and further define the cut out for the transmission line 310. The V1 layer 118 may also include one or more vias 312A that couple the transmission line 310 to an antenna patch in the M1 layer 110. [0044] Referring to FIGS. 3 and 4E, in the M1 layer 110, the package 300 may include the third ground plane 314 and the patch 316 of the antenna 302. As noted above, the antenna 302 may be configured to emit and/or receive electromagnetic waves at a set of frequencies. In some examples, the antenna 302 is tuned for 78.5 GHz radar and provides at least 5 GHz of bandwidth from 76 GHz to 81 GHz. The set of frequencies may govern the shape and structure of the antennas, including the patch 316. In some examples, the patch 316 has a continuous geometric shape (e.g., a simple rectangle that extends uninterrupted from end to end), although it may also have any other suitable antenna shape. Accordingly, in some examples, the patch 316 includes a center portion 318 and side portions 320 disposed on opposite sides that are partially separated from the center portion 318 by recesses that extend partially through the patch 316 in a direction parallel to an edge of the patch 316. In some examples where the patch 316 has a width 322 in a first direction of between about 1000 µm and about 2000 µm and a length 324 in a second direction of between about 500 µm and about 1000 µm, the recesses may extend between about 400 µm and about 900 µm into the patch 316 in the second direction (e.g., about 50 µm less than the patch length 324). In such examples, the recesses may have a width in the first direction between about 50 µm and about 100 µm and may form the center portion 318 and the side portions 320 to have widths in the first direction of between about 200 µm and about 1500 µm. The widths of the center portion 318 and side portions 320 may be the same or different from one another. Of course, other configurations of the patch 316 are both contemplated and provided for.[0045] Further examples of a package-integrated antenna 502 suitable for use as one of antennas 132 and/or 134 are described with reference to FIGS. 5A-5E. FIG. 5A is a perspective view of an M3 layer 114 of a package 500 that includes the package-integrated antenna 502 according to some examples. FIG. 5B is a perspective view of a V2 layer 120 of the package 500 that includes the package-integrated antenna 502 according to some examples. FIG. 5C is a perspective view of an M2 layer 112 of the package 500 that includes the package-integrated antenna 502 according to some examples. FIG. 5D is a perspective view of a V1 layer 118 of the package 500 that includes the package-integrated antenna 502 according to some examples. FIG. 5E is a perspective view of an M1 layer 110 of the package 500 that includes the package-integrated antenna 502 according to some examples.[0046] In many aspects, the package 500 is substantially similar to the package 300 of FIGS. 3-4E. For example, referring to FIG. 5A, the package 500 may include a first ground plane 304 of conductive material in the M3 layer 114 that extends directly underneath and beyond the antenna 502. Referring to FIG. 5B, in the V2 layer 120, the package 500 may include one or more vias 306 that couple the first ground plane 304 to a second ground plane 308 in the M2 layer 112. The vias 306 may define a cavity directly underneath the antenna 502. The V2 layer 120 may also include one or more vias 306A that couple the transmission line 310 to lower layers and to a bond pad 128 of the die 126.[0047] Referring to FIG. 5C, in the M2 layer 112, the package 500 may include the second ground plane 308 that surrounds the antenna 502 but does not extend directly underneath it. In this way, the M2 layer 112 further defines the cavity underneath the antenna 502.[0048] The M2 layer 112 may also include the conductive transmission line 310 that couples to the antenna 502.
To minimize losses and/or signal reflection, the antenna 502 may be impedance matched to the circuitry of the die 126. Accordingly, in some of the examples of FIG. 5C, the transmission line 310 includes one or more short-circuited or open-circuited stubs 504 to adjust the impedance of the antenna to match the impedance of circuitry on the die 126.[0049] Referring to FIG. 5D, in the V1 layer 118, the package 500 may include one or more vias 312 that couple the second ground plane 308 to a third ground plane 314 in the M1 layer 110. The V1 layer 118 may also include one or more vias 312A that couple the transmission line 310 to an antenna patch in the M1 layer 110.[0050] Referring to FIG. 5E, in the M1 layer 110, the package 500 may include the third ground plane 314 and the patch 316 of the antenna 502. The patch 316 may be substantially similar to that described above and may be configured to emit and/or receive electromagnetic waves at a set of frequencies. In some examples, the patch 316 includes a center portion 318 and side portions 320 disposed on opposite sides that are partially separated from the center portion 318 by recesses that extend partially through the patch 316 in a direction parallel to an edge of the patch 316.[0051] The antennas of FIGS. 3-5E are merely some examples of suitable antennas, and the package may incorporate other suitable antenna structures both additionally and in the alternative.[0052] As shown in FIG. 2, the AOP package 100 may include an EBG structure 142 disposed between the transmitter antennas 132 and the receiver antennas 134. The EBG structure 142 may include any number of repeating EBG cells disposed in direct contact with one another. Examples of an EBG cell 602 suitable for use in the EBG structure 142 are described with reference to FIG. 6. FIG. 6 is a perspective view of a package 600 that includes the EBG cell 602 according to some examples. Adjacent EBG cells 602 align along the dashed boundary.[0053] The EBG cell size depends on the frequency or frequency range that the EBG cell 602 is intended to dampen. In some examples, it is a square cell with a length and a width between about 200 µm and about 300 µm to dampen 76 GHz to 81 GHz waves.[0054] Similar to a package-integrated antenna, the EBG cell 602 may be formed from one or more layers of the package in which it is incorporated. In some examples, the EBG cell 602 includes features of an M1 layer 110, an M2 layer 112, an M3 layer 114, a V2 layer 120, a front-side solder resist layer 108, an intermediate dielectric layer 104, and a core dielectric layer 106, each substantially as described above. For clarity, the dielectric layers are transparent to show the underlying conductor layers.[0055] In the M3 layer 114, the EBG cell 602 may include a bottom conductive feature 604 that extends along an entirety of the EBG cell 602. When the EBG cell 602 is disposed next to another EBG cell 602 in an EBG structure 142, the bottom conductive feature 604 may couple across EBG cells 602 so that the combined bottom conductive feature 604 extends along the entirety of the EBG structure 142. In particular, the combined bottom conductive features 604 may extend past the EBG structure 142 to couple to the first ground plane 304 underneath the antenna(s) 302.[0056] In the M2 layer 112, the EBG cell 602 may include an intermediate conductive feature 606. The intermediate conductive feature 606 may have any suitable shape depending on the frequency or frequency range that the EBG cell 602 is intended to dampen. 
In some examples, the intermediate conductive feature 606 is a rectangular prism with a length that is between about 150 µm and about 250 µm and a width that is between about 150 µm and about 250 µm. The intermediate conductive feature 606 may be sized such that an intermediate conductive feature 606 of an EBG cell 602 and an intermediate conductive feature 606 of an adjacent EBG cell 602 are separated by a gap that is between about 40 µm and about 80 µm.[0057] In the V2 layer 120, the EBG cell 602 may include a via 608 that couples the bottom conductive feature 604 to the intermediate conductive feature 606.[0058] In the M1 layer 110, the EBG cell 602 may include a set of top conductive features 610. As with the intermediate conductive feature 606, the top conductive features 610 may have any suitable shape depending on the frequency or frequency range that the EBG cell 602 is intended to dampen. In some examples, the top conductive features 610 are rectangular prisms each with a length that is between about 150 µm and about 250 µm and a width that is between about 150 µm and about 250 µm. The top conductive features 610 may be arranged at the periphery of the EBG cell 602 such that when the EBG cell 602 is disposed next to another EBG cell 602 in an EBG structure 142, the top conductive features 610 may couple across EBG cells 602. In this regard, a combined top conductive feature 610 may be up to 4x the size (twice the length and twice the width) of a top conductive feature 610 of any one EBG cell 602.[0059] In some examples, no vias extend between the intermediate conductive feature 606 and the top conductive features 610, and thus, the top conductive features 610 are capacitively coupled to the intermediate conductive feature 606. In that regard, the top conductive features 610 may be conductively isolated from a remainder of the package 600.[0060] Of course, these are merely some examples of an EBG cell 602, and other suitable EBG cells 602 are both contemplated and provided for.[0061] An example of a system 700 in which the AOP integrated circuit packages 100, 300, 500, and/or 600 may be used is described with reference to FIG. 7. In that regard, FIG. 7 is a block diagram of a vehicle radar system 700 according to some examples.[0062] The system 700 includes a set of transmitter antennas 702, a set of receiver antennas 704, and a radar controller 706. The transmitter antennas 702 may be substantially similar to transmitter antennas 132 above, and the receiver antennas 704 may be substantially similar to the receiver antennas 134 above. Accordingly, the set of transmitter antennas 702 and the set of receiver antennas 704 may be physically incorporated into an AOP integrated circuit package 708 substantially similar to circuit packages 100, 300, 500, and/or 600, above. In turn, the radar controller 706 may be housed in one or more dies, such as the die 126 above, and physically incorporated into the AOP integrated circuit package 708.[0063] The system 700 may be incorporated into an automobile or other vehicle by deploying any number of instances of the integrated circuit package 708 around the perimeter to detect other vehicles, within the interior to detect passengers, and/or in any other suitable location throughout the vehicle. In some examples, the system 700 includes as many as 30 or more integrated circuit packages 708 deployed throughout the vehicle for collision avoidance.
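To visualize the system topology of the preceding two paragraphs, the following is a minimal structural sketch in Python; every name in it (AopPackage, VehicleRadarSystem, the antenna counts, the mounting locations) is an illustrative invention and not an identifier from this disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class AopPackage:
    """One antenna-on-package radar unit (cf. package 708): a radar
    controller die plus its transmitter and receiver antenna sets."""
    location: str             # mounting point on the vehicle (illustrative)
    num_tx_antennas: int = 3  # illustrative counts; the source leaves these open
    num_rx_antennas: int = 4

@dataclass
class VehicleRadarSystem:
    """The vehicle system (cf. system 700): many AOP packages that all
    report over a shared vehicle network, described next."""
    packages: list = field(default_factory=list)

    def deploy(self, location: str) -> None:
        self.packages.append(AopPackage(location))

system = VehicleRadarSystem()
for spot in ["front bumper", "rear bumper", "left side", "right side", "cabin"]:
    system.deploy(spot)
print(f"{len(system.packages)} AOP packages sharing one vehicle bus")
```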
[0064] The system 700 further includes a Controller Area Network (CAN) bus 710 that communicatively couples the integrated circuit packages 708 to one or more of a system-level controller 712, a display 714, an audible alert device 716, and/or an automatic vehicle steering controller 718.[0065] In operation, the radar controller 706 generates a radar signal and one or more of the transmitter antennas 702 radiate a corresponding electromagnetic wave. Objects within the surrounding environment may reflect the electromagnetic wave, causing a reflected echo to be received by one or more of the receiver antennas 704. The radar controller 706 may receive the corresponding radar signal from the receiver antennas 704 and may process the signal. The radar controller 706 may then transmit digital information regarding the radar signal or the radar return over the CAN bus 710.[0066] The system-level controller 712 receives the information from the CAN bus 710 and processes the information. In some examples, the system-level controller 712 processes the information to determine whether a collision is impending. If so, the system-level controller 712 may send a warning or notification that causes the display 714 and/or the audible alert device 716 to alert the driver. Additionally or in the alternative, the system-level controller 712 may send a command to the automatic vehicle steering controller 718 to take action to avoid the collision, such as steering or braking. Such collision avoidance steering commands may be conditioned on the system-level controller 712 determining, based on inputs from other AOP integrated circuit packages 708, that steering away from the impending collision would not steer into a different collision situation.[0067] The integrated circuit packages described herein may advantageously be used in other systems and designs, unrelated to automobile radars. In that regard, while an automobile radar MMIC is one example, application of these teachings to other non-automotive and non-radar applications is consistent with and contemplated by this description. |
An input receiver for stepping down a high-voltage-domain input signal into a low-voltage-domain stepped-down signal includes a waveform chopper. The waveform chopper chops the high-voltage-domain input signal into a first chopped signal and a second chopped signal. A high-voltage-domain receiver combines the first chopped signal and the second chopped signal into a high-voltage-domain combined signal. A step-down device converts the high-voltage-domain combined signal into a stepped-down low-voltage-domain signal. |
ClaimsWe claim:1. An input receiver, comprising:a waveform chopper configured to substantially pass an input signal as a first chopped signal when the input signal is greater than a threshold voltage and to clamp the first chopped signal at the threshold voltage when the input signal is less than the threshold voltage, and wherein the waveform chopper is further configured to substantially pass the input signal as a second chopped signal when the input signal is less than the threshold voltage and to clamp the second chopped signal at the threshold voltage when the input signal is greater than the threshold voltage; anda chopped waveform receiver including a first switch configured to switch on to charge a combined signal to a first power supply voltage when the first chopped signal is clamped at the threshold voltage and a second switch configured to switch on to ground the combined signal when the second chopped signal is clamped at the threshold voltage.2. The input receiver of claim 1, wherein the threshold voltage is an internal power supply voltage that is approximately one-half of the first power supply voltage.3. The input receiver of claim 2, wherein the waveform chopper further comprises: a first pass transistor;a second pass transistor; anda capacitor having a first terminal and a second terminal, the capacitor configured to receive the input signal at the first terminal and to produce a bias voltage at the second terminal, wherein a gate for the first pass transistor and a gate for the second pass transistor are both coupled to the second terminal of the capacitor.4. The input receiver of claim 3, wherein the first switch comprises a PMOS transistor having its gate driven by the first chopped signal and a source coupled to a power supply configured to supply the first power supply voltage, and wherein the second switch comprises an NMOS transistor having its gate driven by the second chopped signal and its source coupled to ground.5. The input receiver of claim 2, wherein the chopped waveform receiver includes a first transistor configured to charge a terminal of the first switch to the internal power supply voltage in response to the combined signal being grounded.6. The input receiver of claim 5, wherein the first transistor comprises a PMOS transistor having a source coupled to a power supply node for supplying the internal power supply voltage, a drain coupled to the terminal of the first switch, and a gate coupled to an output node carrying the combined signal.7. The input receiver of claim 2, wherein the chopped waveform receiver includes a second transistor configured to charge a terminal of the second switch to the internal power supply voltage in response to the combined signal being charged to the first power supply voltage.8. The input receiver of claim 7, wherein the second transistor comprises an NMOS transistor having a drain coupled to the terminal of the second switch, a source coupled to a power supply node for supplying the internal power supply voltage, and a gate coupled to an output node for carrying the combined signal.9. The input receiver of claim 2, further comprising a step-down device configured to step down the combined signal to a stepped-down signal that cycles between ground and the internal power supply voltage.10. The input receiver of claim 9, wherein the step-down device comprises an NMOS native pass transistor.11. 
The input receiver of claim 10, further comprising a Schmitt trigger configured to receive the stepped-down signal and produce an output signal.12. A method, comprising:receiving an input signal that cycles between approximately 0 V and a first power supply voltage VDDX, VDDX being approximately twice an internal voltage supply level VDD;chopping the received input signal into a first chopped signal that substantially equals the input signal when the input signal is greater than VDD and equals VDD when the input signal is less than VDD;chopping the input signal into a second chopped signal that substantially equals the input signal when the input signal is less than VDD and equals VDD when the input signal is greater than VDD; andcombining the first chopped signal and the second chopped signal into a combined signal by charging the combined signal to VDDX when the first chopped signal equals VDD and by grounding the combined signal when the second chopped signal equals VDD.13. The method of claim 12, further comprising: stepping down the combined signal into a stepped-down signal that cycles between ground and VDD.14. The method of claim 13, further comprising:applying hysteresis to the stepped-down signal to provide a final output signal that also cycles between ground and VDD.15. The method of claim 12, wherein combining the first chopped signal and the second chopped signal comprises controlling a first switch responsive to the first chopped signal and controlling a second switch responsive to the second chopped signal.16. An input receiver, comprising:a waveform chopper configured to substantially pass an input signal as a first chopped signal when the input signal is greater than an internal voltage supply VDD and to clamp the first chopped signal at VDD when the input signal is less than VDD, and wherein the waveform chopper is further configured to substantially pass the input signal as a second chopped signal when the input signal is less than VDD and to clamp the second chopped signal at VDD when the input signal is greater than VDD;means for combining the first chopped signal and the second chopped signal into a combined signal that is charged to a power supply voltage VDDX when the first chopped signal equals VDD and that is grounded when the second chopped signal equals VDD, wherein VDDX is approximately twice VDD; anda native pass transistor configured to step down the combined signal into a VDD-domain stepped-down signal.17. The input receiver of claim 16, wherein VDDX is approximately 3.3 V and VDD is approximately 1.8 V.18. The input receiver of claim 16, wherein the waveform chopper comprises:a voltage divider including a capacitor, the voltage divider being configured to receive the input signal such that a bias voltage develops at a terminal of the capacitor;a first pass transistor configured to pass the input signal as the first chopped signal, the first pass transistor having a gate controlled by the bias voltage; anda second pass transistor configured to pass the input signal as the second chopped signal, the second pass transistor having a gate controlled by the bias voltage.19. The input receiver of claim 18, wherein the waveform chopper further comprises:a first clamping transistor configured to clamp the first chopped signal at VDD when the input signal is less than VDD; anda second clamping transistor configured to clamp the second chopped signal at VDD when the input signal is greater than VDD.20. 
The input receiver of claim 18, wherein the first pass transistor is a non-native PMOS transistor, and wherein the second pass transistor is a non-native NMOS transistor. |
HIGH-VOLTAGE INPUT RECEIVER USING LOW-VOLTAGE DEVICESCross-Reference to Related Application[0001] This application claims priority to U.S. Nonprovisional Application No. 14/254,706, filed on April 16, 2014, which is herein incorporated by reference in its entirety.Technical Field[0002] This application relates to receivers, and more particularly to a receiver that converts a high-voltage-domain input signal into a received low-voltage-domain signal.Background[0003] As semiconductor technology has advanced into the deep submicron regime, the power supply voltage is scaled down in concert with the scaling down of transistor dimensions. Nevertheless, input/output (I/O) standards from higher-voltage regimes may still need to be supported. But the thick-oxide transistors in modern high-density integrated circuits may not be able to accommodate any voltage higher than some maximum level such as two volts across their gate-source, gate-drain, or source-drain junctions. To safely receive input signals with voltages that exceed such maximum levels, it is conventional to use native transistors in the integrated circuit's input receiver.[0004] An example conventional input receiver 100 is shown in Figure 1. A native NMOS pass transistor 105 has its gate driven by the internal power supply voltage VDD. This internal voltage VDD is lower than a power supply voltage VDDX that is cycled to by a VDDX-domain input signal 102 received at a drain of native pass transistor 105. The level for VDDX depends upon the signaling protocol for input signal 102. For example, one signaling protocol may have input signal 102 cycle between 0 and 3.3V (VDDX) according to its frequency. In contrast, VDD may equal 1.8V or 1.65V, which is a safer level for modern devices. In that regard, if 3.3V were impressed across any pair of terminals for native pass transistor 105 (drain-to-source, gate-to-source, or gate-to-drain), native pass transistor 105 may fail. More generally, VDD equals approximately one-half of VDDX, regardless of the level for VDDX as determined by the signaling protocol.[0005] As input signal 102 rises to VDD, it passes through native pass transistor 105 to its source since the transistor's threshold voltage is zero volts. The gate-to-source voltage for native pass transistor 105 eventually drops to zero, which prevents the source of native pass transistor 105 from rising higher than VDD. Although the drain continues to rise to 3.3V in a cycle of input signal 102, native pass transistor 105 is not strained since there is no more than a VDD voltage difference between its drain and source. Similarly, there is never more than a VDD voltage difference between the gate and drain or between the gate and source of native pass transistor 105.[0006] A receiver such as an inverter 110 powered by the VDD power supply voltage inverts the source voltage to produce a VDD-domain or stepped-down output signal 115 from VDDX-domain input signal 102. Inverter 110 drives output signal 115 to internal circuitry (not illustrated) of the integrated circuit that includes input receiver 100. Although native pass transistor 105 avoids voltage strain problems in converting VDDX-domain input signal 102 into a VDD-domain output signal 115, input receiver 100 suffers from a number of problems. For example, an external source drives input signal 102. Input receiver 100 has no control over this external source. Native pass transistor 105 thus passes whatever duty cycle and slew rate it receives through to inverter 110. The duty cycle and slew rate for VDD-domain output signal 115 from inverter 110 may thus be unacceptably distorted. In addition, further distortion results from input signal 102 oscillating between voltage minimums and voltage maximums that differ from the desired levels of ground and VDDX. Moreover, native devices such as native pass transistor 105 are very sensitive to process variations. Use of input receiver 100 is thus limited to relatively low input signal frequencies such as in the tens of MHz to satisfy a +/- 5% duty cycle error requirement.
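A one-line behavioral model makes the clamping action of paragraph [0005] concrete (a sketch under the stated idealization that the native threshold voltage is exactly 0 V; the function name is invented for illustration):

```python
VDD, VDDX = 1.8, 3.3  # volts, per the example levels above

def native_pass(vin: float, vgate: float = VDD, vthreshold: float = 0.0) -> float:
    """Behavioral model: the source of a native NMOS pass transistor
    tracks its drain but cannot rise above vgate - vthreshold."""
    return min(vin, vgate - vthreshold)

# The drain may swing all the way to VDDX, but the source saturates at VDD:
for vin in (0.0, 1.0, VDD, 2.5, VDDX):
    print(f"input = {vin:.2f} V  ->  source = {native_pass(vin):.2f} V")
```

Because the output simply saturates, any duty-cycle or slew-rate error in the input carries straight through, which is the limitation the next paragraph addresses.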
[0007] Accordingly, there is a need in the art for step-down input receivers providing more accurate performance in higher frequency domains.Summary[0008] An input receiver is provided that includes a waveform chopper for receiving an input signal. The waveform chopper chops the input signal into a first chopped signal and a second chopped signal with regard to a threshold voltage such as an internal power supply voltage VDD. The waveform chopper passes the input signal to drive the first chopped signal when the input signal cycles above VDD. However, the waveform chopper clamps the first chopped signal at VDD when the input signal cycles below VDD. Similarly, the waveform chopper passes the input signal to drive the second chopped signal when the input signal cycles below VDD but clamps the second chopped signal at VDD when the input signal cycles above VDD.[0009] A VDDX-domain receiver combines the chopped signals into a VDDX-domain combined signal. VDDX is a power supply voltage of approximately twice VDD. The VDDX-domain receiver charges the combined signal to VDDX when the first chopped signal is clamped at VDD. Conversely, the VDDX-domain receiver discharges the combined signal to ground when the second chopped signal is clamped at VDD.Brief Description of the Drawings[0010] Figure 1 is a circuit diagram of a conventional input receiver for converting a high-voltage-domain input signal into a received low-voltage-domain signal.[0011] Figure 2 is a block diagram of an input receiver for converting a high-voltage-domain input signal into a received low-voltage-domain signal having an improved slew rate, duty cycle, minimum and maximum voltage levels, and high-frequency performance in accordance with an embodiment of the disclosure.[0012] Figure 3 is a circuit diagram of the waveform chopper in the input receiver of Figure 2.[0013] Figure 4 is a timing diagram for the input signal and the corresponding first chopped signal and second chopped signal for the waveform chopper of Figure 3.[0014] Figure 5 is a circuit diagram of the chopped waveform receiver in the input receiver of Figure 2.[0015] Figure 6 is a flowchart for an example method of use for the input receiver of Figure 2.[0016] Embodiments of the disclosed input receiver and their advantages are best understood by referring to the detailed description that follows. It should be appreciated that like reference numerals are used to identify like elements illustrated in one or more of the figures. Detailed Description[0017] An input receiver 200 illustrated in Figure 2 steps down a high-voltage-domain (VDDX) input signal 102 into a reduced-voltage-domain (VDD) output signal 225 with improved duty cycle, slew rate, and voltage minimum and maximum levels. To do so, input receiver 200 receives VDDX-domain input signal 102 at a waveform chopper 205. In contrast, conventional receiver 100 discussed with regard to Figure 1 receives input signal 102 at native pass transistor 105. 
Input signal 102 is intended to cycle between 0 volts and the high-voltage-domain supply voltage VDDX although it may be off from these minimum and maximum voltage levels due to inaccuracies in the input signal source (not illustrated). In that regard, input receiver 200 has no control over the quality of input signal 102 with regard to its slew rate, duty cycle, and maximum and minimum voltage levels since the external source (not illustrated) generates input signal 102 and drives it to the die (not illustrated) that includes input receiver 200. Waveform chopper 205 chops input signal 102 with regard to a threshold voltage such as an internal power supply voltage VDD that equals approximately VDDX/2. For example, in one embodiment VDDX may equal 3.3 V whereas VDD may equal 1.8 V or 1.65 V. The VDD voltage level is low enough such that low-voltage-domain devices (not illustrated) downstream from input receiver 200 are not damaged by it. For example, the die containing input receiver 200 may include both thick-gate-oxide devices as well as thin-gate-oxide devices. The thick-gate-oxide devices are robust to the relatively high level for VDD such as 1.8V. In contrast, the thin-gate-oxide devices are not robust to such voltage levels but instead can withstand only reduced voltage levels such as 1 V or lower.[0018] The devices in input receiver 200 may comprise thick-gate-oxide devices so that they are robust to VDD voltage levels. However, these devices are not robust to VDDX voltage differences. Although input receiver 200 receives VDDX-domain input signal 102, the design of input receiver 200 ensures that each device in input receiver 200 never has an unsafe voltage level (e.g., VDDX) across any of its terminals (gate-to-source, gate-to-drain, and drain-to-source) as will be explained further herein.[0019] Waveform chopper 205 produces two chopped signals: a first chopped signal (padsig_p) 230 that cycles between VDD and VDDX, and a second chopped signal (padsig_n) 240 that cycles between 0 V and VDD. Waveform chopper 205 forms first and second chopped signals padsig_p 230 and padsig_n 240 with regard to, for example, VDD. More generally, VDD is representative of a threshold voltage for the chopping performed by waveform chopper 205. In that regard, note again that input signal 102 cycles (ideally) between 0 V and VDDX. Each cycle of input signal 102 will thus include a lower-half cycle in which input signal 102 cycles between ground and VDD and an upper-half cycle in which input signal 102 cycles between VDD and VDDX. Waveform chopper 205 substantially passes each upper-half cycle of input signal 102 as first chopped signal padsig_p 230. But waveform chopper 205 clamps first chopped signal padsig_p 230 at VDD when input signal 102 drops below VDD in its lower-half cycles. Each cycle of first chopped signal padsig_p 230 will thus include a clamped half-cycle and a non-clamped half-cycle. The clamped half-cycles correspond to the lower-half cycles for input signal 102. The non-clamped half-cycles correspond to the upper-half cycles for input signal 102. In the clamped half-cycles, first chopped signal padsig_p 230 is clamped at VDD during that portion of the lower-half cycle for input signal 102 when it drops below VDD. In the remaining portions of the clamped half-cycles, first chopped signal padsig_p 230 substantially equals input signal 102 as it rises from VDD or falls toward VDD. 
Similarly, in the non-clamped half-cycles, first chopped signal padsig_p 230 substantially equals input signal 102 as it rises and falls between VDD and VDDX in its upper-half cycles.[0020] Similarly, waveform chopper 205 substantially passes each lower-half cycle of input signal 102 as a non-clamped half cycle of second chopped signal padsig_n 240. However, waveform chopper 205 clamps second chopped signal padsig_n 240 at VDD when input signal 102 rises above VDD in its upper-half cycles. The upper-half cycles for input signal 102 thus correspond to the clamped half cycles for second chopped signal padsig_n 240. As discussed above, input receiver 200 has no control over the quality of input signal 102. So the upper-half cycles for input signal 102 may not reach the desired or intended voltage level of VDDX. Similarly, the lower-half cycles for input signal 102 may not reach 0 V or ground (VSS). Nevertheless, one can be reasonably confident that input signal 102 is above VDD for a majority of the time in each upper-half cycle. First chopped signal padsig_p 230 will thus be clamped at VDD for most (or at least an appreciable portion) of each of its clamped half cycles. Similarly, one can be reasonably confident that input signal 102 is below VDD for a majority of the time in each lower-half cycle. Second chopped signal padsig_n 240 will thus be clamped at VDD for most (or at least an appreciable portion) of each of its clamped half cycles.[0021] One can therefore appreciate that a "combined" signal that cycles between 0 V and VDDX may be advantageously reconstructed from the clamped half cycles for first and second chopped signals padsig_p 230 and padsig_n 240. For example, suppose that such a combined signal was driven to VDDX whenever first chopped signal padsig_p 230 is clamped at VDD. Similarly, suppose that the combined signal was grounded whenever second chopped signal padsig_n 240 is clamped at VDD. Since first chopped signal padsig_p 230 is clamped at VDD as input signal 102 drops from VDD to ground whereas second chopped signal padsig_n 240 is clamped at VDD as input signal 102 rises from VDD to VDDX, the resulting combined signal is inverted or 180 degrees out of phase with input signal 102. Generating a combined signal in this fashion is quite advantageous because the combined signal will then have the desired minimum and maximum voltage levels. In contrast, these minimum and maximum voltage levels cannot be guaranteed for input signal 102. Moreover, because the clamped VDD levels occur for most of (or at least an appreciable portion of) each clamped half cycle for first and second chopped signals padsig_p 230 and padsig_n 240, the resulting combined signal would then have a desirable duty factor and slew rate. In contrast, the duty cycle and slew rates for input signal 102 have no such guarantee of a desirable duty factor, slew rate, or maximum and minimum voltage levels.
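The chop-and-recombine idea of the last two paragraphs can be captured in a few lines of behavioral pseudocode (a sketch of the signal-level behavior only, not the transistor circuit; the function names are invented for illustration):

```python
VDD, VDDX = 1.8, 3.3  # example supply levels from the description

def chop(vin: float) -> tuple:
    """Waveform chopper 205, behaviorally: padsig_p rides the upper half
    cycle and clamps at VDD below it; padsig_n is the mirror image."""
    padsig_p = max(vin, VDD)   # clamped at VDD when vin < VDD
    padsig_n = min(vin, VDD)   # clamped at VDD when vin > VDD
    return padsig_p, padsig_n

def combine(padsig_p: float, padsig_n: float, prev: float) -> float:
    """Chopped waveform receiver 210, behaviorally: drive to VDDX while
    padsig_p is clamped, to ground while padsig_n is clamped, else hold."""
    if padsig_p <= VDD:        # first chopped signal clamped -> charge
        return VDDX
    if padsig_n >= VDD:        # second chopped signal clamped -> ground
        return 0.0
    return prev                # neither clamped: hold the previous value

# A full-swing input cycle yields an inverted full-swing combined signal:
out = 0.0
for vin in (0.0, 0.9, VDD, 2.5, VDDX, 2.5, VDD, 0.9, 0.0):
    p, n = chop(vin)
    out = combine(p, n, out)
    print(f"vin={vin:.2f}  padsig_p={p:.2f}  padsig_n={n:.2f}  combined={out:.2f}")
```

Running the loop shows the combined output at VDDX while the input is low and at ground while the input is high, i.e., 180 degrees out of phase with the input as described above.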
Instead, the combination of waveform chopper 205 and VDDX-domain chopped waveform receiver 210 improves the slew rate and duty cycle and enforces the desired minimum and maximum voltage levels for combined signal 235 as discussed above.[0023] Given these improvements in slew rate, duty cycle, and the signal voltage minimum and maximum levels, a step-down device 215 such as a native pass transistor may then be used to form a VDD-domain output signal 245 from combined signal 235. As discussed analogously with regard to native pass transistor 105 of Figure 1, step-down device 215 may comprise a MOS native pass transistor (not illustrated) that receives VDDX-domain combined signal 235 at one drain/source terminal to pass a VDD-domain output signal 245 at its remaining drain/source terminal as controlled by VDD being applied to its gate. There is no voltage threshold loss in a native transistor so VDD-domain output signal 245 may saturate at VDD (as opposed to VDD minus some threshold voltage) when combined signal 235 cycles above VDD.[0024] In some embodiments, a hysteresis circuit 220 such as a Schmitt trigger may further process VDD-domain output signal 245 to form a final VDD-domain output signal 225 as discussed further herein. Alternatively, VDD-domain output signal 245 may be used as an output signal without any hysteresis treatment.[0025] Because of the slew rate and duty cycle adjustment and the enforcement of the desired voltage maximum and minimum levels by the combination of waveform chopper 205 and VDDX-domain chopped waveform receiver 210, input signal 102 may have a relatively high frequency such as hundreds of MHz or higher yet it may be stepped down from the VDDX domain to the VDD domain without loss of fidelity. These advantageous features may be better appreciated with reference to the following example embodiments.[0026] A circuit diagram for an example waveform chopper 205 is shown in Figure 3. A voltage divider formed by a capacitor 300 in series with a resistor 305 receives input signal 102 at a first terminal 302 of capacitor 300. Resistor 305 couples between a power supply node supplying the internal power supply voltage VDD and a remaining second terminal 301 for capacitor 300. Should input signal 102 be grounded, a voltage (designated as Vbias) for second terminal 301 will thus settle to VDD. As input signal 102 rises to VDD, Vbias will rise slightly higher than VDD but to a level lower than VDDX before settling again to VDD as input signal 102 continues to rise to VDDX. The actual amount of voltage increase over VDD for Vbias depends upon the voltage division as determined by the resistance of resistor 305 and capacitance of capacitor 300. Conversely, as input signal 102 falls from VDDX to VDD, Vbias will be pulled temporarily lower than VDD before again settling to its default level of VDD as input signal 102 continues to cycle towards ground and then back towards VDD.[0027] These temporary increases and decreases of Vbias with respect to its default level of VDD are advantageous because Vbias biases the gates of a PMOS pass transistor 310 and an NMOS pass transistor 315 in waveform chopper 205. The drain/source terminals for PMOS pass transistor 310 couple between first terminal 302 of capacitor 300 and an output node 320 for carrying first chopped signal padsig_p 230. Similarly, the drain/source terminals for NMOS pass transistor 315 couple between first terminal 302 and an output node 325 for carrying second chopped signal padsig_n 240.
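Before walking through the transistor-level operation, the chop-and-recombine behavior described in the preceding paragraphs can be summarized with a small behavioral model. The sketch below is illustrative only: a plain Python model of the ideal clamping and recombination, not the circuit of Figure 3, and the 3.3 V supply value follows the example given earlier.

# Behavioral sketch of waveform chopper 205 and chopped waveform
# receiver 210 (illustrative only; not the transistor-level circuit).

VDDX = 3.3          # high-voltage-domain supply (example value)
VDD = VDDX / 2      # internal supply used as the chopping threshold

def chop(vin):
    """Ideal chopping of one input sample with regard to VDD."""
    padsig_p = max(vin, VDD)   # clamped at VDD when vin < VDD
    padsig_n = min(vin, VDD)   # clamped at VDD when vin > VDD
    return padsig_p, padsig_n

def combine(padsig_p, padsig_n, previous):
    """Reconstruct the combined signal from the clamped half-cycles.

    Driven to VDDX while padsig_p is clamped at VDD, grounded while
    padsig_n is clamped at VDD; otherwise the previous state holds.
    Note the result is inverted relative to the input, as described.
    """
    if padsig_p == VDD:        # input below VDD: drive high
        return VDDX
    if padsig_n == VDD:        # input above VDD: drive low
        return 0.0
    return previous

# Example: a crude triangle-wave input cycling between 0 V and VDDX.
combined = 0.0
for step in range(8):
    vin = VDDX * abs(step - 4) / 4.0
    p, n = chop(vin)
    combined = combine(p, n, combined)
    print(f"vin={vin:.2f}  padsig_p={p:.2f}  padsig_n={n:.2f}  combined={combined:.2f}")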
The operation of NMOS pass transistor 315 will be discussed first.[0028] As input signal 102 rises from 0 V to VDD, Vbias will jump slightly higher than VDD as discussed above. This rise in the gate voltage on NMOS pass transistor 315 helps it pass as much as possible of the rising edge of input signal 102 through to second chopped signal padsig_n 240. But note that NMOS pass transistor 315 is not a native transistor. This is advantageous in that process variations for second chopped signal padsig_n 240 are reduced but at the cost of a threshold voltage loss in the rising edge of second chopped signal padsig_n 240 in comparison to the rising edge of input signal 102. This threshold voltage loss is reduced by having Vbias drive the gate of NMOS pass transistor 315 as opposed to simply biasing this gate with VDD. In addition, an NMOS clamping transistor 330 has a source coupled to output node 325 and a drain coupled to a power supply node providing VDD. The gate of NMOS clamping transistor 330 is driven by the input signal 102. Although clamping NMOS transistor 330 is also a non-native transistor, its gate voltage will rise toward VDDX as input signal 102 rises to VDDX. Thus, even with a threshold voltage loss, clamping NMOS transistor 330 may readily clamp second chopped signal padsig_n 240 at VDD as input signal 102 rises above VDD towards VDDX.[0029] Operation of PMOS pass transistor 310 is analogous. As input signal 102 rises to VDDX, Vbias on the gate of PMOS pass transistor 310 becomes a virtual ground since Vbias will settle to VDD. As known in the PMOS arts, PMOS transistors pass a strong logic 1. Thus PMOS pass transistor 310 has no issue with regard to passing the rising edge of input signal 102 through to first chopped signal padsig_p 230 as input signal 102 rises from VDD to VDDX. However, PMOS transistors in general will pass a weak logic 0. To mitigate a resulting distortion on passing the falling edge of input signal 102 as it falls from VDDX to VDD, Vbias is temporarily pulled below VDD due to the effect of capacitor 300 as input signal 102 falls from VDDX to VDD. In this fashion, PMOS pass transistor 310 may pass more of the falling edge for input signal 102 through to first chopped signal padsig_p 230 as input signal 102 drops to VDD. In addition, a clamping PMOS transistor 335 has a source coupled to output node 320 and a drain coupled to a power supply node carrying VDD. The gate of clamping PMOS transistor 335 is driven by input signal 102. Clamping PMOS transistor 335 will thus be switched on while input signal 102 drops below VDD to clamp first chopped signal padsig_p 230 at VDD.[0030] Transistors 310, 315, 330, and 335 may all comprise thick-gate-oxide transistors such that they are robust to VDD-level voltage differences across their terminals. The biasing of the gates of pass transistors 310 and 315 with Vbias protects these transistors as input signal 102 rises to VDDX. Similarly, the biasing of both the source of clamping transistor 335 and the drain of clamping transistor 330 to VDD protects the clamping transistors as input signal 102 rises to VDDX.[0031] An example waveform for input signal 102 is shown in Figure 4 along with waveforms for corresponding first and second chopped signals padsig_p 230 and padsig_n 240.
Second chopped signal padsig_n 240 is clamped at VDD for most of each upper-half cycle of input signal 102 as input signal 102 rises above VDD. Similarly, first chopped signal padsig_p 230 is clamped at VDD for most of each lower-half cycle for input signal 102 as input signal 102 falls below VDD. One can thus appreciate that the clamped half cycles in which first chopped signal padsig_p 230 is clamped at VDD and the clamped half cycles in which second chopped signal padsig_n 240 is clamped at VDD have relatively attractive duty cycles. As will be explained below, chopped waveform receiver 210 advantageously combines the clamped half cycles; in other words, while first chopped signal padsig_p 230 is at its VDD clamped level, combined signal 235 is driven to a logical one level (VDDX) whereas combined signal 235 is discharged to a logical zero level (VSS) while second chopped signal padsig_n 240 is clamped at VDD. The "good" half-cycles in chopped signals padsig_p 230 and padsig_n 240 are retained (the clamped half cycles) whereas their "bad" half-cycles are discarded (the non-clamped half cycles). In this fashion, the problems discussed earlier with regard to the prior art are conquered: input signal 102 may have an undesirable slew rate and minimum/maximum levels yet it is processed into combined signal 235 having the desired minimum level (VSS or ground), the desired maximum level (VDDX or 2*VDD), a desirable slew rate, and a desirable duty cycle.[0032] An example chopped waveform receiver 210 is shown in Figure 5. First chopped signal padsig_p 230 controls a first switch such as a PMOS transistor 500. The source of PMOS transistor 500 is tied to a power supply node for providing VDDX whereas its gate is driven by first chopped signal padsig_p 230. The clamped level of VDD for first chopped signal padsig_p 230 thus acts as a virtual ground for PMOS transistor 500 and switches this transistor fully on so that it charges its drain to VDDX when first chopped signal padsig_p 230 is clamped at VDD. A PMOS transistor 505 couples between the drain of PMOS transistor 500 and a resistor R1. The bias signal Vbias drives the gate of PMOS transistor 505 so that PMOS transistor 505 is also fully on when first chopped signal padsig_p 230 clamps at VDD.[0033] Second chopped signal padsig_n 240 controls a second switch such as an NMOS transistor 515. The source of NMOS transistor 515 is tied to ground and its gate is driven by second chopped signal padsig_n 240. As shown in Figure 4, second chopped signal padsig_n 240 cycles down to VSS while first chopped signal padsig_p 230 is clamped at VDD. Thus, when PMOS transistor 500 is switched on, NMOS transistor 515 is switched off. The drain of NMOS transistor 515 couples to a source of another NMOS transistor 510 having a gate driven by Vbias. Thus, NMOS transistor 510 will also be off when NMOS transistor 515 is off. The drain of NMOS transistor 510 couples to a resistor R2 in series with resistor R1. Combined signal 235 is driven from a node between resistors R1 and R2. In general, downstream devices (not illustrated) that process combined signal 235 have a high input impedance such that relatively little current ever flows through resistors R1 or R2.
The result is that when PMOS transistors 500 and 505 are switched on and transistor 515 is switched off, combined signal 235 is driven to VDDX since there is effectively no resistive voltage drop across resistor R1.[0034] When second chopped signal padsig_n 240 is clamped at VDD, both NMOS transistors 510 and 515 are switched on whereas PMOS transistors 505 and 500 are off. Combined signal 235 is thus discharged to ground in response to chopped signal padsig_n 240 being clamped at VDD. A PMOS transistor 520 couples between a power supply node providing VDD and the drain of PMOS transistor 500. PMOS transistor 520 is thus driven on when second chopped signal padsig_n 240 is clamped at VDD (which discharges combined signal 235) to protect PMOS transistor 500 from unsafe voltage levels. In that regard, PMOS transistor 500 has its source tied to VDDX and thus cannot have zero volts at its drain or it would be damaged. PMOS transistor 520 prevents the drain of PMOS transistor 505 from falling below VDD. Similarly, an NMOS transistor 525 has its source coupled to a power supply node providing VDD and its drain coupled to the drain of NMOS transistor 515. When first chopped signal padsig_p 230 is clamped at VDD, NMOS transistor 525 is switched on to charge the source of NMOS transistor 510 to VDD. In this fashion, NMOS transistor 510 is protected from excessive voltage levels since its drain is charged to VDDX at that time.[0035] In one embodiment, chopped waveform receiver 210 may be deemed to comprise a means for combining first chopped signal padsig_p 230 and the second chopped signal padsig_n 240 into combined signal 235 that is charged to VDDX when first chopped signal padsig_p 230 equals VDD and that is grounded when second chopped signal padsig_n 240 equals VDD.[0036] Optional hysteresis generator 220 may comprise a Schmitt trigger or other suitable device. The resulting hysteresis is beneficial to alleviate the "shoulders" shown in Figure 4 for first and second chopped signals padsig_p 230 and padsig_n 240 as these signals approach their clamped levels of VDD. These irregularities in voltage occur due to pass transistors 310 and 315 being non-native and thus having non-zero threshold voltages. Hysteresis generator 220 has a high voltage threshold that input signal 102 must cross for final output signal 225 to be driven high to VDD. This high voltage threshold may be higher than VDD so that hysteresis generator 220 is not influenced by the irregularity in first chopped signal padsig_p 230 as first chopped signal padsig_p 230 falls towards VDD. Similarly, hysteresis generator 220 may have a low voltage threshold that is lower than VDD so that hysteresis generator 220 is not influenced by the irregularity in second chopped signal padsig_n 240 as second chopped signal padsig_n 240 rises to VDD. In this fashion, the duty cycle for final output signal 225 may be improved.[0037] Figure 6 is a flowchart for an example method of operation for an input receiver in accordance with an embodiment of the disclosure. The method begins with a step 600 of receiving an input signal that cycles between approximately ground and VDDX, VDDX being approximately twice an internal power supply voltage VDD. A step 605 comprises chopping the input signal into a first chopped signal that substantially equals the input signal when the input signal is greater than VDD and equals VDD when the input signal is less than VDD.
Similarly, the method includes a step 610 of chopping the input signal into a second chopped signal that substantially equals the input signal when the input signal is less than VDD and equals VDD when the input signal is greater than VDD. Finally, the method includes a step 615 of combining the first chopped signal and the second chopped signal into a combined signal by charging the combined signal to VDDX when the first chopped signal equals VDD and by grounding the combined signal when the second chopped signal equals VDD.[0038] As those of some skill in this art will by now appreciate and depending on the particular application at hand, many modifications, substitutions and variations can be made in and to the materials, apparatus, configurations and methods of use of the devices of the present disclosure without departing from the spirit and scope thereof. In light of this, the scope of the present disclosure should not be limited to that of the particular embodiments illustrated and described herein, as they are merely by way of some examples thereof, but rather, should be fully commensurate with that of the claims appended hereafter and their functional equivalents. |
Video sources may be located on the Internet and particular videos at those sources may be selected for subsequent replay by using graphical controls provided, for example, in connection with a browser. These controls may permit the user to select particular video segments for subsequent replay by adding them to a playlist. Then, when the user has assembled the playlist in the desired order, play of the playlist can be selected. The playlist video may then be displayed for the user on a remote display, such as a high definition television display. At the same time, the user's computer screen may display a control view which allows the user to view and add annotations and to control the play of a video on the high definition television screen. |
1.A method for video processing, comprising:receiving and rendering video on a computer that includes a display;sending the video for remote display on a television;enabling playback of the resulting stream to be controlled from an interface that is overlaid on a portion of the computer display;providing an icon for indicating availability of an annotation associated with the display on the television;providing an icon on the television when the annotation is available; andenabling a user to select an annotation for display on the computer display.2.The method of claim 1, wherein said receiving comprises receiving a video selected from the Internet for display on a television.3.The method of claim 2, further comprising enabling said video to be queued in a playlist for playing on said television.4.The method of claim 1, further comprising automatically selecting among a plurality of displays to display the video.5.The method of claim 1, further comprising providing a reduced size graphical user interface on said computer display for controlling display on said television.6.The method of claim 1, further comprising combining the video with additional information available on the computer and transmitting the combined information for display on the television.7.The method of claim 5, further comprising providing an icon on said reduced size graphical user interface to indicate when an annotation is available for the video being displayed on said television.8.A device for video processing, comprising:means for receiving a request for content for playback;means for requesting the content and any available annotations for the content from a remote server in response to the request for the content;means for displaying the content on a television while displaying a user interface on a computer to control the television display;means for providing an icon for indicating availability of an annotation associated with display on the television;means for providing an icon on the television when the annotation is available; andmeans for enabling a user to select an annotation for display on the computer display.9.The device of claim 8, further comprising means for providing said remote server with an authorization associated with the request for content and annotations.10.A method for video processing, comprising:receiving a request for content for playback;in response to the request for the content, requesting the content and any available annotations for the content from a remote server;displaying the content on a television while displaying a user interface on the computer to control the television display;providing an icon for indicating availability of an annotation associated with the display on the television;providing an icon on the television when the annotation is available; andenabling a user to select an annotation for display on the computer display.11.The method of claim 10, further comprising providing the remote server with an authorization related to the request for content and annotations.12.A device for video processing, comprising:means for receiving a video identification;means for identifying any annotations associated with the video in response to the video identification;means for filtering the annotations based on user preferences;means for providing the user with the annotations and an indication of where the annotations should be played during playback of the video;means for displaying content on a television while displaying a user interface on a computer to control the television display;means for providing an icon for indicating availability of an annotation associated with display on the television;means for providing an icon on the television when the annotation is available; andmeans for enabling a user to select an annotation for display on the computer display.13.The device of claim 12, further comprising means for filtering the annotations based on the annotation author.14.The device of claim 13, further comprising means for filtering the annotations based on the user's friend list.15.A method for video processing, comprising:receiving a video identification;identifying any annotations associated with the video in response to the video identification;filtering the annotations based on user preferences;providing the annotations to the user and an indication of where the annotations should be played during playback of the video;displaying content on the television while displaying a user interface on the computer to control the television display;providing an icon for indicating availability of an annotation associated with the display on the television;providing an icon on the television when the annotation is available; andenabling a user to select an annotation for display on the computer display.16.The method of claim 15, further comprising filtering the annotations based on the annotation author.17.The method of claim 16, further comprising filtering the annotations based on the user's friend list.18.An apparatus for video processing, comprising:a processor to control a first display and a second display, to provide, to a first speaker associated with the first display, audio associated with content generated by an application operated by the processor on the first display for playback by the first speaker, and to provide, to a second speaker associated with the second display, audio associated with content generated by another application operated by the processor on the second display for playback by the second speaker; anda storage coupled to the processor.19.The apparatus of claim 18, wherein said processor is operative to create, on said apparatus, an audio driver for said content to be played on said second display.20.The apparatus of claim 18, wherein said processor is operative to display said content on a television while displaying a user interface on said apparatus to control said television display. |
Remote control of TV monitorsBackgroundThe present application relates generally to television displays, and more particularly to enabling television displays to be remotely controlled.Television displays are becoming increasingly popular for the display of web-based content. Thus, a television display, including a high definition television display, can be used to display information that is accessed by a user from the Internet, typically through the user's personal computer. The personal computer can control the user's experience of video on the television, including additional information that enhances the video content.In many cases, a personal computer is not directly connected to a television display via a lead, but instead provides a wireless connection, such as an Intel Wireless Display (WiDi) wireless connection. In this way, the user can send a video obtained from the Internet for display on a television screen.DRAWINGSFigure 1 is a block diagram of an embodiment of the present invention;Figure 2 is a front elevational view of a computer display in accordance with one embodiment;Figure 3 is a diagram of a personal computer display in accordance with one embodiment of the present invention;Figure 4 is a flow chart of one embodiment of the present invention;Figure 5 is a flow chart of one embodiment of the present invention;Figure 6 is a flow chart of another embodiment of the present invention;Figure 7 is a flow chart of still another embodiment of the present invention;Figure 8 is a diagram of a client server system in accordance with one embodiment of the present invention;Figure 9 is a flow chart of still another embodiment of the present invention;Figure 10 is a flow diagram of an audio manager in accordance with one embodiment of the present invention.Detailed DescriptionAccording to some embodiments, content obtained from the Internet, such as video content obtained from a site such as YouTube, may first be rendered on a user's personal computer (PC) and then displayed on a television display remote from the host computer used to download the information. For example, information can be wirelessly transmitted from a personal computer to an adapter for display on a television. The controls for the television display can be provided at a location other than the television, i.e., displayed on the host computer display. The host computer can be used for other functions by simply displaying the television controls in a reduced size window on the host computer display. This allows the rest of the host computer display screen to be used to run other functions.Referring to Figure 1, a remote video display system can include media sources 12a, 12b, and 12n. These media sources may be any kind of computer application, video information, or audio information that may be streamed from the Internet, available on a local area network, or stored on a device in the local system. The media can be obtained by the source manager 14 coupled to the video manager core 16. Source manager 14 extracts media from the different sources. In one embodiment, the source manager can add different wrappers to media from different sources, such as YouTube, Viddler, or Hulu. In another embodiment, the source manager can add different wrappers for a host based media player or productivity application.Video manager core 16 may be part of the user's host computer 11.
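To picture the per-source wrapper idea just described, the following minimal sketch hides media from different sources behind a common interface. The class and method names, and the placeholder locators, are assumptions for illustration; the actual source manager 14 is not limited to this structure.

# Minimal sketch of per-source wrappers behind a common interface
# (assumed names; illustrative of source manager 14, not an actual API).

from abc import ABC, abstractmethod

class MediaSource(ABC):
    @abstractmethod
    def fetch_stream(self, media_id: str) -> str:
        """Return a stream locator for the requested media item."""

class YouTubeSource(MediaSource):
    def fetch_stream(self, media_id: str) -> str:
        return f"https://youtube.example/watch?v={media_id}"  # placeholder URL

class LocalFileSource(MediaSource):
    def fetch_stream(self, media_id: str) -> str:
        return f"file:///media/{media_id}"

class SourceManager:
    """Hands each request to the wrapper registered for that source."""
    def __init__(self):
        self._wrappers = {"youtube": YouTubeSource(), "local": LocalFileSource()}

    def get_media(self, source: str, media_id: str) -> str:
        return self._wrappers[source].fetch_stream(media_id)

manager = SourceManager()
print(manager.get_media("youtube", "abc123"))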
The host computer 11 may be, for example, a cellular phone, a laptop computer, a desktop computer, a mobile internet device (MID), a netbook, a storage server, or a tablet computer, to name a few examples. The video manager core coordinates all other components and holds items such as settings and preferences. It is also responsible for rendering multiple media items into a presentation to be displayed on a remote display. For example, it can support picture-in-picture, where two different media sources are composited before being sent for display.The host computer 11 can include a display manager 18 that controls a dedicated display coupled to the host computer 11 and a remote display on the television. Thus, display manager 18 can be coupled to wireless video manager 26, which controls the remote display on the television screen. The wireless display manager can be based on Intel's WiDi technology platform. Display manager 18 can also discover and configure a variety of available displays. It also determines which displays are best for the available media. For example, it can determine how close a display is to the host computer 11 using, for example, wireless proximity sensing. As another example, display manager 18 may determine which television was used last or most often with the display manager and default the remote display to that television. In other cases, host display manager 18 may make its decision based on privacy settings that ensure that sensitive video, as defined by the user, will not be displayed on a shared screen.The playlist manager 15 controls the media play order. The user can queue multiple media files and/or streams in the playlist. The playlist manager maintains the playlist state between reboots, so the user can resume the playlist from the point at which it stopped.In some embodiments, recommendation engine 20 provides recommendations to the user to view media related to content that has been previously viewed.The controller view 28 shown in FIG. 2 is a control graphical user interface on the host computer display 36 that is displayed, for example, overlaid in a reduced size on a user's personal computer monitor display. Player view 30 is an actual view of a video that can be rendered, for example, on television screen 38, which is coupled wirelessly or by a wired connection.Thus, referring to FIG. 3, the reduced size controller view displayed on the user's host computer can include controls 32 to control playback of the video information. It may also include additional controls 34 to adjust settings, play a video selected from a video playlist, view the history of displayed videos, and control the size of display 38 (FIG. 1). Likewise, graphical user interface button 33 can be used to select a display type (interlaced or progressive) and resolution (e.g., 720p).Annotation engine 22 (Fig. 1) collects contextual annotations for display with the media currently being played. In some embodiments, the contextual annotations and tags can be overlaid on the actual media being played. In other embodiments, these contextual annotations can be displayed using controller view 28. Such tags may include product placement tags, additional information about places, things, and people displayed in the media, comments from the user's social network, and the like.
These annotations may be provided by the actual content publisher, by a third party vendor, or as an add-on service, to name a few possibilities.To implement the annotation engine 22 functionality, the controller view 28 shown in Figure 3 can be used. In particular, the controller view can include a timeline display user interface 31 that shows the amount of time that has elapsed in the video that is currently being displayed. Thus, in the illustrated example, the timeline shows the current time (10:30), the total time for the presentation (sixty minutes), and may include a bar 41 indicating how much of the video has been displayed. Along the top edge of the timeline user interface 31 may be markers 29 indicating the availability of annotations associated with the video displayed at particular times, each time being indicated by the location of the marker 29 along the timeline user interface 31. Additionally, a small marker (not shown) may be present on the television display 38 during display on that screen. This reminds the viewer that an annotation is available. Moreover, on computer display 36, the same marker icon 29 can be used in association with timeline 31. The user can select the appropriate marker icon 29 on the controller view 28 to cause the annotation to be displayed not on the television display 38 but on the computer display 36.Thus, in some embodiments, the two displays can operate simultaneously. For example, in some embodiments, the computer display can provide a private display for one user or for fewer users than the television display 38.These annotations can be displayed on the host computer display as small markers 29 on the controller timeline 31. When playback reaches a marker, the appropriate action is triggered and display manager 18 can display a small visual cue to the user on the television display to indicate that context information is available. The user can also extend the controller view on the host computer display to show more information about the annotation. This allows the user to view the annotation without interrupting the entire media experience.In some embodiments, the annotations may be color coded depending on the source and/or type of annotation. For example, a friend's comment can be red, advertisements blue, video annotations green, and all others yellow. Thus, the annotation engine 22 controls when annotations stored in the annotation source 24 are provided on top of the existing video playback. The annotation source 24 can also be located inside the computer 11.Controller view 28 may also include a graphical user interface button 43 for settings. This allows the user to enter a variety of user preferences, including the preferences described below for disambiguating audio and video. Similarly, a graphical user interface button 45 for additional functionality can be provided, including an indication of which annotations are available.In one embodiment, when the user selects the playlist button 35, a drop down menu 47 is generated that includes a plurality of entries 49, one for each of the available videos, in the top-to-bottom sequence in which the videos will be played. Each entry 49 can include a thumbnail 51 from the video and a textual description extracted from the video metadata. The sequence defined in the playlist can be changed by the user.
For example, in one embodiment, each of the entries 49 can be dragged and dropped to reorder the video play sequence.Wireless display manager 26 may be a mechanism for interacting with a wireless display, such as by Intel's WiDi technology, to initiate and establish a connection to a high definition television.In some embodiments, the video for display on the host computer 11 display 36 or on the television screen 38 can be selected by using the user's browser. For example, in one embodiment, a plug-in can provide a graphical user interface button such that when the user views information on the Internet that the user wishes to view later, the user can select (e.g., by mouse click) the button to have this information added to the playlist. Another graphical user interface button 35 on the controller view 28 of FIG. 3 may be selected when the playlist has been defined and the user wishes to play the video. When the user selects "Play", the video selected by the user is displayed on the television screen 38 instead of the personal computer screen 36. As a result, the video is displayed on the TV screen instead of on the personal computer display. However, the controls used to control the display are available on the user's personal computer monitor display. As a result, controls are provided on one screen, and the video is displayed remotely on the television screen.To achieve these capabilities, several steps can be automated so that content can be sent from a video application, browser, or web page to a television, such as a high definition television. This can be done by checking available wireless display adapters or wired adapters, such as a High Definition Multimedia Interface (HDMI) or DisplayPort adapter that has been plugged in. The application opens, scans for, and connects to the nearest adapter. Any available wireless device discovery technology can be used to locate the nearest wireless adapter. The screen mode can be set to "extend" to another display (i.e., television 38) and the television 38 can be automatically set to the appropriate high definition settings. The player view 30 on the television can be set to full screen display and the controller view 28 on the host computer can be set to a reduced display. Media viewing can thus be separated from media control.In some embodiments of the invention, the sequences may be implemented in software, hardware, firmware, network resources, or a combination of these. In a software implemented embodiment, the sequences may be implemented via instructions stored on a non-transitory computer readable medium such as a semiconductor, optical, or magnetic memory. The instructions can be executed by a suitable processor. In some embodiments, the instructions may be stored in a memory separate from the processor, and in other cases, an integrated circuit may perform both the storage and the execution of the instructions. For example, video manager core 16 may include storage 17, storage 17 storing instructions 39.Thus, the sequence 39 implemented by the video manager 16 begins with the discovery of an available display, as shown in block 40 of Figure 4. In the case of a wireless display, a wireless discovery procedure can be implemented to identify all available displays or the nearest wireless display (by using a typical conventional wireless discovery protocol). In addition, any wired television display can be identified, for example, because they use an HDMI port.
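A sketch of this discovery-and-selection step is given below. It is a simplified model under assumed names; no real wireless-discovery API is invoked, and it only illustrates the nearest-display and privacy-setting policies described above.

# Simplified model of display discovery and selection (block 40).
# Assumed data and names; no real wireless-discovery protocol is used.

from dataclasses import dataclass

@dataclass
class Display:
    name: str
    distance_m: float      # e.g., from wireless proximity sensing
    last_used: int         # higher means more recently used
    private_ok: bool       # honors the user's privacy settings

def pick_display(displays, video_is_sensitive=False):
    candidates = [d for d in displays if d.private_ok or not video_is_sensitive]
    # Prefer the nearest display, breaking ties by most recent use.
    return min(candidates, key=lambda d: (d.distance_m, -d.last_used))

displays = [
    Display("living-room HDTV", distance_m=3.0, last_used=7, private_ok=False),
    Display("bedroom HDTV", distance_m=8.0, last_used=2, private_ok=True),
]
print(pick_display(displays).name)                 # living-room HDTV
print(pick_display(displays, True).name)           # bedroom HDTV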
A check at diamond 42 determines if a video has been received or a video has been selected for the playlist. The user can browse the Internet and the user's browser can provide buttons to select videos located on the Internet to add to the list for subsequent playback. Local video can be processed in the same way. The list of videos to be played back later is referred to herein as a playlist. Therefore, the user can add any video found on the Internet to the user's playlist. Of course, in some embodiments, the user may call up the playlist display and may reorder and edit the playlist.When the user is ready to play back the video, the user can simply operate the playlist button 35 in the form of a graphical user interface of Figure 3. Thus, the user selects the play/pause button 57 to cause the entire playlist to be played video by video in the order desired by the user, as shown by diamond 42. When the user selects play through the controller view 28 at diamond 42, the television screen can be selected for playback (block 44) and the information is automatically displayed on the television in an appropriate size, as indicated by block 46. At the same time, the controller view 28 graphical user interface is displayed on the host computer display (block 48). For example, the controller view can be a reduced size interface that allows the screen to be used primarily for other functions, while still being able to control the video being played on the television. The computer automatically generates an output to the television (block 49) by processing the video and creating an output presentation (such as including an annotation mark in the output stream or putting two videos together for side-by-side playback). The video is displayed on the television as shown in block 50.Referring to Figure 5, a sequence for implementing annotation engine 22 can be implemented in hardware, software, or firmware, or any combination of these, in accordance with one embodiment. In a software embodiment, a non-transitory medium can store computer executable instructions. At diamond 60, a check is made to determine if the current time is equal to one or more annotation times at which an annotation can be played in parallel with the video currently being played on television 38. If so, an annotation icon is displayed on the television 38 display, as indicated by block 62. The check at diamond 64 determines if an annotation marker is selected on the controller view graphical user interface 28 by selection of one of the markers 29. If so, in one embodiment, the annotations are displayed on computer display 36 without obscuring controller view 28, as indicated by block 66. In some embodiments, any previously displayed markers can be selected at any time.Referring back to FIG. 1, in some embodiments, personal computer 11 can be coupled to Blu-ray player 104. The Blu-ray player can be an external component of computer system 11 or part of computer system 11. The Blu-ray player can be operated in a manner similar to that already described. That is, on a Blu-ray disc, the information for controlling video playback is a stream separate from the information constituting the video content.
Thus, in accordance with some embodiments of the present invention, display manager 18 may decompose control and content information, for example, displaying control information as a controller view on a computer system while displaying video and additional content on the television display.Referring to Figure 6, an implementation of an annotation engine can, in effect, operate on the fly, in accordance with another embodiment of the present invention. That is, the annotations can be combined when the associated video is selected for playback. When the associated video is selected for playback, in one embodiment, the system can automatically contact the remote server to obtain the necessary information about what these annotations are and where they should be inserted. Thus, during the process of calling up the video for playback, annotations can be developed and inserted, and a marker can be provided to indicate where each annotation is active during the playback of the video.Referring to Figure 6, the sequence 70 can be implemented in software, hardware, or firmware, or a combination of these. In a software embodiment, it can be implemented as computer readable instructions stored on a non-transitory computer readable medium. The check at diamond 72 determines if content has been selected. If so, the annotation server can be contacted, as indicated by block 74. In one embodiment, as indicated by block 76, an authorization may be provided to the annotation server to indicate that the user is authorized to use the service provided by the annotation server. In response, the annotation engine can receive annotation content and timestamps indicating where the annotations go relative to the associated video, as indicated by block 78. As indicated by block 80, information regarding timestamps and annotations can be stored for playback if the user so selects. In addition, the marker and other implementation details can be populated at this time or when content playback reaches the timestamp of a particular annotation.For example, referring to FIG. 8, in one embodiment, the network configuration enables computer system 11 to be coupled to television display 38 via a wireless short range network. The computer system can also be connected to remote servers, such as YouTube video server 94 and annotation server 82, via network 92. The annotation server 82 can provide annotations selected by the user or by other entities, and these annotations include timestamps indicating where they go relative to the playback of the associated video.Referring to Figure 7, the operation of the annotation server 82 can be implemented as a sequence, which can be implemented in hardware, software, firmware, or a combination of these. In a software embodiment, the computer executable instructions can be stored on a non-transitory computer readable medium. According to one embodiment, the sequence begins by receiving an identification of the video, as indicated by block 84. For example, a YouTube video clip can be identified. At block 86, the server identifies annotations related to the video. Next, at block 88, the annotations are filtered based on user preferences. For example, a user may wish to obtain only annotations made by friends and relatives or other limited groups.
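The preference-based filtering of blocks 86 and 88 can be pictured with the short sketch below. The record fields and the friends-only rule are assumptions for illustration; the server is not limited to these criteria.

# Illustrative filter for annotations (blocks 86 and 88); the field
# names and friends-only rule are assumptions, not an actual schema.

annotations = [
    {"author": "alice", "time_s": 95, "text": "Great scene"},
    {"author": "spam-bot", "time_s": 97, "text": "Buy now"},
    {"author": "bob", "time_s": 610, "text": "Filmed in Lisbon"},
]

def filter_annotations(annotations, friends):
    """Keep only annotations whose author is on the user's friend list,
    sorted by the timestamp at which each should be played."""
    kept = [a for a in annotations if a["author"] in friends]
    return sorted(kept, key=lambda a: a["time_s"])

for a in filter_annotations(annotations, friends={"alice", "bob"}):
    print(f'{a["time_s"]:>4}s  {a["author"]}: {a["text"]}')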
Thus, for example, a video on YouTube that can be viewed by a large number of people can be annotated by a large number of people, but the user may wish to limit the received annotations only to those annotations of interest.One way to define annotations of interest is to identify the individuals whose annotations are of interest. Thus, as indicated by block 88, the annotations can be filtered based on user-provided preferences. For example, a user may wish to see only comments from friends in a predetermined social network or friends list, or those from authoritative sources such as trusted commentators or publishers. As indicated by block 90, the user may be provided with the selected annotations, along with timestamps indicating where each annotation goes, and an identification of the associated video. The system also allows users to insert their own comments on the video. These comments will be time stamped and saved on the annotation server if the user has the right to do so.Comments can also be filtered locally. Local filtering can be context based, relying on user preferences, user purchase patterns, or other criteria.According to yet another embodiment, inputs to the television or to the computer system can be decomposed. Inputs may be received on computer system 11 and on the associated television 38 during playback of the selected video. For example, some televisions now have keyboards and other input devices associated with them, as well as conventional remote controls. In one embodiment, inputs intended for the two different displays may be disambiguated. For example, a user may wish to make a call using a keyboard associated with the television and may wish the call to be placed through computer system 11. According to some embodiments, user preferences may be provided in advance to indicate which of the two devices (television or computer system) will process a particular input command regardless of which system's input device is used. The input can then be applied as appropriate.Thus, as shown in FIG. 9, the sequence 96 for input decomposition can be implemented in hardware, software, firmware, or a combination of these. In a software-based embodiment, the instructions may be stored in a non-transitory computer readable medium.At block 98, user preferences are received and stored. These preferences indicate which system an input should be applied to when the input is received during an ongoing video presentation. At block 100, an input can be received from a user. Based on those user preferences, the system distributes these inputs to the correct system, either the computer system or the television, as shown in block 102.In other embodiments, the sound may be disambiguated from the display. For example, in some embodiments, when an extended television display is selected for playing a video, any sound that would be generated on the computer system can be generated on the television. Thus, an indication of an incoming telephone call, incoming email, etc. can sound on the television. In some embodiments, this may be undesirable, and the user may specify which of the computer system and the television should be used to generate a particular sound. As a result, the audio output can be disambiguated from the video information.
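One simple way to model this disambiguation is a routing table keyed by the display with which each sound-producing application is associated, with user-preference overrides as described. The sketch below is illustrative only, with assumed names, and stands in for the audio manager behavior detailed next.

# Illustrative routing of sounds to the speaker associated with the
# display on which the sound's source is shown (assumed names only).

SPEAKERS = {"computer": "speaker 118a", "television": "speaker 118b"}

# User preferences can override the default association, as described.
overrides = {"email-alert": "computer"}   # e.g., never chime on the TV

def route_sound(source_app: str, shown_on: str) -> str:
    target = overrides.get(source_app, shown_on)
    return SPEAKERS[target]

print(route_sound("video-player", shown_on="television"))  # speaker 118b
print(route_sound("email-alert", shown_on="television"))   # speaker 118a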
In particular, the audio can be linked to the video such that audio associated with the presentation on the television can be provided on speaker 118b, which is associated with the content that is combined for display on the television, while audio associated with the content on the display of computer system 11 can be played on speaker 118a, which is associated with computer display 36. This allows the computer system to be used more efficiently for other functions while, for example, playing video on a television. Typically, the audio associated with a given graphical display element can be played by a speaker associated with the display on which that graphical view is provided.In accordance with some embodiments of the present invention, audio manager 110 (FIG. 10) may implement the sequences in software, hardware, or firmware, or a combination of these. In a software embodiment, the audio manager can be implemented by computer executable instructions stored on a non-transitory computer readable medium. In one embodiment, audio manager 110 may be part of display manager 18 (FIG. 1).A check at diamond 112 determines if the extended mode display has been activated. If so, a separate display audio driver is created for the standalone or extended display, as indicated by block 114. Thus, for example, when an extended display is placed on a remote television, the sound associated with the elements presented on the remote television display can be sent to the standalone display via the new audio driver for provision on the speaker associated with the standalone television display. At the same time, sounds associated with computer system 11, such as sounds that announce incoming emails, are not sent to the television display. The audio and video are linked together on the separate display, as indicated by block 116. Thus, in some embodiments, sound associated with an independent or extended display is generated in relation to the extended display, and sound associated with the host or base computer system 11 is generated locally on system 11. In some embodiments, it is also possible to program where the sound is produced. For example, sounds generated in association with a television can be programmatically selected to be played on the computer system as well. Likewise, it may be desirable to receive notifications of incoming communications or other audible alerts on the television system.Thus, in some embodiments, it is possible to remotely control the display of a video presentation on a television through the user's personal computer. In some embodiments, this can be done without the need to assign the entire personal computer to this function. That is, the video can be displayed on the television while other operations are performed on the separate display associated with the personal computer.References throughout this specification to "one embodiment" or "an embodiment" mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation encompassed within the present invention. Thus, appearances of the phrases "one embodiment" or "an embodiment" are not necessarily referring to the same embodiment. In addition, the particular features, structures, or characteristics may be set forth in other suitable forms than the specific embodiments shown, and all such forms may be included in the claims of the application.While the invention has been described with respect to a limited number of embodiments, many modifications and variations are apparent to those skilled in the art. The appended claims are intended to cover all such modifications and alternatives. |
The invention provides methods for etching metal interconnect layers. In some examples, a method (100) comprises: obtaining a substrate having a metal interconnect layer deposited over the substrate (402); forming a first dielectric layer (406) on the metal interconnect layer (404); forming a second dielectric layer (408) on the first dielectric layer; forming a capacitor metal layer on the second dielectric layer; patterning and etching the capacitor metal layer and the second dielectric layer to the first dielectric layer to leave a portion of the capacitor metal layer and the second dielectric layer on the first dielectric layer (410); forming an anti-reflective coating to cover the portion of the capacitor metal layer and the second dielectric layer, and to cover the metal interconnect layer (412); and patterning the metal interconnect layer to form a first metal layer and a second metal layer (414). |
1.A method comprising:obtaining a substrate, the substrate having a metal interconnection layer deposited on the substrate;forming a first dielectric layer on the metal interconnection layer;forming a second dielectric layer on the first dielectric layer;forming a capacitor metal layer on the second dielectric layer;patterning and etching the capacitor metal layer and the second dielectric layer to the first dielectric layer to leave portions of the capacitor metal layer and the second dielectric layer on the first dielectric layer;forming an anti-reflective coating to cover the portions of the capacitor metal layer and the second dielectric layer, and to cover the metal interconnection layer; andpatterning the metal interconnection layer to form a first metal layer and a second metal layer.2.The method of claim 1, in which the second dielectric layer is formed directly on the first dielectric layer.3.The method of claim 2, wherein the thickness of the second dielectric layer is between 1000 angstroms and 1600 angstroms.4.The method of claim 2, wherein the second dielectric layer comprises silicon nitride and has a refractive index between 2.3 and 2.9.5.The method according to claim 1, further comprising:forming an interlayer dielectric in contact with the anti-reflective coating; andpatterning and etching the interlayer dielectric to form a via structure.6.The method of claim 1, wherein the thickness of the capacitor metal layer is between 1000 angstroms and 1400 angstroms.7.The method of claim 1, wherein the capacitor metal layer comprises titanium nitride.8.The method of claim 1, wherein the thickness of the first dielectric layer is between 100 and 200 angstroms.9.The method of claim 1, wherein the anti-reflective coating comprises silicon oxynitride.10.The method according to claim 9, wherein the refractive index of the anti-reflective coating is between 1.7 and 2.1.11.The method according to claim 9, wherein the thickness of the anti-reflective coating is between 100 angstroms and 400 angstroms.12.A method comprising:obtaining a substrate, the substrate having a metal layer deposited on the substrate;forming a silicon nitride layer on the metal layer;forming a titanium nitride layer on the silicon nitride layer;patterning and etching the titanium nitride layer and the silicon nitride layer to form a capacitor dielectric, leaving a part of the silicon nitride layer on the metal layer;forming an anti-reflective coating to cover the exposed portions of the titanium nitride layer and the silicon nitride layer; andpatterning the metal layer.13.The method according to claim 12, wherein the thickness of the silicon nitride layer is between 1000 angstroms and 1600 angstroms.14.The method according to claim 13, wherein the refractive index of the silicon nitride layer is between 2.3 and 2.9.15.The method according to claim 13, wherein the thickness of the titanium nitride layer is between 1000 angstroms and 1400 angstroms.16.The method of claim 12, wherein forming the anti-reflective coating comprises forming silicon oxynitride.17.The method according to claim 16, wherein the refractive index of the silicon oxynitride is between 1.7 and 2.1.18.An integrated circuit, comprising:a substrate;a first metal layer and a second metal layer located on the same lateral level above the substrate;a first dielectric disposed on the first metal layer;a first anti-reflective coating disposed on the first dielectric;a second dielectric disposed on the second metal layer;a third dielectric disposed on the second dielectric;a capacitor metal layer,
which is disposed on the third dielectric; anda second anti-reflective coating disposed on the capacitor metal layer and the second dielectric.19.The integrated circuit of claim 18, wherein the capacitor metal layer, the second dielectric, the third dielectric, and the second metal layer implement a capacitor.20.The integrated circuit of claim 18, wherein the first anti-reflective coating and the second anti-reflective coating comprise silicon oxynitride. |
Method for etching metal interconnection layersSummary of the InventionAccording to at least one example of the present disclosure, a method includes: obtaining a substrate having a metal interconnection layer deposited over the substrate; forming a first dielectric layer on the metal interconnection layer; forming a second dielectric layer on the first dielectric layer; forming a capacitor metal layer on the second dielectric layer; patterning and etching the capacitor metal layer and the second dielectric layer to the first dielectric layer, so that portions of the capacitor metal layer and the second dielectric layer are left on the first dielectric layer; forming an anti-reflective coating to cover the portions of the capacitor metal layer and the second dielectric layer, and to cover the metal interconnection layer; and patterning the metal interconnection layer to form a first metal layer and a second metal layer.According to at least one example of the present disclosure, a method includes forming a silicon nitride layer on a metal layer; forming a titanium nitride layer on the silicon nitride layer; patterning and etching the titanium nitride layer and the silicon nitride layer to form a capacitor dielectric, leaving part of the silicon nitride layer on the metal layer; and forming an anti-reflective coating to cover the exposed parts of the titanium nitride layer and the silicon nitride layer.According to at least one example of the present disclosure, an integrated circuit includes: a substrate; a first metal layer and a second metal layer located on the same lateral level above the substrate; a first dielectric disposed on the first metal layer; a first anti-reflective coating disposed on the first dielectric; a second dielectric disposed on the second metal layer; a third dielectric disposed on the second dielectric; a capacitor metal layer disposed on the third dielectric; and a second anti-reflective coating disposed on the capacitor metal layer and the second dielectric.Description of the DrawingsFor a detailed description of various examples, reference will now be made to the accompanying drawings, in which:Figure 1(a) is a cross-sectional view of an illustrative integrated circuit fabricated on a semiconductor substrate according to various examples.Figure 1(b) depicts portions of the integrated circuit shown in Figure 1(a) according to various examples.Figure 2 shows the reflectivity of anti-reflective coatings according to various examples.Figure 3 shows the reflectivity of anti-reflective coatings according to various examples.Figure 4 shows a method according to various examples.Figures 5(a) to 5(l) show methods according to various examples.Figure 6 depicts portions of the integrated circuit shown in Figure 1(a) according to various examples.Detailed DescriptionIntegrated circuits (ICs) are usually manufactured in large quantities on a single semiconductor wafer of high-quality (e.g., electronic grade) silicon (or other semiconductor materials, such as gallium arsenide) using microfabrication processing technology. ICs include microelectronic components such as transistors, which are coupled to each other using metal interconnect layers. These metal interconnect layers (sometimes referred to herein as metal layers) provide signal paths between microelectronic components. In some cases, the metal layers appear on different lateral levels spaced vertically from each other.
The lateral levels appear above the semiconductor wafer and are connected by via structures, which are vertical trenches filled with a suitable metal.In some cases, the integrated circuit includes a capacitor, and the capacitor can be fabricated on one of the metal interconnect layers, where the metal interconnect layer serves as a conductive plate of the capacitor. A dielectric material together with a metal layer may be deposited on that metal interconnection layer to form the capacitor. In some cases, the dielectric material used to implement the capacitor also performs the function of an anti-reflective coating on the underlying metal interconnection layer. The anti-reflective coating facilitates patterning of the metal interconnection layer. In other words, because the metal reflects light, the anti-reflective coating suppresses the reflection of light used in the photolithography process, thereby allowing the metal interconnection layer to be patterned.ICs operating at high voltages (such as 48 V or higher) use thick dielectrics to meet reliability specifications. It is challenging to pattern the underlying bottom metal interconnection layer in the presence of a thick dielectric that also serves as an anti-reflective coating. The patterning of the underlying metal interconnection layer is particularly challenging for ICs formed at smaller technology nodes (for example, 130 nm). Therefore, new manufacturing methods are needed to alleviate the above-mentioned problems.Accordingly, a method and apparatus are described in which the dielectric does not perform the anti-reflective coating function; during manufacturing, a separate layer that performs the anti-reflective coating function is deposited. Since a separate layer is used as the anti-reflective coating, the characteristics of the anti-reflective coating and the dielectric layer can be adjusted independently to provide the required low-reflection and high-dielectric-constant characteristics, respectively.In some examples, the capacitor is formed on the metal interconnection layer and includes a dielectric including a silicon nitride layer and a silicon dioxide layer, where the silicon dioxide layer is on the metal interconnection layer and the silicon nitride layer is on the silicon dioxide layer.In some examples, the capacitor includes a dielectric that includes a silicon nitride layer. In this example, the silicon nitride layer is on the metal interconnection layer. The metal interconnection layer serves as the first capacitor plate, and a second metal layer, for example a titanium nitride layer deposited on the dielectric, serves as the second capacitor plate. After etching the second metal layer and the dielectric, an anti-reflective coating including, for example, silicon oxynitride is deposited to permit patterning of the underlying metal interconnection layer in the subsequent manufacturing process.In the example where the capacitor dielectric contains a silicon nitride layer on a silicon dioxide layer, (after etching) an anti-reflective coating of silicon oxynitride is formed on the part of the silicon dioxide layer not covered by the silicon nitride layer, so that the part of the silicon dioxide layer covered by silicon oxynitride can be considered part of the anti-reflective coating.
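The benefit of a dedicated anti-reflective layer can be illustrated with a normal-incidence thin-film estimate. The sketch below uses the idealized single-interface Fresnel formula and example refractive indices within the ranges recited in the claims (about 1.7 to 2.1 for silicon oxynitride, 2.3 to 2.9 for silicon nitride); the resist index and the neglect of absorption and multilayer interference are assumptions, so this is a rough illustration, not a process model.

# Rough normal-incidence reflectance between two media, using the
# single-interface Fresnel formula R = ((n1 - n2) / (n1 + n2))**2.
# Illustrative only: absorption and multilayer interference are ignored.

def reflectance(n1: float, n2: float) -> float:
    return ((n1 - n2) / (n1 + n2)) ** 2

n_resist = 1.7   # assumed photoresist index
n_arc = 1.9      # silicon oxynitride, within the 1.7-2.1 range recited
n_sin = 2.6      # silicon nitride, within the 2.3-2.9 range recited

print(f"resist/ARC interface:      R = {reflectance(n_resist, n_arc):.4f}")
print(f"resist/nitride interface:  R = {reflectance(n_resist, n_sin):.4f}")
# The closer index match at the resist/ARC interface yields lower
# reflection during photolithography, which is the role of the coating.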
In the example where the capacitor dielectric includes a silicon nitride layer on a metal interconnection layer, the silicon oxynitride of the anti-reflective coating is formed on the portion of the silicon nitride layer remaining after the silicon nitride layer is etched to form the capacitor dielectric, so that the part of the silicon nitride layer covered by silicon oxynitride can be considered part of the anti-reflective coating.

FIG. 1(a) is a cross-sectional view of a portion of an illustrative integrated circuit 1 manufactured on a semiconductor substrate 51. For ease of description, the semiconductor substrate 51 is shown as a block. From the viewpoint of the manufactured IC, the substrate 51 may also include multiple isolation features (not explicitly shown in FIG. 1), such as shallow trench isolation (STI) features or local oxidation of silicon (LOCOS) features. The isolation features define and isolate various microelectronic components (not explicitly shown in FIG. 1). Examples of the various microelectronic components that can be formed in the substrate 51 include transistors (e.g., metal oxide semiconductor field effect transistors (MOSFETs), complementary metal oxide semiconductor (CMOS) transistors, bipolar junction transistors (BJTs), high-voltage transistors, high-frequency transistors, p-channel and/or n-channel field effect transistors (PFETs/NFETs), etc.), resistors, diodes, and other suitable components. One such microelectronic component is labeled with reference numeral 50 in Figure 1(a). Various processes are performed to form the microelectronic components, including deposition, etching, implantation, photolithography, annealing, and other suitable processes. Before the metal interconnection layers are deposited, the microelectronic components fabricated in the semiconductor substrate 51 are covered with a pre-metal dielectric layer 59. The microelectronic components are interconnected using one or more of the metal interconnect layers 10, 20, 30, 40, 22, 23, and 24. An interlayer dielectric (ILD) 25 electrically isolates the metal interconnect layers 10, 20, 30, 40, 22, 23, and 24 from each other. The metal interconnect layers 10, 20, 30, 40, 22, 23, and 24 may sometimes be referred to herein as metal layers 10, 20, 30, 40, 22, 23, and 24.

In some examples, the metal layers 10, 20, 30, 40, 22, 23, and 24 have layers 11, 13, 15, 17, 33, 35, and 37 disposed on their respective top sides. In some examples, the metal layers 10, 20, 30, 40, 22, 23, and 24 have layers 12, 14, 16, 18, 34, 36, and 38 disposed on their respective bottom sides. In some examples, layers 12, 14, 16, 18, 34, 36, and 38 include titanium nitride or titanium/titanium nitride bilayers, which prevent oxidation of metal interconnect layers that will be deposited in subsequent steps. In other examples, at least one of the layers 11, 13, 15, 17, 33, 35, and 37 forms a capacitor with the metal interconnect layer under it. Examples of such capacitors are described with reference to Figure 1(b) and Figure 6 below.

The metal layers 24 and 40 are located on the same lateral level, referred to herein as the MET 1 level. Before the metal layers 24 and 40 become separate units, a single metal layer (not shown) is deposited on the pre-metal dielectric layer 59 and then patterned to form the metal layers 24 and 40. Some metal layers present on the MET 1 level are coupled through via structures to the microelectronic components covered by the pre-metal dielectric layer 59.
For example, the block 50 is connected to the metal layer 40 through via structure 6. The metal layers 23 and 30 are provided on the second level (or "MET 2 level") of metal layers. From a manufacturing perspective, a single metal layer is first deposited on the MET 2 level and then patterned to form the metal layers 23 and 30. Some metal layers present on the MET 2 level may be coupled to the block 50 through a connection formed by a combination of one or more via structures and metal layers. For example, the metal layer 30 is coupled to the block 50 through the via structure 5 coupled to the metal layer 40, and the metal layer 40 is further coupled to the block 50 through the via structure 6.

The metal layers 22 and 20 are provided in the ILD 25 and exist on the same lateral level, which may be referred to as the third level (or "MET 3 level") of metal layers. From a manufacturing point of view, a single metal layer is deposited on the MET 3 level and then patterned to form the metal layers 22 and 20. Some metal layers present on the MET 3 level may be coupled to the block 50 through a connection formed by a combination of one or more via structures and metal layers. For example, the metal layer 20 is coupled to the block 50 through the via structure 4 coupled to the metal layer 30, and the metal layer 30 is further coupled to the block 50 via the via structure 5, the metal layer 40, and the via structure 6. As described in further detail below, the methods described in this disclosure relate to the patterning of metal layers in the MET 1, 2, and 3 levels.

The metal layer 10 is provided in the ILD 25 and exists on the lateral level referred to as the fourth level (or "MET 4 level") of metal layers. The metal layer 10 may be coupled to the block 50 through a connection formed by a combination of one or more via structures and metal layers. For example, the metal layer 10 is coupled to the block 50 through the via structure 3 coupled to the metal layer 20, and the metal layer 20 is further coupled to the block 50 through the via structure 4, the metal layer 30, the via structure 5, the metal layer 40, and the via structure 6. The metal layer 10 is coupled to a top metal layer (not shown) through the via structure 2. The top metal layer is further coupled to other layers, which can be coupled to a power source (not shown) and act as a voltage source for the microelectronic component (represented here as block 50). The example depicted in FIG. 1(a) shows four levels of metal layers, namely the MET 1, 2, 3, and 4 levels. However, in other examples, the number of levels can vary. The metal layers 22, 23, and 24 appear to be floating. However, in an actual implementation, the metal layers 23 and 24 may be coupled to one of the other metal interconnection layers through a via structure not explicitly shown in FIG. 1(a).

Reference is now made to Figure 1(b), which depicts the area 100 marked in Figure 1(a). Region 100 shows portions of layer 15 (Figure 1(a)) as layers 104, 106, 108, and 110 in Figure 1(b). The area 100 also shows portions of the layer 35 (FIG. 1(a)) as the layers 114 and 111 in FIG. 1(b). Area 100 also shows portions of metal layers 23 and 30 as metal layers 112 and 102, respectively. Region 100 also depicts the portion of the ILD 25 of Figure 1(a) as ILD 125 of Figure 1(b).

As described above, the layer 15 of FIG. 1(a) and the metal layer 30 below it form a capacitor. Figure 1(b) depicts the layers present in layer 15 that implement such a capacitor.
For example, layer 108 and layer 102 form the top and bottom plates of the capacitor, respectively, and layer 104 and layer 106 act as the capacitor's dielectric. In one example, layer 104 includes silicon dioxide and layer 106 includes silicon nitride. In other examples, the layers 104 and 106 may include other dielectrics, such as aluminum oxide, hafnium oxide, and zirconium oxide. In one example, layer 108 includes titanium nitride, and metal layer 102 includes an aluminum and copper alloy. In some examples, layer 108 is also referred to as a capacitor metal layer and includes tantalum/tantalum nitride or tungsten/tungsten nitride. In some examples, layers 104 and 106 may be formed of the same dielectric material; an example of such an embodiment is described below with reference to FIG. 6.

In an example where the layer 106 includes silicon nitride, the thickness of the layer 106 is between 1000 angstroms and 1600 angstroms, and its refractive index is between 2.3 and 2.9. In an example where the layer 108 includes titanium nitride, the thickness of the layer 108 is between 1000 angstroms and 1600 angstroms. In some examples, layer 104 protects metal layer 102 during the etching of layer 106.

The thicknesses of layer 106 and layer 108 and various other parameters can be selected to achieve the desired capacitance and breakdown voltage of the resulting capacitor. For example, where layer 106 includes silicon nitride with a thickness between 1200 angstroms and 1400 angstroms and a refractive index between 2.3 and 2.9, and layer 108 includes titanium nitride with a thickness between 1000 angstroms and 1600 angstroms, the resulting capacitor has a breakdown voltage of about 120 V, which is well suited for automotive applications with a 48 V electrical system.

As described in detail below, after the layers 106 and 108 are patterned and etched, an anti-reflective coating is deposited on the exposed portions. As explained below with reference to FIG. 4, the anti-reflective coating helps pattern the metal interconnection layer to form the metal layers 102 and 112. The anti-reflective coating also facilitates making other structures, such as via structures to the metal layer 102, along with other metal layers and circuit components. The anti-reflective coating helps achieve a small critical dimension (CD) during the photolithography step and, in some examples, can be stripped at a later point in the manufacturing process flow. In the example of FIG. 1(b), the anti-reflective coatings 110 and 111 include silicon oxynitride. In some examples, the refractive index of the silicon oxynitride in the anti-reflective coatings 110 and 111 is between 1.7 and 2.1, and the thickness is between 200 angstroms and 400 angstroms. In some examples, the refractive index of the silicon oxynitride is about 1.9. Such an example may have a capacitance density of approximately 0.4 femtofarads per square micrometer.
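As a plausibility check (not part of the disclosure), the quoted density is consistent with a simple parallel-plate estimate for the silicon nitride layer alone, assuming a relative permittivity of roughly 7 for silicon nitride and a 1300 angstrom (130 nm) thickness:

```latex
\frac{C}{A} = \frac{\varepsilon_0 \varepsilon_r}{d}
 \approx \frac{(8.85\times 10^{-12}\,\mathrm{F/m})(7)}{130\,\mathrm{nm}}
 \approx 4.8\times 10^{-4}\,\mathrm{F/m^2}
 \approx 0.48\,\mathrm{fF/\mu m^2}
```

The silicon dioxide layer 104 in series, with its lower permittivity, would pull the stack capacitance down toward the quoted 0.4 fF per square micrometer.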
Referring now to FIG. 4, an illustrative method 400 is shown. Method 400 describes manufacturing steps that can be performed to form the capacitor described in Figure 1(b). Method 400 also describes the use of an anti-reflective coating, which helps pattern the underlying metal interconnect layer. In one example, the patterning forms patterned metal layers, such as metal layers 102 and 112 of FIG. 1(b). The method 400 is described in conjunction with FIGS. 5(a) to 5(l).

The method 400 begins at step 402, which includes obtaining a substrate having one or more metal interconnect layers deposited over the substrate. Reference is now made to FIG. 5(a), which depicts a metal interconnect layer 502. For the sake of illustration, the metal interconnection layer 502 can be considered to exist in the MET 2 level, and in this example, the metal interconnection layer 502 is deposited on an interlayer dielectric layer similar to the ILD 25 (not explicitly shown in FIG. 5(a)). For simplicity, FIGS. 5(a) to 5(l) depict the manufacturing steps performed on the metal interconnection layer 502 and do not explicitly show the different layers that may exist under the metal interconnection layer 502. The metal interconnection layer 502 may be formed using a sputtering or chemical vapor deposition (CVD) process. In some examples, the metal interconnection layer 502 may include an alloy of aluminum and copper.

The method 400 then moves to step 404 (FIG. 5(b)), which includes forming a first dielectric layer 504 on the metal interconnection layer 502 using a CVD technique. In one example, the first dielectric layer 504 includes silicon dioxide. In other examples, the first dielectric layer 504 includes silicon nitride. The method 400 further proceeds to step 406 (FIG. 5(c)), which includes forming a second dielectric layer 506 on the first dielectric layer 504 using a CVD technique. In one example, the second dielectric layer 506 may include silicon nitride. Method 400 describes the use of two dielectric layers (layers 504 and 506). However, in some examples, a single dielectric layer may be used. In such an example, the single dielectric layer may include silicon nitride. This example is described below with reference to Figure 6.

The method 400 then proceeds to step 408 (FIG. 5(d)), which includes forming a capacitor metal layer 508 on the second dielectric layer 506 using sputtering or CVD techniques. In one example, the capacitor metal layer 508 includes titanium nitride. The method 400 further proceeds to step 410, which includes patterning and etching the capacitor metal layer 508 and the second dielectric layer 506 down to the first dielectric layer 504, leaving portions of the capacitor metal layer 508 and the second dielectric layer 506 on the first dielectric layer 504. The patterning and etching described in step 410 may include first depositing photoresist 510 on the capacitor metal layer 508 (FIG. 5(e)). The photoresist 510 is illuminated in the photolithography process, so that a portion of the photoresist 510 is exposed (FIG. 5(f)) and then stripped (FIG. 5(g)). The portions of the capacitor metal layer 508 and the second dielectric layer 506 that are not covered by the photoresist 510 are etched, with the etch stopping at the first dielectric layer 504. The second dielectric layer 506 and the capacitor metal layer 508 are etched to form layers 106 and 108, respectively (FIG. 5(g)). FIG. 5(h) shows the photoresist 510 stripped away.

The method 400 then proceeds to step 412 (FIG. 5(i)), which includes forming, using sputtering, CVD, or related techniques, an anti-reflective coating 512 to cover the exposed portions of the capacitor metal layer 508, the first dielectric layer 504, and the second dielectric layer 506 (the vertical portions of layer 106). In one example, the anti-reflective coating 512 includes silicon oxynitride.
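The flow of steps 402 through 412 can be visualized as simple bookkeeping on a layer stack. The short Python sketch below is purely illustrative; the layer names and thicknesses are assumptions chosen for the example, not values taken from this disclosure.

```python
# Illustrative bookkeeping for method 400, steps 402-412. Layer names and
# thicknesses (in angstroms) are hypothetical stand-ins, not disclosure values.

stack = [("metal_interconnect_502", 5000)]   # step 402: substrate with metal layer

def deposit(layer, thickness):
    """Deposit a blanket layer on top of the current stack."""
    stack.append((layer, thickness))

def etch_field_to(stop_layer):
    """Model step 410: the unmasked field is etched down to stop_layer,
    while the masked (photoresist-covered) region keeps the full stack."""
    field = []
    for layer, thickness in stack:
        field.append((layer, thickness))
        if layer == stop_layer:
            break                            # etch stops on the first dielectric
    return field

deposit("first_dielectric_504_SiO2", 100)    # step 404
deposit("second_dielectric_506_Si3N4", 1300) # step 406
deposit("capacitor_metal_508_TiN", 1300)     # step 408
field = etch_field_to("first_dielectric_504_SiO2")  # step 410: etch stops on 504
deposit("arc_512_SiON", 300)                 # step 412: ARC over the mesa (and, in
                                             # reality, over the etched field too)
print("mesa stack:", stack)
print("field stack:", field)
```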
The anti-reflective coating 512 helps pattern the metal interconnection layer 502 and connect the metal interconnection layer 502 with other circuit components. The composition and various parameters associated with the anti-reflective coating 512 can be chosen independently of the parameters associated with the layers that make up the resulting capacitor (i.e., the first dielectric layer 504, the second dielectric layer 506, and the capacitor metal layer 508). The characteristics of the resulting capacitor and of the anti-reflective coating 512 can thus be independently optimized.

In some examples, the method 400 further includes a step 414, which includes patterning the metal interconnect layer 502 (FIG. 5(j)) to form the metal layers 102 and 112. As described above, the presence of the anti-reflective coating 512 enables the metal interconnection layer 502 to be patterned by suppressing reflection of the light used during photolithography. Before patterning, a suitable coating process is used to deposit a dry film or a photoresist film on the surface of the anti-reflective coating 512, followed by curing, descum, and similar steps; a photolithography technique and/or etching process, such as a dry etching and/or wet etching process, is then used to expose the portions of the metal interconnection layer 502 to be etched. The anti-reflective coating 512 forms the anti-reflective layers 110 and 111 after the metal interconnection layer 502 is etched. The method 400 then proceeds to step 416, which, in one example, includes using a CVD process to form an interlayer dielectric 125 in contact with the anti-reflective layers 110 and 111 (FIG. 5(k)). In other examples, the anti-reflective layers 110 and 111 may be etched away before the interlayer dielectric 125 is deposited.

In some examples, the metal layers 102 and 112 and layer 108 may be connected to other metal interconnect layers, thereby electrically connecting to other electrical components in the integrated circuit. As described above, the electrical connection is achieved using a via structure, and the via structure can be formed by patterning and etching the interlayer dielectric 125. Indeed, in some examples, the method 400 may further proceed to step 418, which includes patterning and etching the interlayer dielectric 125 to form one or more via structures (FIG. 5(l)). The example shown in FIG. 5(l) shows via structures 101 and 103 in contact with the metal layer 112 and the capacitor metal 108, respectively. As described above with reference to FIG. 1(b), the layers 104 and 106 of FIG. 1(b) may be formed of the same dielectric material, and an example of such an embodiment is described in FIG. 6.

Reference is now made to FIG. 6, which depicts the area 100 labeled in FIG. 1(a) and includes a capacitor formed from a combination of layers 602, 606, and 608. Layer 602 is a metal layer that is the first plate of the resulting capacitor; layer 608 is the second plate of the resulting capacitor; layer 606 is a dielectric layer and includes silicon nitride. During the manufacturing of the portion 100 of FIG. 6, the layer 606 and the layer 608 are patterned and etched to form the resulting capacitor; after the etching, a portion of the layer 606 covers the layer 602. Comparing the example of FIG. 6 with the example of FIG. 1, the layer 106 of FIG. 1(b) has been etched down to the layer 104, where the layer 104 protects the metal layer 102; in the example of FIG. 6, however, a portion of layer 606 (which may include silicon nitride) remains on layer 602 after etching.
The layers 608, 606, and 602 are respectively similar to the layers 108, 106, and 102 of FIG. 1(b). Anti-reflective coatings 610 and 611 are formed over the exposed portions of layer 608 and layer 606; as with the example of FIG. 1(b) and the manufacturing process described above, the anti-reflective coatings 610 and 611 are useful when other circuit components (not shown) are fabricated and connected to one or more layers of FIG. 6, such as layer 602. In the photolithography step, the anti-reflective coatings 610 and 611 help reduce the critical dimensions, and they can be stripped at a later point in the process flow. During the process flow, an interlayer dielectric 625 is formed in contact with one or more layers of the portion 100 of FIG. 6. The anti-reflective coatings 610 and 611 are similar to the anti-reflective coatings 110 and 111, respectively, and the description of the anti-reflective coatings 110 and 111 applies to the anti-reflective coatings 610 and 611, respectively.

Referring now to FIG. 2, a diagram depicting the reflectivity of the anti-reflective coating according to various examples is shown. In the example of Figure 2, photoresist (not explicitly shown) is deposited on the silicon oxynitride of the anti-reflective coating, where silicon nitride is deposited first and etched, and silicon oxynitride of varying thickness is then deposited to form the anti-reflective coating. In the example of FIG. 2, the silicon oxynitride layer above the silicon nitride corresponds to the example of FIG. 6, where the silicon oxynitride and silicon nitride together can be regarded as the anti-reflective coating 610.

For the example of FIG. 2, the refractive index of the silicon oxynitride is 1.9, and the extinction coefficient k (the imaginary part of the complex refractive index) is 0.45. The y-axis of FIG. 2 represents the reflectance value at the photoresist, and the x-axis represents the thickness of the silicon oxynitride.

Each curve in Figure 2 is for a specific value of the thickness of the silicon nitride under the silicon oxynitride: for curve 202, the thickness of the silicon nitride is 300 angstroms; for curve 204, 250 angstroms; for curve 206, 200 angstroms; for curve 208, 150 angstroms; and for curve 210, 100 angstroms. In the specific example illustrated in FIG. 2, 150 angstroms of silicon nitride together with the silicon oxynitride achieves the minimum reflectivity. Figure 2 illustrates that the reflectivity can depend on various parameters of the anti-reflective coating 610 and is not meant to imply any particular set of optimal values.

FIG. 3 shows the reflectivity of anti-reflective coatings according to various examples. In the example of FIG. 3, a photoresist film is deposited on silicon oxynitride that is deposited on silicon dioxide, with the silicon dioxide above the metal layer. This example corresponds to the example of FIG. 1(b), where the silicon dioxide layer 104 can be regarded as part of the anti-reflective layer 110. In the example illustrated in Figure 3, the anti-reflective coating is formed by first depositing silicon oxynitride (LDSiON) with a refractive index of 1.68, then etching off the deposited silicon oxynitride, and then depositing silicon oxynitride of different thicknesses and refractive indices. The y-axis of FIG. 3 represents the reflectance value at the photoresist, the x-axis represents the thickness of the silicon oxynitride (LDSiON), and each curve represents a particular pair of refractive index and extinction coefficient k values.
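Curves like those in FIGS. 2 and 3 can be generated with a standard single-film Fresnel calculation. The Python sketch below is illustrative only; the incident-medium (resist) index, the metal optical constants, and the 248 nm exposure wavelength are assumed values, not parameters from this disclosure.

```python
import numpy as np

def arc_reflectance(n_inc, n_film, n_sub, thickness_nm, wavelength_nm):
    """Normal-incidence reflectance of a single absorbing film on a substrate,
    using the standard Fresnel/Airy formula with the n - jk sign convention."""
    r01 = (n_inc - n_film) / (n_inc + n_film)   # resist/ARC interface
    r12 = (n_film - n_sub) / (n_film + n_sub)   # ARC/metal interface
    beta = 2 * np.pi * n_film * thickness_nm / wavelength_nm
    phase = np.exp(-2j * beta)                  # round trip, includes absorption
    r = (r01 + r12 * phase) / (1 + r01 * r12 * phase)
    return np.abs(r) ** 2

# Assumed values for illustration: resist index ~1.7, SiON with n = 1.9 and
# k = 0.45 (values the text quotes for the coating), an aluminum-like substrate
# (0.2 - 2.9j is an assumption), and a 248 nm exposure wavelength (also assumed).
for t_angstrom in (200, 250, 300, 350, 400):    # the ARC thickness range quoted
    R = arc_reflectance(1.7, 1.9 - 0.45j, 0.2 - 2.9j, t_angstrom / 10.0, 248.0)
    print(f"SiON thickness {t_angstrom} A: reflectance {R:.3f}")
```

Sweeping the thickness and the (n, k) pair in this way reproduces the general shape of the curve families shown in the two figures.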
For curve 302, the refractive index of the silicon oxynitride is 1.68 and the extinction coefficient k is 0.007. For curve 304, the refractive index is 1.79 and k is 0.13. For curve 306, the refractive index is 1.79 and k is 0.224. For curve 308, the refractive index is 1.87 and k is 0.3. For curve 310, the refractive index is 1.9 and k is 0.45. For curve 312, the refractive index is 1.92 and k is 0.53.

The elliptical outline 314 in FIG. 3 is drawn to indicate the values of refractive index, thickness, and extinction coefficient k of the silicon oxynitride that minimize the reflectivity (for the specific parameters shown in FIG. 3). For example, for a thickness range of about 250 angstroms to 350 angstroms and a refractive index between 1.87 and 1.92, relatively low reflectivity is obtained. Figure 3 illustrates how reflectivity depends on various parameters of the anti-reflective coating 110 and is not meant to imply any particular set of optimal values.

In the above discussion and claims, the term "including" is used in an open-ended manner and therefore should be interpreted to mean "including, but not limited to...". The term "coupled" means an indirect or direct connection. Thus, if a first device is coupled to a second device, the connection may be through a direct connection or through an indirect connection via other devices and connections. Similarly, a device coupled between a first component or location and a second component or location may be coupled through a direct connection or through an indirect connection via other devices and connections. An element or feature that is "configured" to perform a task or function can be configured by the manufacturer during manufacturing (for example, through programming or structural design) to perform that function, and/or can be configured (or reconfigured) by the user after manufacturing to perform that function and/or other additional or alternative functions. The configuration may be through firmware and/or software programming of the device, through the construction and/or layout of hardware components, through the interconnection of the devices, or a combination thereof. In addition, the use of the phrase "ground" or similar terms in the above discussion is meant to include chassis ground, earth ground, floating ground, virtual ground, digital ground, common ground, and/or any other form of ground connection suitable or appropriate for the teachings of the present disclosure. Unless otherwise stated, "about", "approximately", or "substantially" preceding a value means +/- 10% of the stated value.

The foregoing discussion is intended to illustrate the principles and various embodiments of the present disclosure. Numerous changes and modifications will become apparent to those skilled in the art once the above disclosure is fully understood. The following claims are intended to be interpreted as covering all such changes and modifications. |
Memory subsystem error management enables dynamically changing lockstep partnerships. A memory subsystem has a lockstep partnership relationship between a first memory portion and a second memory portion to spread error correction over the pair of memory resources. The lockstep partnership can be preconfigured. In response to detecting a hard error in the lockstep partnership, the memory subsystem can cancel or reverse the lockstep partnership between the first memory portion and the second memory portion and create or set a new lockstep partnership. The detected error can be a second hard error in the lockstep partnership. The memory subsystem can create new lockstep partnerships between the first memory portion and a third memory portion as lockstep partners and between the second memory portion and a fourth memory portion as lockstep partners. The memory subsystem can also be configured to change the granularity of the lockstep partnership when changing partnerships. |
CLAIMSWhat is claimed is:1. A method for managing errors in a memory subsystem, comprising:detecting a hard error in a first memory portion set in a lockstep partnership as a lockstep partner with a second memory portion, wherein error correction is to be spread over the lockstep partners;responsive to detecting the hard error, canceling the lockstep partnership between the first memory portion and the second memory portion;creating a new lockstep partnership between the first memory portion and a third memory portion as lockstep partners; andcreating a new lockstep partnership between the second memory portion and a fourth memory portion as lockstep partners.2. The method of claim 1, wherein detecting the hard error comprises detecting a second hard error in the lockstep partnership.3. The method of any of claims 1 to 2, wherein the lockstep partnership comprises a virtual lockstep partnership where the hard error is mapped out to a spare memory portion.4. The method of any of claims 1 to 3, wherein the first and second memory portions comprise ranks of memory.5. The method of any of claims 1 to 3, wherein the first and second memory portions comprise banks of memory.6. The method of any of claims 1 to 3, wherein the first and second memory portions comprise DRAM (dynamic random access memory) devices.7. The method of claim 6, wherein the first and second memory portions comprise DRAM devices in separate ranks.8. The method of claim 6, wherein the third and fourth memory portions comprise DRAM devices in different ranks.9. The method of any of claims 1 to 8, wherein at least one of creating the new lockstep partnership between the first memory portion and a third memory portion as lockstep partners or creating the new lockstep partnership between the second memory portion and a fourth memory portion as lockstep partners includes changing a level of granularity of the lockstep partnership.10. The method of claim 9, wherein detecting the hard error in the first memory portion comprises detecting a hard error in a memory portion that can be grouped with the first memory portion at a different level of granularity, and wherein creating the new lockstep partnership comprises creating a new lockstep partnership between the first memory portion and the third memory portion at the different level of granularity.11. The method of any of claims 1 to 10, wherein creating the new lockstep partnerships comprises dynamically changing a lockstep partnership entry in a lockstep table.12. The method of any of claims 1 to 11, wherein detecting the hard error comprises detecting a second hard error, and further comprising, prior to detecting the second hard error:detecting a first hard error in either the first or the second memory portion; andsetting an original lockstep partnership between the first memory portion and the second memory portion as lockstep partners in response to detecting the first hard error.13. The method of any of claims 1 to 11, wherein detecting the hard error comprises detecting the hard error in the first memory portion set in a predetermined lockstep partnership with the second memory portion.14. 
A memory management device to manage errors in an associated memory subsystem, comprising: error detection logic to detect a hard error in a first memory portion of the memory subsystem, wherein the first memory portion is set in a lockstep partnership as a lockstep partner with a second memory portion, wherein error correction is to be spread over the lockstep partners; anderror correction logic to cancel the lockstep partnership between the first and second memory portions responsive to detecting the hard error in the first memory portion, and to create new lockstep partnerships between the first memory portion and a third memory portion as lockstep partners and between the second memory portion and a fourth memory portion as lockstep partners.15. The memory management device of claim 14, wherein the lockstep partnership comprises a virtual lockstep partnership where the hard error is mapped out to a spare memory portion.16. The memory management device of any of claims 14 to 15, wherein the first and second memory portions comprise one of ranks of memory, banks of memory, or DRAM (dynamic random access memory) devices.17. The memory management device of claim 16, wherein the first and second memory portions comprise DRAM devices in separate ranks.18. The memory management device of claim 16, wherein the third and fourth memory portions comprise DRAM devices in different ranks.19. The memory management device of any of claims 14 to 18, wherein the error correction logic is to change a level of granularity of at least one lockstep partnership when creating the new lockstep partnership between the first memory portion and a third memory portion as lockstep partners, or the new lockstep partnership between the second memory portion and a fourth memory portion as lockstep partners.20. The memory management device of claim 19, wherein the error detection logic is to detect the hard error in a memory portion that can be grouped with the first memory portion at a different level of granularity, and wherein the error correction logic is to create the new lockstep partnership between the first memory portion and the third memory portion at the different level of granularity.21. The memory management device of any of claims 14 to 20, wherein the error correction logic is to create the new lockstep partnerships by dynamically changing a lockstep partnership entry in a lockstep table.22. The memory management device of any of claims 14 to 21, wherein the error detection logic is to detect a second hard error, and wherein, prior to detecting the second hard error, the error detection logic is to detect a first hard error in either the first or the second memory portion; and the error correction logic is to set an original lockstep partnership between the first memory portion and the second memory portion as lockstep partners in response to detecting the first hard error.23. The memory management device of any of claims 14 to 21, wherein the error detection logic is to detect the hard error in the first memory portion set in a predetermined lockstep partnership with the second memory portion.24. An apparatus for managing errors in a memory subsystem, comprising means for performing operations to execute a method in accordance with any of claims 1 to 13.25. An article of manufacture comprising a computer readable storage medium having content stored thereon, which when accessed causes a machine to perform operations to execute a method in accordance with any of claims 1 to 13. |
DYNAMICALLY CHANGING LOCKSTEP CONFIGURATIONRELATED CASE[0001] The present application is a nonprovisional application based on U.S. Provisional Application No. 62/113,337, filed February 6, 2015, and claims the benefit of priority of that provisional application. The provisional application is hereby incorporated by reference.FIELD[0002] Embodiments of the invention are generally related to memory management, and more particularly to dynamically changing lockstep configuration.COPYRIGHT NOTICE/PERMISSION[0003] Portions of the disclosure of this patent document may contain material that is subject to copyright protection. The copyright owner has no objection to the reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever. The copyright notice applies to all data as described below, and in the accompanying drawings hereto, as well as to any software described below: Copyright © 2015, Intel Corporation, All Rights Reserved.BACKGROUND[0004] Certain types of memory resources have high failure rates compared to most other platform components. For example, DDR (dual data rate) memory devices experience higher rates of failure than most other components (such as processors, storage, interface components, and/or others) that are part of a computing platform or server environment. Long-term storage components also experience significant rates of failure. Given that failures of the memory devices cause downtime and require servicing to a system, higher platform RAS (reliability, availability, and serviceability) is preferred.[0005] Traditionally there are multiple different sparing techniques employed to survive hard DRAM (dynamic random access memory) failures or hard errors, which can push out service requirements. A hard error refers to an error with a physical device which prevents it from reading and/or writing correctly, and is distinguished from transient errors which are intermittent failures. Techniques are known for SDDC (single device data correction) and DDDC (double device data correction) to address hard failures. However, despite techniques for pushing out servicing of a memory subsystem, failure rates remain higher than desired, especially for larger memory configurations.BRIEF DESCRIPTION OF THE DRAWINGS[0006] The following description includes discussion of figures having illustrations given by way of example of implementations of embodiments of the invention. The drawings should be understood by way of example, and not by way of limitation. As used herein, references to one or more "embodiments" are to be understood as describing a particular feature, structure, and/or characteristic included in at least one implementation of the invention. Thus, phrases such as "in one embodiment" or "in an alternate embodiment" appearing herein describe various embodiments and implementations of the invention, and do not necessarily all refer to the same embodiment. 
However, they are also not necessarily mutually exclusive.[0007] Figure 1A is a block diagram of an embodiment of a system that distributes cachelines between channels in which dynamic lockstep management is implemented.[0008] Figure 1B is a block diagram of an embodiment of the system of Figure 1A illustrating memory structure and lockstep logic.[0009] Figure 2 is a block diagram of an embodiment of a state machine for an adaptive double device data correction (ADDDC) implementation in a system in which dynamic lockstep management is implemented.[0010] Figure 3 is a legend for Figures 4A-9I, which illustrate logical representations of the states identified in Figure 2.[0011] Figure 4A is a logical representation of an initial bank failure.[0012] Figure 4B is a logical representation of a lockstep action to produce an ADDDC state in Region 0 responsive to an initial bank failure.[0013] Figure 5A is a logical representation of a different bank failure in a different memory device.[0014] Figure 5B is a logical representation of a lockstep action to produce an ADDDC state in Region 1 responsive to an additional bank failure in a different memory device.[0015] Figure 5C is a logical representation of a same bank failure in a different memory device when in an ADDDC state with failures in Region 0 and Region 1. [0016] Figure 5D is a logical representation of a lockstep action to elevate to an ADDDC+1 state responsive to a same bank failure in a different memory device.[0017] Figure 5E is a logical representation of an additional same bank failure in a different memory device when in an ADDDC+1 state with an additional Region 0 failure.[0018] Figure 5F is a logical representation of a lockstep action to elevate to an ADDDC+1 state with failures in Region 0 and Region 1 responsive to an additional same bank failure in a different memory device.[0019] Figure 6A is a logical representation of a same bank failure in a different memory device when in an ADDDC state with a failure in Region 0.[0020] Figure 6B is a logical representation of a lockstep action to elevate to an ADDDC+1 state with failures in Region 0 responsive to a same bank failure in a different memory device.[0021] Figure 7A is a logical representation of a same bank failure in the buddy region when in an ADDDC state.[0022] Figure 7B is a logical representation of a lockstep action to elevate to an ADDDC+1 state with failures in the same bank in both the primary and buddy regions.[0023] Figure 7C is a logical representation of a lockstep action to reassign lockstep partnerships to remain in an ADDDC state with buddy regions mapped within common ranks.[0024] Figure 8A is a logical representation of a same device, different bank failure when in an ADDDC state.[0025] Figure 8B is a logical representation of a lockstep action to produce an ADDDC state in Region 1 responsive to a same device, additional bank failure.[0026] Figure 8C is a logical representation of a different device, different bank failure when in an ADDDC state having failures in the same bank of Region 0 and Region 1.[0027] Figure 8D is a logical representation of a different device, same bank failure when in an ADDDC state having failures in the same bank of Region 0 and Region 1.[0028] Figure 8E is a logical representation of an initial device failure.[0029] Figure 9A is a logical representation of a lockstep action to produce an ADDDC state in a buddy rank responsive to an initial device failure.[0030] Figure 9B is a logical representation of an additional device failure 
in the failed rank when in an ADDDC state. [0031] Figure 9C is a logical representation of an additional bank failure of a different device when in the failed rank in an ADDDC state.[0032] Figure 9D is a logical representation of a lockstep action to produce an ADDDC+1 state responsive to an additional device failure.[0033] Figure 9E is a logical representation of a same device failure in the buddy rank when in an ADDDC state.[0034] Figure 9F is a logical representation of a new bank failure in the same device in the buddy rank when in an ADDDC state.[0035] Figure 9G is a logical representation of a lockstep action to produce an ADDDC+1 state responsive to an additional device failure in the buddy rank.[0036] Figure 9H is a logical representation of a lockstep action to reassign lockstep partnerships to remain in an ADDDC state with buddy regions mapped to new ranks responsive to a same device failure in the buddy region.[0037] Figure 9I is a logical representation of a lockstep action to reassign lockstep partnerships to remain in an ADDDC state with a new buddy rank for a rank with a failed device, and a buddy bank within the previous buddy rank responsive to a new bank failure in the same device of the buddy region.[0038] Figure 10 is a flow diagram of an embodiment of a process for dynamically managing lockstep configuration.[0039] Figure 11 is a block diagram of an embodiment of a computing system in which dynamic lockstep management can be implemented.[0040] Figure 12 is a block diagram of an embodiment of a mobile device in which dynamic lockstep management can be implemented.[0041] Descriptions of certain details and implementations follow, including a description of the figures, which may depict some or all of the embodiments described below, as well as discussing other potential embodiments or implementations of the inventive concepts presented herein.DETAILED DESCRIPTION[0042] As described herein, memory subsystem error management enables dynamically changing lockstep partnerships. Lockstep refers to distributing error correction over multiple memory resources to compensate for a hard failure in one memory resource that prevents deterministic data access to the failed memory resource. A lockstep partnership refers to two portions of memory over which error checking and correction is distributed or shared. A memory subsystem detects a hard error in a first memory portion, where the first memory portion is set in a lockstep partnership with a second memory portion to spread error correction over the pair of memory resources. In response to detecting the hard error, the memory subsystem can reverse the lockstep partnership between the first memory portion and the second memory portion and set a new lockstep partnership. In one embodiment, the lockstep partnership is formed in response to detecting a failure or hard error in the second memory portion. The memory subsystem can create new lockstep partnerships between the first memory portion and a third memory portion as lockstep partners and between the second memory portion and a fourth memory portion as lockstep partners. The memory subsystem can also be configured to change the granularity of the lockstep partnership when changing partnerships.[0043] The dynamic changing of lockstep partnerships can be applied to any application of lockstep. In one embodiment, the memory controller includes a lockstep table that represents lockstep relationships between portions of memory. 
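To make that bookkeeping concrete, a lockstep table can be modeled as a simple map from a memory portion to its partner. The Python sketch below is a minimal illustration with hypothetical portion names; it is not the controller's actual data structure.

```python
# Minimal model of a lockstep table: each entry pairs a "primary" portion with
# its "buddy" portion. Portion names and granularity here are hypothetical.

lockstep_table: dict[str, str] = {}

def set_partnership(primary: str, buddy: str) -> None:
    """Create a lockstep partnership; error correction is spread over the pair."""
    lockstep_table[primary] = buddy
    lockstep_table[buddy] = primary

def cancel_partnership(portion: str) -> None:
    """Reverse an existing partnership, e.g., before re-pairing after a new error."""
    buddy = lockstep_table.pop(portion)
    lockstep_table.pop(buddy, None)

set_partnership("rank0/bank2", "rank1/bank2")   # first hard error: pair the failed bank
cancel_partnership("rank0/bank2")               # later error: undo the pairing
set_partnership("rank0/bank2", "rank2/bank2")   # re-pair each half with a new buddy
set_partnership("rank1/bank2", "rank3/bank2")
```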
The portion size can be configured for an implementation of lockstep. In one embodiment, such as in an implementation of DDDC (dual device data correction), the lockstep relationships can be preconfigured. Thus, a detected error results in error correction sharing between identified lockstep partners. As described herein, the lockstep partnerships can be dynamically reversed and reassigned. In one embodiment, such as in an implementation of ADDDC (adaptive dual device data correction), lockstep relationships are not defined until a first error is detected. For such an implementation, the first assignment of lockstep partners can be reversed and reassigned. For purposes of illustration only, most of the following descriptions and the figures refer to an implementation of ADDDC. It will be understood that the dynamic lockstep partnership changing or the dynamic changing of the lockstep configuration can be performed on any system that applies lockstep partnerships that can be configured to be reversed and reassigned. Thus, examples related to ADDDC will be understood as examples only, and are not restrictive.[0044] Figure 1A is a block diagram of an embodiment of a system that distributes cachelines between channels in which dynamic lockstep management is implemented. System 102 illustrates elements of a memory subsystem. Processor 110 represents hardware processing resources in system 102 that execute code and generate requests to access data and/or code stored in memory 120. Processor 110 can include a central processing unit (CPU), graphics processing unit (GPU), application specific processor, peripheral processor, and/or other processor that can generate requests to read from and/or write to memory 120. Processor 110 can be or include a single core processor and/or a multicore processor. Processor 110 generates requests to read data from memory 120 and/or to write data to memory 120 through execution of code. The code can include code that is stored locally to processor 110 and/or code stored in memory 120.[0045] Memory controller 130 represents logic in system 102 that manages access to memory 120. For access requests generated by processor 110, memory controller 130 generates one or more memory access commands to send to memory 120 to service the requests. In one embodiment, memory controller 130 can be a standalone component on a logic platform shared by processor 110 and memory 120. In one embodiment, memory controller 130 is part of processor 110. In one embodiment, memory controller 130 is a separate chip or die from processor 110, and is integrated on a common substrate with the processor die/chip as a system on a chip (SoC). In one embodiment, one or more memory resources of memory 120 can be integrated in a SoC with processor 110 and/or memory controller 130. Memory controller 130 manages configuration and status of memory 120 in connection with managing access to the memory resources. Memory controller 130 can be configured to generate the commands and manage access to data resources in a way expected to maximize bandwidth utilization of memory 120.[0046] In one embodiment, memory controller 130 manages memory 120 as a scalable memory buffer or other memory configuration where system 102 distributes cachelines between multiple channels 140. For example, memory 120 is illustrated as having two channels, 140-0 and 140-1. It will be understood that the techniques described could be applied across more channels 140. 
In one embodiment, memory controller 130 distributes cachelines between separate channels 140 by locating half of a cacheline on DIMM (dual inline memory module) 142-0 of channel 140-0 and the other half of the cacheline on DIMM 142-1 of channel 140-1. The use of more channels can provide the same benefits, although the logic to implement the separating of the cachelines between the multiple channels may need to be modified. Running the memory channels in lockstep mode across channels 140 has the advantage of being able to apply DDDC (double device data correction). Lockstep mode refers to a state of operation in which a lockstep partnership is set and the lockstep partners share error correction data. Each channel 140 includes one or more DIMMs 142. Each DIMM includes multiple memory devices 144. In one embodiment, each memory device 144 is a DRAM (dynamic random access memory) chip or device. It will be understood that in simpler system configurations, similar benefits could be achieved by separating memory devices 126 into channels 140, without necessarily needing to further separate memory devices 126 into DIMMs 142.[0047] In one example configuration, consider that system 102 includes two channels 140, each channel having one DIMM 142 for purposes of this example, with 16 memory devices 126 per DIMM, plus one memory device 126 each for CRC (cyclic redundancy check) and for parity. If one memory device 126 fails, its data can be reconstructed with single device data correction (SDDC). For DDDC, system 102, via memory controller 130, can combine two memory devices 126 from two DIMMs 142, using 4 memory devices 126 per pair of DIMMs 142. Such a technique provides for 32 "data" devices, two devices for CRC (cyclic redundancy checking), one device for parity, and one spare device. If one of the memory devices 126 fails, the spare device can replace the failed device. After the failure of one memory device 126, traditional SDDC can be employed. Thus, DDDC allows recovery from two sequential DRAM failures on DIMMs 142, as well as recovery from a subsequent single-bit soft error on a DIMM 142.[0048] System 102 can implement ADDDC (adaptive double device data correction) to manage hard errors or hard failures. ADDDC provides lockstep to provide error correction for memory devices 126. ADDDC can use lockstep to carve out space for a spare device upon encountering a hard failure. System 102 can substitute a spare device for a first memory device failure in a lockstep rank/bank. More details regarding rank and bank architecture of memory 120 can be in accordance with that of system 104 of Figure 1B. With ADDDC, a second failure within a lockstep rank/bank would traditionally trigger a service event. Thus, typically a second failure within the same region would trigger a service call. In one embodiment, with the ability to dynamically change lockstep configuration, in general a second failure in the lockstep partnership does not result in a service call if the two failures are in separate halves of the lockstep partnership.[0049] In one embodiment, memory controller 130 includes error logic 132 to manage error response, including lockstep configurations. In one embodiment, logic 132 can dynamically change lockstep partnerships. 
More specifically, logic 132 can enable the memory controller to initially set or create lockstep partnerships to spread error correction over a pair of memory resources, and then cancel or reverse the lockstep partnership upon detection of an additional error in the lockstep partnership. After reversing the lockstep partnership, memory controller 130 via error logic 132 can create or set one or more new lockstep partnerships in response to the additional error to prevent generating a service call event. Dynamically reversing a lockstep partnership and setting one or more new lockstep partnerships can extend the ability of the ADDDC to handle error correction for at least one more additional hard error.[0050] Figure 1B is a block diagram of an embodiment of the system of Figure 1A illustrating memory structure and lockstep logic. System 104 is one embodiment of system 102 of Figure 1A. Processor 110 is omitted for purposes of simplicity, but it will be understood that processing resources generate data access requests for memory 120. Memory 120 is illustrated in more detail showing a configuration of memory resources. One or more memory devices 126 are grouped in a rank 128. In one embodiment, a DIMM 142 of system 102 can include one or two ranks 128. In one embodiment, ranks 128 can include memory devices across physical boards or substrates. Each memory device 126 includes multiple banks 124, each of which is an addressable group of rows 122 or cachelines. In one embodiment, row 122 includes multiple cachelines. In one embodiment, each row 122 includes a page of cachelines. Each bank 124 can include multiple rows 122.[0051] Referring again to an implementation of ADDDC, system 104 (and system 102 of Figure 1A) can provide improved ADDDC by dynamically changing lockstep partners. By dynamically changing lockstep partners, system 104 via memory controller 130 can prevent service calls in many circumstances that would traditionally require a service call. Thus, ADDDC can further improve service rates by a significant margin by providing the ability to survive an additional hard failure in a lockstep pair. The lockstep partners refer to the pair of banks 124 or ranks 128 or other memory portions that are working in lockstep. It will be understood that banks 124 and/or ranks 128 can be partnered in a lockstep relationship across DIMMs and/or channels of memory 120. In one embodiment, other levels of granularity besides banks or ranks can be employed for lockstep operation. Thus, descriptions with respect to bank or rank level granularity should be understood as exemplary, and are not restrictive. [0052] Most RAS improvements have an associated capacity or performance cost. However, dynamically changing lockstep partners can work with and significantly improve existing ADDDC implementations without any design, performance, or capacity cost. Thus, dynamically changing lockstep partners can be employed for ADDDC in server environments, such as standalone servers, and/or server systems in which components are blades mounted in a server chassis. Additionally, changing lockstep partners can apply to legacy DDDC with design updates.[0053] It will be understood that traditional ADDDC implementations apply virtual lockstep to map out up to two sequential DRAM device failures per lockstep region. In a traditional ADDDC implementation, memory 120 would start in non-lockstep configuration until the first device failure. 
After the first device failure, memory controller 130 can apply a sparing engine (not specifically shown, but can be considered part of error manager 134) to convert the failure region to virtual lockstep. In virtual lockstep, a cacheline becomes stored across two memory locations. In one embodiment, the two memory locations can be referred to as Primary and Buddy locations. Such terminology will be used herein, but it will be understood that other terminology could be used without affecting the techniques of changing lockstep partners. A second sequential failure in the region covered by the lockstep partnership can be mapped out by moving to ADDDC+1 mode. With traditional ADDDC, the second sequential failure triggers the need for a service call to replace the failed memory.[0054] It was observed that memory subsystems employing the dynamic lockstep partnership changes described herein were able to survive approximately 50% of the second failures that affect a lockstep rank/bank. By providing the ability to survive even a second failure event in a lockstep pair, the RAS for the memory subsystem improves significantly. Improved RAS for the memory subsystem can significantly reduce service costs. It was observed that traditional ADDDC can improve service rates by a factor of 10x for large configurations. It will be understood that large configurations will have a large number of configuration parameters, and therefore exact numbers of service rates and service rate improvements will vary for each system based on its specific configuration. The use of dynamically changing lockstep partnerships can often allow the system to survive an additional hard failure (e.g., approximately 50% of the time). Thus, it is expected that dynamically changing lockstep partners can provide a further improvement of 5x. Estimates are approximate and can have a wide variance based on memory configuration.[0055] In one embodiment, memory controller 130 includes error manager 134, which can be part of error logic 132 of system 102 of Figure 1A. In one embodiment, memory controller 130 also includes lockstep mapping 136 as part of error manager 134 and/or as part of error logic 132. In one embodiment, lockstep mapping 136 is part of error manager 134, but they are not necessarily combined. Error manager 134 enables memory controller 130 to detect errors and determine an ADDDC state to apply to handle error correction for the error. Different ADDDC states are described in reference to Figures 2 through 9I below. Lockstep mapping 136 provides a mapping of what portions of memory are currently associated or set as lockstep partners. Error manager 134 includes determination logic to determine whether the current level of error correction or the current lockstep mapping 136 is sufficient to manage known hard errors. Error manager 134 includes determination logic to determine when and how to change lockstep partnerships to respond to additional errors that might occur in an existing lockstep partnership.[0056] In one embodiment, error manager 134 applies an implementation of ADDDC that uses virtual lockstep partners to handle error correction. In one embodiment, error manager 134 applies error correction with lockstep partners that are not virtual lockstep partners. In either case, error manager 134 includes logic to reverse a lockstep partnership and establish new lockstep partnerships. 
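Expressed in terms of the lockstep-table sketch above, that reverse-and-re-pair behavior might look like the following. This is illustrative only; in particular, the spare-selection policy (popping any healthy, unpaired portion from a pool) is an assumption, not the disclosure's algorithm.

```python
# Illustrative handler for a second hard error inside an existing lockstep pair.
# Assumes the set_partnership/cancel_partnership helpers sketched earlier.

def handle_second_error(failed: str, healthy_pool: list[str]) -> None:
    """On a second hard error in an existing pair, reverse the pairing and re-pair
    each half with a new buddy (cf. claim 1), avoiding a service call."""
    buddy = lockstep_table[failed]                 # current lockstep partner
    cancel_partnership(failed)                     # reverse sparing: undo old pairing
    set_partnership(failed, healthy_pool.pop())    # failed half + third portion
    set_partnership(buddy, healthy_pool.pop())     # old buddy + fourth portion
```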
It will be understood that the "logic" referred to for error manager 134 and/or for other components described herein can refer to hardware and/or software (including firmware) logic. The logic configures the element to perform operations to accomplish what is described.[0057] In one embodiment, error manager 134 can dynamically change the configuration of lockstep partners in lockstep mapping 136. In traditional lockstep systems, the partnerships are fixed once set. Thus, errors occurring after the setting of a lockstep partnership would traditionally require a service call to replace the failed parts. As described herein, the lockstep partnership can be undone or reversed and then a new lockstep partnership set. Memory controller 130, e.g., via error manager 134, can perform both forward and reverse sparing operations to set and unset lockstep partnerships.[0058] Memory controllers with sparing logic are traditionally able to spare in the forward direction, and typically perform the forward sparing at a single fixed granularity. With reverse sparing, memory controllers are capable of memory sparing at multiple granularities such as: bit, device, cacheline, row, column, bank, sub-rank, rank, and dual inline memory module (DIMM). Reverse sparing allows the memory controllers to reverse or undo a sparing operation previously performed, which can allow the changing of lockstep partnerships and/or the changing of granularities of the failure states. Reverse sparing refers to moving a failure state backwards, such as moving from an N+1 failure state to an N failure state.[0059] As used herein, "forward sparing" can refer to physically moving data from a failed region of memory and storing it in a new location where subsequent accesses to that data will be retrieved from the new location and not the failed location. "Reverse sparing" can refer to physically moving data from the new location back to the original failed location. Typically, reverse sparing will be done with the intent of subsequently forward sparing to another portion, at either the same or a different granularity. Memory controller 130 can use ECC (error correction coding) techniques to correct interim errors between the reverse sparing and subsequent forward sparing operations.[0060] It will be understood that memory 120 can have an architecture with addressable regions of size cacheline, column, row, bank, sub-rank, rank, DIMM, and channel, from smallest to largest. Each memory failure may be thought of as having 1) a particular region or section or portion affected; and, 2) a width (number of bits) affected. Memory devices 126 include address decoders or decoding logic to translate a received command address to a physical location within the memory.[0061] As mentioned above, in one embodiment, error manager 134 can include memory sparing logic configured to perform memory sparing operations in both the forward and reverse directions. For example, the memory sparing logic may initially perform a forward sparing operation in response to a detected memory failure at a first level of granularity, such as the bank level, moving the failure state from N to N+1. 
If error manager 134 detects a failure condition in another portion (e.g., at a higher level of granularity and/or in another portion of a lockstep partnership), it can perform a reverse sparing operation, moving the failure level from N+1 back to N, and then perform forward sparing to move the failure level or error level from N back to N+1 with a different granularity and/or with a different lockstep partnership.

[0062] Reference to memory devices can apply to different memory types. Memory devices generally refer to volatile memory technologies. Volatile memory is memory whose state (and therefore the data stored on it) is indeterminate if power is interrupted to the device. Nonvolatile memory refers to memory whose state is determinate even if power is interrupted to the device. Dynamic volatile memory requires refreshing the data stored in the device to maintain state. One example of dynamic volatile memory includes DRAM (dynamic random access memory), or some variant such as synchronous DRAM (SDRAM). A memory subsystem as described herein may be compatible with a number of memory technologies, such as DDR3 (dual data rate version 3, original release by JEDEC (Joint Electronic Device Engineering Council) on June 27, 2007, currently on release 21), DDR4 (DDR version 4, initial specification published in September 2012 by JEDEC), LPDDR3 (low power DDR version 3, JESD209-3B, August 2013 by JEDEC), LPDDR4 (LOW POWER DOUBLE DATA RATE (LPDDR) version 4, JESD209-4, originally published by JEDEC in August 2014), WIO2 (Wide I/O 2 (WideIO2), JESD229-2, originally published by JEDEC in August 2014), HBM (HIGH BANDWIDTH MEMORY DRAM, JESD235, originally published by JEDEC in October 2013), DDR5 (DDR version 5, currently in discussion by JEDEC), LPDDR5 (currently in discussion by JEDEC), WIO3 (Wide I/O 3, currently in discussion by JEDEC), HBM2 (HBM version 2, currently in discussion by JEDEC), and/or others, and technologies based on derivatives or extensions of such specifications.

[0063] In addition to, or alternatively to, volatile memory, in one embodiment, reference to memory devices can refer to a nonvolatile memory device whose state is determinate even if power is interrupted to the device. In one embodiment, the nonvolatile memory device is a block addressable memory device, such as NAND or NOR technologies. Thus, a memory device can also include future generation nonvolatile devices, such as a three dimensional crosspoint memory device, or other byte addressable nonvolatile memory devices. In one embodiment, the memory device can be or include multi-threshold level NAND flash memory, NOR flash memory, single or multi-level Phase Change Memory (PCM), a resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), magnetoresistive random access memory (MRAM) that incorporates memristor technology, or spin transfer torque (STT)-MRAM, or a combination of any of the above, or other memory.

[0064] Figure 2 is a block diagram of an embodiment of a state machine for an adaptive double device data correction (ADDDC) implementation in a system in which dynamic lockstep management is implemented. It will be understood that state diagram 200 is merely one example of any number of possible state flows. The example states represented with the labels in diagram 200 are set out in Figures 4A-9I.

[0065] In one embodiment, the state machine starts at state CB1 (case 1 of bank failure) and proceeds to AB1 (action 1 for bank failure CB1).
From AB1, several additional failure scenarios are possible. Starting with the simpler cases, the state could proceed from AB1 to CB4 (case 4 of bank failure) for a subsequent error of type CB4, and then to AB4 (action 4 for bank failure CB4). Once AB4 has been performed, a subsequent failure would result in a service call. It will be understood that in each case, movement from one state to another would be performed by a memory controller associated with the memory device experiencing the identified hard failures. The state could alternatively proceed from AB1 to CB5 (case 5 of bank failure) for a subsequent error of type CB5. The memory controller can perform one of two actions in response to CB5, identified as AB5 (action 5 for bank failure CB5) and AB6 (action 6 for bank failure CB5). Once either AB5 or AB6 has been performed, a subsequent failure would result in a service call.

[0066] The state could alternatively proceed from AB1 to CB3 (case 3 of bank failure) for a subsequent error of type CB3. The memory controller can perform error correction AB3 (action 3 for bank failure CB3). A subsequent error at state AB3 can result in CB7 (case 7 of bank failure), in response to which the memory controller can perform the error correction actions of AB7 (action 7 for bank failure CB7). From state AB7, a subsequent error could result in a service call, depending on the error type, or could result in a subsequent error state CB8 (case 8 of bank failure). In response to state CB8, the memory controller can perform the error correction of state AB8 (action 8 for bank failure CB8). After state AB8, a subsequent failure would result in a service call.

[0067] The state could alternatively proceed from AB1 to CB2 (case 2 of bank failure) for a subsequent error of type CB2. The memory controller can perform one of two different error correction actions, either AB2 (action 2 for bank failure CB2) or AR1 (action 1 for rank failure). It will be observed that the state can be changed from bank failure to rank failure for a subsequent error of type CB2. As seen in diagram 200, the memory controller can alternatively arrive at state AR1 as a result of an initial rank error of type CR1 (case 1 of rank failure).

[0068] Returning to state AB2, a subsequent error can result in one of two subsequent error states, depending on the error type. Thus, from AB2 the state can move to CB10 (case 10 of bank failure), for which the memory controller can perform the error correction of AR1 mentioned above. Alternatively, the state can move from AB2 to CB6 (case 6 of bank failure). The memory controller can perform the error correction of AR2 (action 2 for rank failure) in response to state CB6. As seen in diagram 200, there are four potential error states for a failure subsequent to state AR1. Two of those failure states are CR2 (case 2 of rank failure) and CB11 (case 11 of bank failure), in response to which the memory controller can perform the error correction of state AR2, mentioned above.

[0069] Alternatively to moving to either CR2 or CB11, an error subsequent to AR1 can result in the state moving to CB9 (case 9 of bank failure) or CR3 (case 3 of rank failure), depending on the error. If the error results in state CR3, the memory controller can perform the error correction of AR3 (action 3 for rank failure) or AR4 (action 4 for rank failure). If the state moves to AR3, a subsequent error would result in a service call.
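The transitions of diagram 200 described in this and the surrounding paragraphs can be summarized, purely as an illustrative sketch (the state and action labels follow diagram 200; the table form itself is not part of the figure), as a transition table:

    # Sketch of (current state, detected case) -> possible actions for
    # diagram 200; "SERVICE" denotes a service call.
    TRANSITIONS = {
        ("START", "CB1"): ["AB1"],
        ("START", "CR1"): ["AR1"],
        ("AB1", "CB4"): ["AB4"],
        ("AB1", "CB5"): ["AB5", "AB6"],
        ("AB1", "CB3"): ["AB3"],
        ("AB1", "CB2"): ["AB2", "AR1"],
        ("AB3", "CB7"): ["AB7"],
        ("AB7", "CB8"): ["AB8"],
        ("AB2", "CB10"): ["AR1"],
        ("AB2", "CB6"): ["AR2"],
        ("AR1", "CR2"): ["AR2"],
        ("AR1", "CB11"): ["AR2"],
        ("AR1", "CR3"): ["AR3", "AR4"],
        ("AR1", "CB9"): ["AR3", "AR4", "AB9"],  # CB9 is completed below
    }

    def next_actions(state, case):
        return TRANSITIONS.get((state, case), ["SERVICE"])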
In response to state CB9, the memory controller can move to AR3 or AR4, or perform the error correction of AB9 (action 9 for bank failure CB9).

[0070] Figure 3 is a legend for Figures 4A-9I, which illustrate logical representations of the states identified in Figure 2. Table 300 illustrates a blank box (no shading or cross-hatching) for a normal region of memory. Such a section of memory is not experiencing any failure, and is not part of a lockstep partnership. The darkest level of shading (nearly black) shows a new failure. The lightest level of shading (lightest gray) represents a state of ADDDC Region 0. Thus, the lightest gray illustrates the primary and buddy regions of an ADDDC state for a first hard failure.

[0071] The next level of gray represents a state of ADDDC Region 1. ADDDC Region 1 refers to the primary and buddy regions that are partnered for a subsequent failure when the memory is already in failure state ADDDC. The next two darker levels of shading, respectively, represent ADDDC+1 Region 0 and ADDDC+1 Region 1. Thus, they represent the primary and buddy regions for an elevated ADDDC state for subsequent errors. The single-line crosshatch represents the portion of memory declared as the first failure (Failure 0) in a region. The double-line crosshatch represents the portion of memory declared as the second failure (Failure 1) in the region.

[0072] Figure 4A is a logical representation of an initial bank failure, represented as state CB1. Each of the states represented shows D[17:0] indicating 18 memory devices (e.g., DRAMs), and B[15:0] indicating 16 banks per device. For purposes of logical representation, a bank failure is the finest granularity considered in the examples, although other failure granularity can be configured in certain implementations following the same techniques described in these examples. Thus, while the diagrams represent ranks each with 18 devices having 16 banks per device, the examples are non-limiting. Thus, different configurations are possible. Two ranks (Rank A and Rank B) are shown as an example of primary and buddy ranks to use for lockstep partnership portions. CB1 shows an initial failure in Bank 0 of Device 0 of Rank A.

[0073] Figure 4B is a logical representation of a lockstep action to produce an ADDDC state in Region 0 responsive to an initial bank failure, represented as state AB1. The memory controller generates state AB1 by creating Bank 0 of Rank B as a buddy region for Bank 0 of Rank A (the primary region). With the lockstep partnership, the memory subsystem is in a first ADDDC state.

[0074] Figure 5A is a logical representation of a different bank failure in a different memory device, represented as state CB3. State CB3 illustrates a subsequent bank failure when the system is already in an ADDDC state. Thus, Bank 0 of Device 0 of Rank A is shown as Failure 0, and Bank 1 of Device 1 of Rank A is shown as a currently detected error. The error of CB3 is thus a different bank in a different device in the same (primary) rank.

[0075] Figure 5B is a logical representation of a lockstep action to produce an ADDDC state in Region 1 responsive to an additional bank failure in a different memory device, represented as state AB3. In state AB3, the memory controller produces an ADDDC state with Failure 0 in Bank 0 and Failure 1 in Bank 1, both of which are shared between Rank A and buddy Rank B for purposes of error correction.
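As a rough illustration of what the virtual lockstep storage of state AB3 implies (a hypothetical toy model, not an actual device protocol), a cacheline in a spared bank is split between the primary and buddy regions and recombined on access:

    # Toy model of virtual lockstep: each cacheline in a spared bank is
    # stored half in the primary region and half in the buddy region.
    def write_lockstep(cacheline, primary_store, buddy_store, addr):
        half = len(cacheline) // 2
        primary_store[addr] = cacheline[:half]
        buddy_store[addr] = cacheline[half:]

    def read_lockstep(primary_store, buddy_store, addr):
        # Both halves are fetched and recombined to serve the read.
        return primary_store[addr] + buddy_store[addr]

    primary, buddy = {}, {}
    write_lockstep(b"\x00" * 64, primary, buddy, 0x100)
    assert read_lockstep(primary, buddy, 0x100) == b"\x00" * 64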
Reads to such error portions can be handled by error correction techniques described above with respect to forward sparing.

[0076] Figure 5C is a logical representation of a same bank failure in a different memory device when in an ADDDC state with failures in Region 0 and Region 1, represented as state CB7. In state CB7, a subsequent error occurs in Bank 0 of Device 2 of Rank A. It will be understood that since Bank 0 is already subject to error correction with ADDDC, the second error is the maximum number of errors that can be handled by known error correction techniques. The subsequent error is a same bank, different device error in the primary rank.

[0077] Figure 5D is a logical representation of a lockstep action to elevate to an ADDDC+1 state responsive to a same bank failure in a different memory device, represented as state AB7. In state AB7, the memory controller elevates the state of Bank 0 to ADDDC+1, with Failure 0 and Failure 1 in Bank 0. Subsequent failures cannot be handled, so a service call can be generated.

[0078] Figure 5E is a logical representation of an additional same bank failure in a different memory device when in an ADDDC+1 state with an additional Region 0 failure, represented as state CB8. If instead the subsequent failure is Failure 1 in Bank 1, such as an error in the same Bank 1, different Device 3, then another error correction state can be used.

[0079] Figure 5F is a logical representation of a lockstep action to elevate to an ADDDC+1 state with failures in Region 0 and Region 1 responsive to an additional same bank failure in a different memory device, represented as AB8. In AB8, the memory controller elevates the state of Bank 1 to ADDDC+1. With both Bank 0 in ADDDC+1 and Bank 1 in ADDDC+1, a subsequent failure cannot be handled and thus a service call can be generated.

[0080] Figure 6A is a logical representation of a same bank failure in a different memory device when in an ADDDC state with a failure in Region 0, represented as state CB4. In state CB4, Bank 0 has Failure 0 in Device 0 of Rank A, and a subsequent failure is detected in the same Bank 0 of different Device 1.

[0081] Figure 6B is a logical representation of a lockstep action to elevate to an ADDDC+1 state with failures in Region 0 responsive to a same bank failure in a different memory device, represented as AB4. The memory controller elevates Bank 0 to ADDDC+1, seeing that it has two failure regions, Failure 0 and Failure 1. A subsequent error in the same Bank 0 would not be able to be handled, and thus the memory controller may issue a service call. A subsequent error in a different bank could elevate the additional bank to ADDDC.

[0082] Figure 7A is a logical representation of a same bank failure in the buddy region when in an ADDDC state, represented as state CB5. In state CB5, there is already an error in Bank 0 of Device 0 of Rank A. The subsequent error in CB5 is the same Bank 0 in Device 0 of Rank B. Thus, both Ranks A and B have hard errors in Bank 0, Device 0.

[0083] Figure 7B is a logical representation of a lockstep action to elevate to an ADDDC+1 state with failures in the same bank in both the primary and buddy regions, represented as AB5. In state AB5, the memory controller elevates the state of Bank 0 from ADDDC to ADDDC+1 due to the two errors in the bank that is the subject of a lockstep partnership.

[0084] Figure 7C is a logical representation of a lockstep action to reassign lockstep partnerships to remain in an ADDDC state with buddy regions mapped within common ranks, represented as state AB6.
Alternatively to state AB5, in one embodiment, in response to the subsequent error detected in state CB5, the memory controller reverses the lockstep partnership between Bank 0, Rank A and Bank 0, Rank B, and reassigns lockstep partnerships. More specifically, in one embodiment, the memory controller can make Bank 15 of Rank A the buddy portion or buddy region for Bank 0 of Rank A, and similarly can make Bank 15 of Rank B the buddy portion or buddy region for Bank 0 of Rank B. Bank 15 is an example, and another bank could be selected. The same bank does not necessarily need to be selected in each of the ranks. After reassigning the lockstep partnership, state AB6 results in Banks 0 and 15 of Rank A in ADDDC with a single error, and Banks 0 and 15 of Rank B in ADDDC with a single error, as opposed to Bank 0 in both ranks being in ADDDC+1. Thus, dynamically changing the lockstep partnership can reduce the ADDDC level, and permit the system to sustain an additional subsequent error over keeping the same lockstep partnerships.

[0085] Figure 8A is a logical representation of a same device, different bank failure when in an ADDDC state, represented as state CB2. In state CB2, Bank 0 of Device 0 of Rank A is already in failure, and Bank 0 is in state ADDDC across Ranks A and B. The subsequent failure detected is a different bank, same device failure in Bank 1 of Device 0.

[0086] Figure 8B is a logical representation of a lockstep action to produce an ADDDC state in Region 1 responsive to a same device, additional bank failure, represented as state AB2. In state AB2, the memory controller can elevate Bank 1 to ADDDC, with Bank 1 shared in primary Rank A and buddy Rank B.

[0087] Figure 8C is a logical representation of a different device, different bank failure when in an ADDDC state having failures in the same bank of Region 0 and Region 1, represented as state CB6. In CB6, the subsequent error detected is a different bank, different device error, with the error in Bank 2 of Device 1. Such an error can result in a service call, since Bank 0 and Bank 1 are already in ADDDC. In one embodiment, the memory controller could reverse the lockstep partnership between Rank A and Rank B of Bank 0, as well as the lockstep partnership between Rank A and Rank B of Bank 1. The memory controller could subsequently create a lockstep partnership between Rank A and Rank B of Device 0 and a lockstep partnership between Rank A and Rank B of Device 1. The partnerships could both be in ADDDC. Such an action is not illustrated, but is possible by reversing the lockstep partnerships and changing the granularity of the lockstep.

[0088] Figure 8D is a logical representation of a same device, different bank failure when in an ADDDC state having failures in the same bank of Region 0 and Region 1, represented as state CB10. In state CB10, the subsequent error detected is in the same Device 0, different Bank 2.

[0089] Figure 8E is a logical representation of an initial device failure, represented as state CR1. In state CR1, the error is all of Device 0 of Rank A. It will be observed how the error of state CB10 could be made to match the error of CR1, by declaring the entire Device 0 in error in CB10. Thus, error correction actions for CB10 and CR1 can be the same.

[0090] Figure 9A is a logical representation of a lockstep action to produce an ADDDC state in a buddy rank responsive to an initial device failure, represented as state AR1. The memory controller can generate state AR1 as error correction for states CB10 and CR1.
The memory controller creates Rank B as the buddy region for Rank A, where the entire rank is the region affected, as illustrated by each complete rank being in state ADDDC. The failed portion is Device 0 of Rank A.

[0091] Figure 9B is a logical representation of an additional device failure in the failed rank when in an ADDDC state, represented as state CR2. The state previous to state CR2 is when Rank A and Rank B are in a lockstep partnership due to failure in Device 0. The subsequent error detected is Device 1 of Rank A, the rank that already has failed Device 0.

[0092] Figure 9C is a logical representation of an additional bank failure of a different device in the failed rank when in an ADDDC state, represented as state CB11. The state previous to state CB11 is when Rank A and Rank B are in a lockstep partnership due to failure in Device 0. The subsequent error detected is a failure in Bank 0 of Device 1 of Rank A, the rank that already has failed Device 0. It will be observed that the subsequent failure is of a different granularity (finer granularity) than Failure 0. However, the ADDDC state at the coarser granularity of rank may be kept even for such a failure as represented in CB11.

[0093] Figure 9D is a logical representation of a lockstep action to produce an ADDDC+1 state responsive to an additional device failure, represented as state AR2. It will be observed that state AR2 can be used to respond to either state CB11 or state CR2. In state AR2, Device 1 is declared as failed, and Ranks A and B are elevated to ADDDC+1. In one embodiment, state AR2 could be reversed by spreading the errors in Ranks A and B to other ranks in different (changed) lockstep partnerships, such as what is represented in Figure 9H.

[0094] Figure 9E is a logical representation of a same device failure in the buddy rank when in an ADDDC state, represented as state CR3. In state CR3, Device 0 of Rank B is detected as a subsequent failure when Rank A and Rank B are already in a lockstep partnership in ADDDC. Thus, Device 0 of Rank B is Failure 1 and Device 0 of Rank A is Failure 0.

[0095] Figure 9F is a logical representation of a new bank failure in the same device in the buddy rank when in an ADDDC state, represented as state CB9. In state CB9, the failure of Bank 0 of Device 0 is a same device failure in the buddy Rank B. State CB9 can be thought of as a logical equivalent to the failure of CR3 even though the failure is of a different granularity (bank failure versus device failure).

[0096] Figure 9G is a logical representation of a lockstep action to produce an ADDDC+1 state responsive to an additional failure in the buddy rank, represented as state AR3. State AR3 represents a typical error correction action for state CR3 or for state CB9, in which the memory controller maps out Device 0 of Rank B as a failing device. Typically, the memory controller will also initiate a service call because the failure region cannot handle a third device failure.

[0097] Figure 9H is a logical representation of a lockstep action to reassign lockstep partnerships to remain in an ADDDC state with buddy regions mapped to new ranks responsive to a same device failure in the buddy region, represented as state AR4. Instead of taking the traditional action of AR3, in one embodiment, the system can delay the service call when the memory controller finds a new lockstep partner for each half of the lockstep pair in response to the subsequent failure.
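Purely as an illustration of the AR4 repartnering described in this and the following paragraphs (the rank identifiers follow the example; the set-based representation and helper name are assumptions of this sketch), the reverse sparing and the two subsequent forward sparing operations might look like:

    # Sketch of AR4-style repartnering; each tuple is (primary, buddy).
    # Assumes Ranks C and D are available, non-failed ranks.
    partnerships = {("Rank A", "Rank B")}  # ADDDC pair with two failures

    def repartner(pairs, old_pair, new_buddies):
        # Reverse the failing partnership, then forward spare each half
        # with a new non-failed partner, yielding two ADDDC regions.
        pairs.discard(old_pair)              # reverse sparing
        first, second = old_pair
        pairs.add((first, new_buddies[0]))   # e.g., Rank A with Rank C
        pairs.add((second, new_buddies[1]))  # e.g., Rank B with Rank D
        return pairs

    repartner(partnerships, ("Rank A", "Rank B"), ("Rank C", "Rank D"))
    # partnerships is now {("Rank A", "Rank C"), ("Rank B", "Rank D")}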
Instead of mapping out the second device in the same lockstep region and elevating the failure to ADDDC+1, in one embodiment the memory controller finds new lockstep partners in other ranks. In one embodiment, the system employs reverse sparing to return the region to a non-lockstep configuration, followed by two forward sparing operations.

[0098] In one embodiment, after reversing the lockstep partnership (e.g., via reverse sparing), the memory controller sets Rank A with original Failure 0 as a new lockstep partner with available non-failed Rank C. Additionally, Rank B with original Failure 1 is matched as lockstep partner with non-failed Rank D. Seeing that Rank B and Rank D are now lockstep partners, the failure of Device 0 in Rank B is now Failure 0. Both lockstep partnerships are now in ADDDC. Thus, AR4 creates two ADDDC regions, each with one device mapped out, instead of a single ADDDC+1 region with two devices mapped out. AR4 can therefore delay a service call for a subsequent failure.

[0099] Figure 9I is a logical representation of a lockstep action to reassign lockstep partnerships to remain in an ADDDC state with a new buddy rank for a rank with a failed device, and a buddy bank within the previous buddy rank responsive to a new bank failure in the same device of the buddy region, represented as state AB9. In state AB9, as with state AR4, the system can delay a service call when the memory controller finds a new lockstep partner for each half of the lockstep pair in response to the subsequent failure. The failure addressed in AB9 is a subsequent bank failure in the buddy rank or buddy region. Thus, the memory controller does not need to map out the entire Rank B to a new non-failed rank, but can simply remap a lockstep partnership for the failed Bank 0.

[00100] In one embodiment, after reversing the lockstep partnership (e.g., via reverse sparing), the memory controller sets Rank A with original Failure 0 as a new lockstep partner with available non-failed Rank C. Additionally, Bank 0 of Rank B with original Failure 1 is matched as lockstep partner with non-failed Bank 15 (or other bank) of Rank B. Seeing that Bank 0 and Bank 15 of Rank B are new lockstep partners, the failure of Bank 0 is now Failure 0. Both lockstep partnerships are now in ADDDC. Thus, like AR4, state AB9 creates two ADDDC regions, each with one device mapped out, instead of a single ADDDC+1 region with two devices mapped out. AB9 can therefore delay a service call for a subsequent failure.

[00101] Figure 10 is a flow diagram of an embodiment of a process for dynamically managing lockstep configuration. Process 1000 can be performed by a memory controller, such as an error engine and/or other lockstep management logic of the memory controller, to manage the lockstep partnerships in the system for error correction. Error detection logic of the memory controller detects a hard error in a first portion of memory, 1002. The first portion can be of any granularity monitored by the error detection logic. In one embodiment, the memory controller sets a lockstep partnership between the first portion and a second portion of memory to spread error correction over the lockstep partners, 1004. In one embodiment, the lockstep partnership is preconfigured. It will be understood that when referring to detecting an error in a "first portion," it is not necessarily that the entire first portion is failed, only that there is a failure within the portion.
For example, the first portion could be an entire bank across all devices in a rank, where an error was detected in only one bank of one specific device. The first portion is matched as a lockstep partner with a second portion of the same size.

[00102] After generating the lockstep partnership to spread the error correction, or after applying a lockstep partnership that is preconfigured, the error detection logic detects another hard error in the lockstep partnership, 1006. The subsequent error can be any of a number of different errors, as described above. A subsequent error in a portion of the memory outside the lockstep partnership can either be handled with a different partnership being created or with a service call. However, a subsequent error in a portion that is included in the lockstep partnership can be handled in one embodiment by a change in the lockstep partnership. In one embodiment, the subsequent error can be handled without a service call if the second error occurs in the other half of the lockstep partnership as compared to the first error. Thus, in one embodiment, the memory controller cancels or reverses or unsets the lockstep partnership, 1008.

[00103] In one embodiment, the memory controller changes lockstep partners when a second portion failure is not on the same lockstep half as the existing mapped out device and there is enough non-failed memory to support adding a new virtual lockstep pair. In one embodiment, the memory controller dynamically changes lockstep partners in a system that supports virtual lockstep (such as ADDDC). In one embodiment, the memory controller dynamically changes lockstep partners in a system that employs lockstep but not virtual lockstep (such as DDDC). The lockstep mechanism and the mechanisms for changing the lockstep partner can be applied at different granularities.

[00104] In one embodiment, the memory controller determines whether to create or set a new lockstep partnership at the same granularity as the previous partnership, or whether to use one or more new partnerships of a different granularity, 1010. In one embodiment, if the same granularity is to be used, 1012 YES branch, the memory controller sets a new lockstep partnership between the first portion and a third non-failed portion of memory, 1014. The memory controller can keep a status log of all portions of the memory, and can thus determine whether a portion is failed or non-failed. In evaluating lockstep partnerships in response to a subsequent error detected, the memory controller can evaluate the status of memory portions to determine if there is a non-failed portion to use as an alternate lockstep partner. In one embodiment, the memory controller sets a new lockstep partnership between the second portion and a fourth portion of memory, 1016. Again, seeing that the same granularity is used, it will be understood that the third and fourth portions are of the same size as the first and second portions.

[00105] In one embodiment, the memory controller determines to change granularity in the lockstep partnership, 1012 NO branch. When changing granularity, in one embodiment, the memory controller sets a new lockstep partnership at a new granularity between either the first or the second portions and a third portion of a different granularity, 1018. The memory controller can then set a new lockstep partnership for the other affected portion, 1020.
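A simplified sketch of blocks 1002 through 1020 of process 1000 follows (the function name and the set-based bookkeeping are hypothetical; a real controller tracks this state in hardware):

    # Simplified sketch of process 1000; all names are illustrative.
    def process_1000(first, second, third, fourth):
        partnerships = set()
        # 1002/1004: first hard error detected; partnership set (or a
        # preconfigured partnership applied).
        partnerships.add((first, second))
        # 1006: another hard error detected within the partnership.
        # 1008: cancel/reverse/unset the partnership.
        partnerships.discard((first, second))
        # 1010/1012: decide whether to keep the same granularity; either
        # way, each half is repartnered with a non-failed portion.
        partnerships.add((first, third))    # 1014 or 1018
        partnerships.add((second, fourth))  # 1016 or 1020
        return partnerships

    # Example at bank granularity, using non-failed banks as new partners:
    process_1000("Rank A / Bank 0", "Rank B / Bank 0",
                 "Rank A / Bank 15", "Rank B / Bank 15")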
The other new lockstep partnership can be of the same granularity as the first and second portions, or can be of a different granularity also.

[00106] In one embodiment, the determination to change the granularity can include determining that a subsequent error can be grouped with one or more previous errors by adjusting to a higher or coarser granularity, and setting a new lockstep partnership between portions at the coarser granularity. Thus, for example, for a subsequent bank failure in the same DRAM that already has at least one failed bank, the memory controller can determine to fail the entire DRAM. Then, the memory controller can set a new partnership based on partnering up the failed DRAM with a non-failed DRAM by mapping the data of the whole DRAM out.

[00107] Figure 11 is a block diagram of an embodiment of a computing system in which dynamic lockstep management can be implemented. System 1100 represents a computing device in accordance with any embodiment described herein, and can be a laptop computer, a desktop computer, a server, a gaming or entertainment control system, a scanner, copier, printer, routing or switching device, or other electronic device. System 1100 includes processor 1120, which provides processing, operation management, and execution of instructions for system 1100. Processor 1120 can include any type of microprocessor, central processing unit (CPU), processing core, or other processing hardware to provide processing for system 1100. Processor 1120 controls the overall operation of system 1100, and can be or include one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such devices.

[00108] Memory subsystem 1130 represents the main memory of system 1100, and provides temporary storage for code to be executed by processor 1120, or data values to be used in executing a routine. Memory subsystem 1130 can include one or more memory devices such as read-only memory (ROM), flash memory, one or more varieties of random access memory (RAM), or other memory devices, or a combination of such devices. Memory subsystem 1130 stores and hosts, among other things, operating system (OS) 1136 to provide a software platform for execution of instructions in system 1100. Additionally, other instructions 1138 are stored and executed from memory subsystem 1130 to provide the logic and the processing of system 1100. OS 1136 and instructions 1138 are executed by processor 1120. Memory subsystem 1130 includes memory device 1132 where it stores data, instructions, programs, or other items. In one embodiment, memory subsystem includes memory controller 1134, which is a memory controller to generate and issue commands to memory device 1132. It will be understood that memory controller 1134 could be a physical part of processor 1120.

[00109] Processor 1120 and memory subsystem 1130 are coupled to bus/bus system 1110. Bus 1110 is an abstraction that represents any one or more separate physical buses, communication lines/interfaces, and/or point-to-point connections, connected by appropriate bridges, adapters, and/or controllers.
Therefore, bus 1110 can include, for example, one or more of a system bus, a Peripheral Component Interconnect (PCI) bus, a HyperTransport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus (commonly referred to as "Firewire"). The buses of bus 1110 can also correspond to interfaces in network interface 1150.

[00110] System 1100 also includes one or more input/output (I/O) interface(s) 1140, network interface 1150, one or more internal mass storage device(s) 1160, and peripheral interface 1170 coupled to bus 1110. I/O interface 1140 can include one or more interface components through which a user interacts with system 1100 (e.g., video, audio, and/or alphanumeric interfacing). Network interface 1150 provides system 1100 the ability to communicate with remote devices (e.g., servers, other computing devices) over one or more networks. Network interface 1150 can include an Ethernet adapter, wireless interconnection components, USB (universal serial bus), or other wired or wireless standards-based or proprietary interfaces.

[00111] Storage 1160 can be or include any conventional medium for storing large amounts of data in a nonvolatile manner, such as one or more magnetic, solid state, or optical based disks, or a combination. Storage 1160 holds code or instructions and data 1162 in a persistent state (i.e., the value is retained despite interruption of power to system 1100). Storage 1160 can be generically considered to be a "memory," although memory 1130 is the executing or operating memory to provide instructions to processor 1120. Whereas storage 1160 is nonvolatile, memory 1130 can include volatile memory (i.e., the value or state of the data is indeterminate if power is interrupted to system 1100).

[00112] Peripheral interface 1170 can include any hardware interface not specifically mentioned above. Peripherals refer generally to devices that connect dependently to system 1100. A dependent connection is one where system 1100 provides the software and/or hardware platform on which operation executes, and with which a user interacts.

[00113] In one embodiment, memory subsystem 1130 includes lockstep manager 1180, which can be memory management in accordance with any embodiment described herein. In one embodiment, lockstep manager 1180 is part of memory controller 1134. Manager 1180 can perform forward and reverse sparing. In particular, manager 1180 can employ reverse sparing to reverse a lockstep partnership assignment and reassign one or both of the lockstep partners to new lockstep partnerships. In one embodiment, system 1100 is a server system that includes multiple server boards or server blades in a chassis system. Each blade can include multiple processors 1120, and many memory devices 1132. In one embodiment, lockstep manager 1180 can dynamically change lockstep partnerships for portions of devices 1132.

[00114] Figure 12 is a block diagram of an embodiment of a mobile device in which dynamic lockstep management can be implemented. Device 1200 represents a mobile computing device, such as a computing tablet, a mobile phone or smartphone, a wireless-enabled e-reader, wearable computing device, or other mobile device. It will be understood that certain of the components are shown generally, and not all components of such a device are shown in device 1200.
[00115] Device 1200 includes processor 1210, which performs the primary processing operations of device 1200. Processor 1210 can include one or more physical devices, such as microprocessors, application processors, microcontrollers, programmable logic devices, or other processing means. The processing operations performed by processor 1210 include the execution of an operating platform or operating system on which applications and/or device functions are executed. The processing operations include operations related to I/O (input/output) with a human user or with other devices, operations related to power management, and/or operations related to connecting device 1200 to another device. The processing operations can also include operations related to audio I/O and/or display I/O.

[00116] In one embodiment, device 1200 includes audio subsystem 1220, which represents hardware (e.g., audio hardware and audio circuits) and software (e.g., drivers, codecs) components associated with providing audio functions to the computing device. Audio functions can include speaker and/or headphone output, as well as microphone input. Devices for such functions can be integrated into device 1200, or connected to device 1200. In one embodiment, a user interacts with device 1200 by providing audio commands that are received and processed by processor 1210.

[00117] Display subsystem 1230 represents hardware (e.g., display devices) and software (e.g., drivers) components that provide a visual and/or tactile display for a user to interact with the computing device. Display subsystem 1230 includes display interface 1232, which includes the particular screen or hardware device used to provide a display to a user. In one embodiment, display interface 1232 includes logic separate from processor 1210 to perform at least some processing related to the display. In one embodiment, display subsystem 1230 includes a touchscreen device that provides both output and input to a user. In one embodiment, display subsystem 1230 includes a high definition (HD) display that provides an output to a user. High definition can refer to a display having a pixel density of approximately 100 PPI (pixels per inch) or greater, and can include formats such as full HD (e.g., 1080p), retina displays, 4K (ultra high definition or UHD), or others.

[00118] I/O controller 1240 represents hardware devices and software components related to interaction with a user. I/O controller 1240 can operate to manage hardware that is part of audio subsystem 1220 and/or display subsystem 1230. Additionally, I/O controller 1240 illustrates a connection point for additional devices that connect to device 1200 through which a user might interact with the system. For example, devices that can be attached to device 1200 might include microphone devices, speaker or stereo systems, video systems or other display devices, keyboard or keypad devices, or other I/O devices for use with specific applications such as card readers or other devices.

[00119] As mentioned above, I/O controller 1240 can interact with audio subsystem 1220 and/or display subsystem 1230. For example, input through a microphone or other audio device can provide input or commands for one or more applications or functions of device 1200. Additionally, audio output can be provided instead of or in addition to display output. In another example, if display subsystem 1230 includes a touchscreen, the display device also acts as an input device, which can be at least partially managed by I/O controller 1240.
There can also be additional buttons or switches on device 1200 to provide I/O functions managed by I/O controller 1240.

[00120] In one embodiment, I/O controller 1240 manages devices such as accelerometers, cameras, light sensors or other environmental sensors, gyroscopes, global positioning system (GPS), or other hardware that can be included in device 1200. The input can be part of direct user interaction, as well as providing environmental input to the system to influence its operations (such as filtering for noise, adjusting displays for brightness detection, applying a flash for a camera, or other features). In one embodiment, device 1200 includes power management 1250 that manages battery power usage, charging of the battery, and features related to power saving operation.

[00121] Memory subsystem 1260 includes memory device(s) 1262 for storing information in device 1200. Memory subsystem 1260 can include nonvolatile (state does not change if power to the memory device is interrupted) and/or volatile (state is indeterminate if power to the memory device is interrupted) memory devices. Memory 1260 can store application data, user data, music, photos, documents, or other data, as well as system data (whether long-term or temporary) related to the execution of the applications and functions of system 1200. In one embodiment, memory subsystem 1260 includes memory controller 1264 (which could also be considered part of the control of system 1200, and could potentially be considered part of processor 1210). Memory controller 1264 includes a scheduler to generate and issue commands to memory device 1262.

[00122] Connectivity 1270 includes hardware devices (e.g., wireless and/or wired connectors and communication hardware) and software components (e.g., drivers, protocol stacks) to enable device 1200 to communicate with external devices. The external devices could be separate devices, such as other computing devices, wireless access points or base stations, as well as peripherals such as headsets, printers, or other devices.

[00123] Connectivity 1270 can include multiple different types of connectivity. To generalize, device 1200 is illustrated with cellular connectivity 1272 and wireless connectivity 1274. Cellular connectivity 1272 refers generally to cellular network connectivity provided by wireless carriers, such as provided via GSM (global system for mobile communications) or variations or derivatives, CDMA (code division multiple access) or variations or derivatives, TDM (time division multiplexing) or variations or derivatives, LTE (long term evolution - also referred to as "4G"), or other cellular service standards. Wireless connectivity 1274 refers to wireless connectivity that is not cellular, and can include personal area networks (such as Bluetooth), local area networks (such as WiFi), and/or wide area networks (such as WiMax), or other wireless communication. Wireless communication refers to transfer of data through the use of modulated electromagnetic radiation through a non-solid medium. Wired communication occurs through a solid communication medium.

[00124] Peripheral connections 1280 include hardware interfaces and connectors, as well as software components (e.g., drivers, protocol stacks) to make peripheral connections. It will be understood that device 1200 could both be a peripheral device ("to" 1282) to other computing devices, as well as have peripheral devices ("from" 1284) connected to it.
Device 1200 commonly has a "docking" connector to connect to other computing devices for purposes such as managing (e.g., downloading and/or uploading, changing, synchronizing) content on device 1200. Additionally, a docking connector can allow device 1200 to connect to certain peripherals that allow device 1200 to control content output, for example, to audiovisual or other systems.

[00125] In addition to a proprietary docking connector or other proprietary connection hardware, device 1200 can make peripheral connections 1280 via common or standards-based connectors. Common types can include a Universal Serial Bus (USB) connector (which can include any of a number of different hardware interfaces), DisplayPort including MiniDisplayPort (MDP), High Definition Multimedia Interface (HDMI), Firewire, or other types.

[00126] In one embodiment, memory subsystem 1260 includes lockstep manager 1266, which can be memory management in accordance with any embodiment described herein. In one embodiment, lockstep manager 1266 is part of memory controller 1264. Manager 1266 can perform forward and reverse sparing. In particular, manager 1266 can employ reverse sparing to reverse a lockstep partnership assignment and reassign one or both of the lockstep partners to new lockstep partnerships.

[00127] In one aspect, a method for managing errors in a memory subsystem includes: detecting a hard error in a first memory portion set in a lockstep partnership as a lockstep partner with a second memory portion, wherein error correction is to be spread over the lockstep partners; responsive to detecting the hard error, canceling the lockstep partnership between the first memory portion and the second memory portion; creating a new lockstep partnership between the first memory portion and a third memory portion as lockstep partners; and creating a new lockstep partnership between the second memory portion and a fourth memory portion as lockstep partners.

[00128] In one embodiment, detecting the hard error comprises detecting a second hard error in the lockstep partnership. In one embodiment, the lockstep partnership comprises a virtual lockstep partnership where the hard error is mapped out to a spare memory portion. In one embodiment, the first and second memory portions comprise ranks of memory. In one embodiment, the first and second memory portions comprise banks of memory. In one embodiment, the first and second memory portions comprise DRAM (dynamic random access memory) devices. In one embodiment, the first and second memory portions comprise DRAM devices in separate ranks. In one embodiment, the third and fourth memory portions comprise DRAM devices in different ranks. In one embodiment, at least one of creating the new lockstep partnership between the first memory portion and a third memory portion as lockstep partners or creating the new lockstep partnership between the second memory portion and a fourth memory portion as lockstep partners includes changing a level of granularity of the lockstep partnership. In one embodiment, detecting the hard error in the first memory portion comprises detecting a hard error in a memory portion that can be grouped with the first memory portion at a different level of granularity, and wherein creating the new lockstep partnership comprises creating a new lockstep partnership between the first memory portion and the third memory portion at the different level of granularity.
In one embodiment, creating the new lockstep partnerships comprises dynamically changing a lockstep partnership entry in a lockstep table. In one embodiment, detecting the hard error comprises detecting a second hard error, and further comprising, prior to detecting the second hard error: detecting a first hard error in either the first or the second memory portions; and setting an original lockstep partnership between the first memory portion and the second memory portion as lockstep partners in response to detecting the first hard error. In one embodiment, detecting the hard error comprises detecting the hard error in the first memory portion set in a predetermined lockstep partnership with the second memory portion.

[00129] In one aspect, a memory management device to manage errors in an associated memory subsystem includes: error detection logic to detect a hard error in a first memory portion of the memory subsystem, wherein the first memory portion is set in a lockstep partnership as a lockstep partner with a second memory portion, wherein error correction is to be spread over the lockstep partners; and error correction logic to cancel the lockstep partnership between the first and second memory portions responsive to detecting the hard error in the first memory portion, and to create new lockstep partnerships between the first memory portion and a third memory portion as lockstep partners and between the second memory portion and a fourth memory portion as lockstep partners.

[00130] In one aspect, the memory management device is included in a memory controller of a memory subsystem including multiple DRAMs (dynamic random access memory devices) each including a memory array, wherein the memory arrays are addressable according to multiple different levels of granularity; wherein the memory controller includes error detection logic to detect a hard error in a first memory portion of the memory subsystem, wherein the first memory portion is set in a lockstep partnership as a lockstep partner with a second memory portion, wherein error correction is to be spread over the lockstep partners; and error correction logic to cancel the lockstep partnership between the first and second memory portions responsive to detecting the hard error in the first memory portion, and to create new lockstep partnerships between the first memory portion and a third memory portion as lockstep partners and between the second memory portion and a fourth memory portion as lockstep partners; and wherein the memory subsystem is incorporated into a chassis system to couple to a blade server.

[00131] In one embodiment, the lockstep partnership comprises a virtual lockstep partnership where the hard error is mapped out to a spare memory portion. In one embodiment, the first and second memory portions comprise one of ranks of memory, banks of memory, or DRAM (dynamic random access memory) devices. In one embodiment, the first and second memory portions comprise DRAM devices in separate ranks. In one embodiment, the third and fourth memory portions comprise DRAM devices in different ranks. In one embodiment, the error correction logic is to change a level of granularity of at least one lockstep partnership when creating the new lockstep partnership between the first memory portion and a third memory portion as lockstep partners, or the new lockstep partnership between the second memory portion and a fourth memory portion as lockstep partners.
In one embodiment, the error detection logic is to detect the hard error in a memory portion that can be grouped with the first memory portion at a different level of granularity, and wherein the error correction logic is to create the new lockstep partnership between the first memory portion and the third memory portion at the different level of granularity. In one embodiment, the error correction logic is to create the new lockstep partnerships by dynamically changing a lockstep partnership entry in a lockstep table. In one embodiment, the error detection logic is to detect a second hard error, and further, prior to detecting the second hard error, the error detection logic is to detect a first hard error in either the first or the second memory portions; and the error correction logic is to set an original lockstep partnership between the first memory portion and the second memory portion as lockstep partners in response to detecting the first hard error. In one embodiment, the error detection logic is to detect the hard error in the first memory portion set in a predetermined lockstep partnership with the second memory portion.

[00132] In one aspect, an apparatus for managing errors in a memory subsystem includes: means for detecting a hard error in a first memory portion set in a lockstep partnership as a lockstep partner with a second memory portion, wherein error correction is to be spread over the lockstep partners; means for canceling, responsive to detecting the hard error, the lockstep partnership between the first memory portion and the second memory portion; means for creating a new lockstep partnership between the first memory portion and a third memory portion as lockstep partners; and means for creating a new lockstep partnership between the second memory portion and a fourth memory portion as lockstep partners. The apparatus can include means for performing operations in accordance with any embodiment of the method as set forth above.

[00133] In one aspect, an article of manufacture comprising a computer readable storage medium having content stored thereon, which when accessed causes a machine to perform operations including: detecting a hard error in a first memory portion set in a lockstep partnership as a lockstep partner with a second memory portion, wherein error correction is to be spread over the lockstep partners; responsive to detecting the hard error, canceling the lockstep partnership between the first memory portion and the second memory portion; creating a new lockstep partnership between the first memory portion and a third memory portion as lockstep partners; and creating a new lockstep partnership between the second memory portion and a fourth memory portion as lockstep partners.
The article of manufacture can include content for performing operations in accordance with any embodiment of the method as set forth above.

[00134] In one aspect, a method for managing errors in a memory subsystem includes: detecting a hard error in a first memory portion; setting a lockstep partnership between the first memory portion and a second memory portion as lockstep partners, wherein error correction is spread over the first and second memory portions; detecting a hard error in the second memory portion; responsive to detecting the hard error in the second memory portion, reversing the lockstep partnership between the first memory portion and the second memory portion; setting a new lockstep partnership between the first memory portion and a third memory portion as lockstep partners; and setting a new lockstep partnership between the second memory portion and a fourth memory portion as lockstep partners.

[00135] In one embodiment, detecting the hard error comprises detecting a second hard error in the lockstep partnership. In one embodiment, the lockstep partnership comprises a virtual lockstep partnership where the hard error is mapped out to a spare memory portion. In one embodiment, the first and second memory portions comprise ranks of memory. In one embodiment, the first and second memory portions comprise banks of memory. In one embodiment, the first and second memory portions comprise DRAM (dynamic random access memory) devices. In one embodiment, the first and second memory portions comprise DRAM devices in separate ranks. In one embodiment, the third and fourth memory portions comprise DRAM devices in different ranks. In one embodiment, at least one of setting the new lockstep partnership between the first memory portion and a third memory portion as lockstep partners or setting the new lockstep partnership between the second memory portion and a fourth memory portion as lockstep partners includes changing a level of granularity of the lockstep partnership. In one embodiment, detecting the hard error in the first memory portion comprises detecting a hard error in a memory portion that can be grouped with the first memory portion at a different level of granularity, and wherein setting the new lockstep partnership comprises setting a new lockstep partnership between the first memory portion and the third memory portion at the different level of granularity. In one embodiment, setting the new lockstep partnerships comprises dynamically changing a lockstep partnership entry in a lockstep table.
In one embodiment, setting the original lockstep partnership between the first memory portion and the second memory portion as lockstep partners comprises implementing an adaptive dual device data correction (ADDDC) operation.

[00136] In one aspect, a memory management device to manage errors in an associated memory subsystem includes: error detection logic to detect a first hard error in a first memory portion of the memory subsystem, and subsequently detect a second hard error; and error correction logic to set a lockstep partnership between the first memory portion and the second memory portion as lockstep partners in response to detecting the first hard error, to spread error correction over the first and second memory portions, and to reverse the lockstep partnership between the first and second memory portions responsive to subsequently detecting the second hard error, and to set new lockstep partnerships between the first memory portion and a third memory portion as lockstep partners and between the second memory portion and a fourth memory portion as lockstep partners responsive to subsequently detecting the second hard error.

[00137] In one aspect, the memory management device is included in a memory controller of a memory subsystem including multiple DRAMs (dynamic random access memory devices) each including a memory array, wherein the memory arrays are addressable according to multiple different levels of granularity; wherein the memory controller includes error detection logic to detect a first hard error in a first memory portion of the memory subsystem, and subsequently detect a second hard error; and error correction logic to set a lockstep partnership between the first memory portion and the second memory portion as lockstep partners in response to detecting the first hard error, to spread error correction over the first and second memory portions, and to reverse the lockstep partnership between the first and second memory portions responsive to subsequently detecting the second hard error, and to set new lockstep partnerships between the first memory portion and a third memory portion as lockstep partners and between the second memory portion and a fourth memory portion as lockstep partners responsive to subsequently detecting the second hard error; and wherein the memory subsystem is incorporated into a chassis system to couple to a blade server.

[00138] In one embodiment, the lockstep partnership comprises a virtual lockstep partnership where the hard error is mapped out to a spare memory portion. In one embodiment, the first and second memory portions comprise one of ranks of memory, banks of memory, or DRAM (dynamic random access memory) devices. In one embodiment, the first and second memory portions comprise DRAM devices in separate ranks. In one embodiment, the third and fourth memory portions comprise DRAM devices in different ranks. In one embodiment, the error correction logic is to change a level of granularity of at least one lockstep partnership when setting the new lockstep partnership between the first memory portion and a third memory portion as lockstep partners, or the new lockstep partnership between the second memory portion and a fourth memory portion as lockstep partners.
In one embodiment, the error detection logic is to detect the hard error in a memory portion that can be grouped with the first memory portion at a different level of granularity, and wherein the error correction logic is to set the new lockstep partnership between the first memory portion and the third memory portion at the different level of granularity. In one embodiment, the error correction logic is to set the new lockstep partnerships by dynamically changing a lockstep partnership entry in a lockstep table. In one embodiment, the error correction logic is to set the original lockstep partnership between the first memory portion and the second memory portion as lockstep partners as an operation of an adaptive dual device data correction (ADDDC) implementation.

[00139] Flow diagrams as illustrated herein provide examples of sequences of various process actions. The flow diagrams can indicate operations to be executed by a software or firmware routine, as well as physical operations. In one embodiment, a flow diagram can illustrate the state of a finite state machine (FSM), which can be implemented in hardware and/or software. Although shown in a particular sequence or order, unless otherwise specified, the order of the actions can be modified. Thus, the illustrated embodiments should be understood only as an example, and the process can be performed in a different order, and some actions can be performed in parallel. Additionally, one or more actions can be omitted in various embodiments; thus, not all actions are required in every embodiment. Other process flows are possible.

[00140] To the extent various operations or functions are described herein, they can be described or defined as software code, instructions, configuration, and/or data. The content can be directly executable ("object" or "executable" form), source code, or difference code ("delta" or "patch" code). The software content of the embodiments described herein can be provided via an article of manufacture with the content stored thereon, or via a method of operating a communication interface to send data via the communication interface. A machine readable storage medium can cause a machine to perform the functions or operations described, and includes any mechanism that stores information in a form accessible by a machine (e.g., computing device, electronic system, etc.), such as recordable/non-recordable media (e.g., read only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.). A communication interface includes any mechanism that interfaces to any of a hardwired, wireless, optical, etc., medium to communicate to another device, such as a memory bus interface, a processor bus interface, an Internet connection, a disk controller, etc. The communication interface can be configured by providing configuration parameters and/or sending signals to prepare the communication interface to provide a data signal describing the software content. The communication interface can be accessed via one or more commands or signals sent to the communication interface.

[00141] Various components described herein can be a means for performing the operations or functions described. Each component described herein includes software, hardware, or a combination of these.
The components can be implemented as software modules, hardware modules, special-purpose hardware (e.g., application specific hardware, application specific integrated circuits (ASICs), digital signal processors (DSPs), etc.), embedded controllers, hardwired circuitry, etc.

[00142] Besides what is described herein, various modifications can be made to the disclosed embodiments and implementations of the invention without departing from their scope. Therefore, the illustrations and examples herein should be construed in an illustrative, and not a restrictive, sense. The scope of the invention should be measured solely by reference to the claims that follow. |
A host may be coupled to a switched fabric and may include a processor, a host memory coupled to the processor, and a host-fabric adapter coupled to the host memory and the processor to interface with the switched fabric. The host-fabric adapter accesses a translation and protection table from the host memory for a data transaction. The translation and protection table entries include a region identifier field and a protection domain field used to validate an access request. |
1. A method comprising:generating a protection domain from a virtual address and a memory handle of a descriptor associated with a data transfer request; generating a region identifier from the memory handle; identifying an entry in a translation protection table (TPT), the entry having a corresponding physical address of a memory; determining if the generated protection domain corresponds to a value in a protection domain field of the entry; determining if the generated region identifier corresponds to a value in a region identifier field of the entry; and if the generated protection domain corresponds to a value in a protection domain field of the entry, and if the generated region identifier corresponds to the value in the region identifier field of the TPT, allowing access to the memory at the physical address. 2. The method of claim 1, wherein the memory handle includes a key portion and a handle portion.3. The method of claim 2, wherein said generating a region identifier from the memory handle comprises combining the key portion with a portion of the handle portion.4. The method of claim 1, wherein the protection domain comprises a protection tag in an NGIO (New Generation Input/Output) architecture.5. An apparatus comprising a host-fabric adapter capable of:generating a protection domain from a virtual address and a memory handle of a descriptor associated with a data transfer request; generating a region identifier from the memory handle; identifying an entry in a translation protection table (TPT), the entry having a corresponding physical address of a memory; determining if the generated protection domain corresponds to a value in a protection domain field of the entry; determining if the generated region identifier corresponds to a value in a region identifier field of the entry; and if the generated protection domain corresponds to a value in a protection domain field of the entry, and if the generated region identifier corresponds to the value in the region identifier field of the TPT, allowing access to the memory at the physical address. 6. The apparatus of claim 5, wherein the memory handle includes a key portion and a handle portion.7. The apparatus of claim 6, wherein said generating a region identifier from the memory handle comprises combining the key portion with a portion of the handle portion.8. The apparatus of claim 5, wherein the protection domain comprises a protection tag in an NGIO (New Generation Input/Output) architecture.9. A system comprising:at least one virtual buffer, each virtual buffer having at least one page of data; and a host-fabric adapter capable of allowing access to one or more pages of the at least one virtual buffer by: generating a protection domain from a virtual address and a memory handle of a descriptor associated with a data transfer request; generating a region identifier from the memory handle; identifying an entry in a translation protection table (TPT), the entry having a corresponding physical address of the at least one buffer; determining if the generated protection domain corresponds to a value in a protection domain field of the entry; determining if the generated region identifier corresponds to a value in a region identifier field of the entry; and if the generated protection domain corresponds to a value in a protection domain field of the entry, and if the generated region identifier corresponds to the value in the region identifier field of the TPT, allowing access to the at least one buffer at the physical address. 10. 
The system of claim 9, wherein the memory handle includes a key portion and a handle portion.11. The system of claim 10, wherein said generating a region identifier from the memory handle comprises combining the key portion with a portion of the handle portion.12. The system of claim 9, wherein the protection domain comprises a protection tag in an NGIO (New Generation Input/Output) architecture.13. An article comprising a machine-readable medium having machine-accessible instructions, the instructions, when executed by a machine, result in the following:generating a protection domain from a virtual address and a memory handle of a descriptor associated with a data transfer request; generating a region identifier from the memory handle; identifying an entry in a translation protection table (TPT), the entry having a corresponding physical address of a memory; determining if the generated protection domain corresponds to a value in a protection domain field of the entry; determining if the generated region identifier corresponds to a value in a region identifier field of the entry; and if the generated protection domain corresponds to a value in a protection domain field of the entry, and if the generated region identifier corresponds to the value in the region identifier field of the TPT, allowing access to the memory at the physical address. 14. The article of claim 13, wherein the memory handle includes a key portion and a handle portion.15. The article of claim 14, wherein said instructions that result in generating a region identifier from the memory handle further result in combining the key portion with a portion of the handle portion.16. The article of claim 13, wherein the protection domain comprises a protection tag in an NGIO (New Generation Input/Output) architecture. |
TECHNICAL FIELD
The present invention relates to a data network, and more particularly, relates to the arrangement and use of a region identifier field provided in translation entries of a translation and protection table (TPT).

BACKGROUND
In disadvantageous network architectures, the operating system (OS) virtualizes network hardware into a set of logical communication endpoints and multiplexes access to the hardware among these endpoints (e.g., computers, servers and/or I/O devices). The operating system (OS) may also implement protocols that make communication between connected endpoints reliable (e.g., transmission control protocol, TCP).

Generally, the operating system (OS) receives a request to send a message (data) and a virtual address that specifies the location of the data associated with the message, copies the message into a message buffer and translates the virtual address. The OS then schedules a memory copy operation to copy data from the message buffer memory to a target device. A translation and protection table (TPT) may be used to translate the virtual address, received in the form of descriptors, into physical addresses and to define memory regions before a host network adapter can access them (e.g., for transfer to/from a remote device) during data transfer (movement) operations. There is a need for a more efficient technique of using and accessing the translation and protection table (TPT) to perform virtual-to-physical address translations while providing additional memory access protection during data transfer operations.

BRIEF DESCRIPTION OF THE DRAWINGS
A more complete appreciation of example embodiments of the present invention, and many of the attendant advantages of the present invention, will become readily apparent as the same becomes better understood by reference to the following detailed description when considered in conjunction with the accompanying drawings in which like reference symbols indicate the same or similar components, wherein:
FIG. 1 illustrates an example data network according to an embodiment of the present invention;
FIG. 2 illustrates a block diagram of a host of an example data network according to an embodiment of the present invention;
FIG. 3 illustrates a block diagram of a host of an example data network according to another embodiment of the present invention;
FIG. 4 illustrates an example software driver stack of a host of an example data network according to an embodiment of the present invention;
FIG. 5 illustrates an example translation and protection table;
FIG. 6 illustrates an example translation and protection table;
FIG. 7 illustrates an example translation and protection table entry according to the present invention;
FIG. 8 illustrates one example embodiment of how the region identifier field may be created in accordance with the present invention;
FIGS. 9A and 9B illustrate examples of descriptors;
FIG. 10 illustrates an example send processing technique according to the present invention; and
FIG. 11 illustrates an example write processing technique according to the present invention.

DETAILED DESCRIPTION
The present invention is applicable for use with all types of data networks and clusters designed to link together computers, servers, peripherals, storage devices, and communication devices for communications.
Examples of such data networks may include a local area network (LAN), a wide area network (WAN), a campus area network (CAN), a metropolitan area network (MAN), a global area network (GAN), a storage area network and a system area network (SAN), including newly developed data networks using Next Generation I/O (NGIO), Future I/O (FIO), Infiniband and Server Net and those networks which may become available as computer technology develops in the future. LAN systems may include Ethernet, FDDI (Fiber Distributed Data Interface), Token Ring LAN, Asynchronous Transfer Mode (ATM) LAN, Fiber Channel, and Wireless LAN. However, for the sake of simplicity, discussions will concentrate mainly on exemplary use of a simple data network having several example hosts and I/O units including I/O controllers that are linked together by an interconnection fabric, although the scope of the present invention is not limited thereto.

Attention now is directed to the drawings, and particularly to FIG. 1, in which an example data network having several interconnected endpoints (nodes) for data communications is illustrated. As shown in FIG. 1, the data network 100 may include, for example, an interconnection fabric (hereinafter referred to as "switched fabric") 102 of one or more switches A, B and C and corresponding physical links, and several endpoints (nodes) which may correspond to one or more I/O units 1 and 2, computers and servers such as, for example, host 110 and host 112. I/O unit 1 may include one or more controllers connected thereto, including I/O controller 1 (IOC1) and I/O controller 2 (IOC2). Likewise, I/O unit 2 may include an I/O controller 3 (IOC3) connected thereto. Each I/O controller 1, 2 and 3 (IOC1, IOC2 and IOC3) may operate to control one or more I/O devices. For example, I/O controller 1 (IOC1) of the I/O unit 1 may be connected to I/O device 122, while I/O controller 2 (IOC2) may be connected to I/O device 124. Similarly, I/O controller 3 (IOC3) of the I/O unit 2 may be connected to I/O devices 132 and 134. The I/O devices may be any of several types of I/O devices, such as storage devices (e.g., a hard disk drive, tape drive) or other I/O devices.

The hosts and I/O units including attached I/O controllers and I/O devices may be organized into groups known as clusters, with each cluster including one or more hosts and typically one or more I/O units (each I/O unit including one or more I/O controllers). The hosts and I/O units may be interconnected via a switched fabric 102, which is a collection of switches A, B and C and corresponding physical links connected between the switches A, B and C.

In addition, each I/O unit includes one or more I/O controller-fabric (IOC-fabric) adapters for interfacing between the switched fabric 102 and the I/O controllers (e.g., IOC1, IOC2 and IOC3). For example, IOC-fabric adapter 120 may interface the I/O controllers 1 and 2 (IOC1 and IOC2) of the I/O unit 1 to the switched fabric 102, while IOC-fabric adapter 130 interfaces the I/O controller 3 (IOC3) of the I/O unit 2 to the switched fabric 102.

The specific number and arrangement of hosts, I/O units, I/O controllers, I/O devices, switches and links shown in FIG. 1 are provided simply as an example data network. A wide variety of implementations and arrangements of any number of hosts, I/O units, I/O controllers, I/O devices, switches and links in all types of data networks may be possible.

An example embodiment of a host (e.g., host 110 or host 112) may be shown in FIG. 2.
Referring to FIG. 2, a host 110 may include a processor 202 coupled to a host bus 203. An I/O and memory controller 204 (or chipset) may be connected to the host bus 203. A main memory 206 may be connected to the I/O and memory controller 204. An I/O bridge 208 may operate to bridge or interface between the I/O and memory controller 204 and an I/O bus 205. Several I/O controllers may be attached to I/O bus 205, including I/O controllers 210 and 212. I/O controllers 210 and 212 (including any I/O devices connected thereto) may provide bus-based I/O resources.

One or more host-fabric adapters 220 may also be connected to the I/O bus 205. Alternatively, the host-fabric adapter 220 may be connected directly to the I/O and memory controller (or chipset) 204 to avoid the inherent limitations of the I/O bus 205 (see FIG. 3). In either situation, the host-fabric adapter 220 may be considered to be a type of a network interface card (e.g., NIC, which usually includes hardware and firmware) for interfacing the host 110 to a switched fabric 102. The host-fabric adapter 220 may be utilized to provide fabric communication capabilities for the host 110. For example, the host-fabric adapter 220 converts data between a host format and a format that is compatible with the switched fabric 102. For data sent from the host 110, the host-fabric adapter 220 formats the data into one or more packets containing a sequence of one or more cells including header information and data information.

According to one example embodiment or implementation, the hosts or I/O units of the data network of the present invention may be compatible with an Infiniband architecture. Infiniband information/specifications are presently under development and will be published by the Infiniband Trade Association (formed Aug. 27, 1999) having the Internet address of http://www.Infinibandta.org. The hosts or I/O units of the data network may also be compatible with the "Next Generation Input/Output (NGIO) Specification" as set forth by the NGIO Forum on Mar. 26, 1999. The host-fabric adapter 220 may be a Host Channel Adapter (HCA), and the IOC-fabric adapters may be Target Channel Adapters (TCA). The host channel adapter (HCA) may be used to provide an interface between the host 110 or 112 and the switched fabric 102 via high speed serial links. Similarly, target channel adapters (TCA) may be used to provide an interface between the switched fabric 102 and the I/O controller of either an I/O unit 1 or 2, or another network, including, but not limited to, local area network (LAN), wide area network (WAN), Ethernet, ATM and fibre channel network, via high speed serial links. Both the host channel adapter (HCA) and the target channel adapter (TCA) may be implemented in the Infiniband architecture or in compliance with "Next Generation I/O Architecture: Host Channel Adapter Software Specification, Revision 1.0" as set forth by Intel Corp. on May 13, 1999. In addition, each host may contain one or more host-fabric adapters (e.g., HCAs). However, Infiniband and NGIO are merely example embodiments or implementations of the present invention, and the invention is not limited thereto. Rather, the present invention may be applicable to a wide variety of data networks, hosts and I/O controllers.

As described with reference to FIGS. 2-3, the I/O units and respective I/O controllers may be connected directly to the switched fabric 102 rather than as part of a host 110.
For example, I/O unit 1 including I/O controllers 1 and 2 (IOC1 and IOC2) and I/O unit 2 including an I/O controller 3 (IOC3) may be directly (or independently) connected to the switched fabric 102. In other words, the I/O units (and their connected I/O controllers and I/O devices) are attached as separate and independent I/O resources to the switched fabric 102 as shown in FIGS. 1-3, as opposed to being part of a host 110. As a result, I/O units including I/O controllers (and I/O devices) connected to the switched fabric 102 may be flexibly assigned to one or more hosts (rather than having a predetermined or fixed host assignment based upon being physically connected to the host's local I/O bus). The I/O units, I/O controllers and I/O devices which are attached to the switched fabric 102 may be referred to as fabric-attached I/O resources (i.e., fabric-attached I/O units, fabric-attached I/O controllers and fabric-attached I/O devices) because these components are directly attached to the switched fabric 102 rather than being connected as part of a host.

In addition, the host 110 may detect and then directly address and exchange data with I/O units and I/O controllers (and attached I/O devices) which are directly attached to the switched fabric 102 (i.e., the fabric-attached I/O controllers), via the host-fabric adapter 220. A software driver stack for the host-fabric adapter 220 may be provided to allow host 110 to exchange data with remote I/O controllers and I/O devices via the switched fabric 102, while preferably being compatible with many currently available operating systems, such as Windows 2000. The host-fabric adapter 220 may include an internal cache 222.

FIG. 4 illustrates an example software driver stack of a host 110 having fabric-attached I/O resources according to an example embodiment of the present invention. As shown in FIG. 4, the host operating system (OS) 400 includes a kernel 410, an I/O manager 420, and a plurality of I/O controller drivers for interfacing to various I/O controllers, including I/O controller drivers 430 and 432. According to an example embodiment, the host operating system (OS) 400 may be Windows 2000, and the I/O manager 420 may be a Plug-n-Play manager.

In addition, a fabric adapter driver software module may be provided to access the switched fabric 102 and information about fabric configuration, fabric topology and connection information. Such a driver software module may include a fabric bus driver (upper driver) 440 and a fabric adapter device driver (lower driver) 442 utilized to establish communication with a target fabric-attached agent (e.g., I/O controller), and perform functions common to most drivers, including, for example, channel abstraction, send/receive I/O transaction messages, remote direct memory access (RDMA) transactions (e.g., read and write operations), queue management, memory registration, descriptor management, message flow control, and transient error handling and recovery. Such a software module may be provided on a tangible medium, such as a floppy disk or compact disk (CD) ROM, via Internet download, or by any other viable method, for plug-in or download into the host operating system (OS).

The host 110 may communicate with I/O units and I/O controllers (and attached I/O devices) which are directly attached to the switched fabric 102 (i.e., the fabric-attached I/O controllers) using a Virtual Interface (VI) architecture.
Under the "Virtual Interface (VI) Architecture Specification, Version 1.0," as set forth by Compaq Corp., Intel Corp., and Microsoft Corp., on Dec. 16, 1997, the VI architecture comprises four basic components: virtual interface (VI) of pairs of works queues (send queue and receive queue), VI consumer which may be an application program, VI provider which may be hardware and software components responsible for instantiating VI, and completion queue (CQ). VI is the mechanism that allows VI consumers to directly access a VI provider. Each VI represents a communication endpoint, and endpoint pairs may be logically connected to support bi-directional, point-to-point data transfer. Under the VI architecture, the host-fabric adapter 220 and VI kernel agent may constitute the VI provider to perform endpoint virtualization directly and subsume the tasks of multiplexing, de-multiplexing, and data transfer scheduling normally performed by the host operating system (OS) kernel 410 and device driver 442 as shown in FIG. 4.The translation and protection table (TPT) 230 shown in FIG. 5 may be used to translate virtual addresses, received in a form of packet descriptors (e.g., a data structure that describes a request to move data), into physical addresses and to define memory regions of the host memory 206 that may be accessed by the host-fabric adapter 220 (validate access to host memory). In addition, the translation and protection table (TPT) 230 may also be used to validate access permission rights of the host-fabric adapter 220 and to perform address translation before accessing any other memory in the host 110. The translation and protection table (TPT) 230 may contain a plurality of TPT entries, for example, TPT(0), TPT(1) . . . TPT(t-2) and TPT(t-1), in the system memory address space. Each TPT entry (also called translation entry) may represent a single page of the host memory 206, typically 4 KB of physically contiguous host memory 206. The TPT table 230 may be stored within the host memory 206 or it may be stored in a different memory area of the host 110.FIG. 6 illustrates another translation and protection table (TPT) 240 that may be used to translate virtual addresses into physical addresses. As discussed above, the translation and protection table 240 may validate access permission rights of the host-fabric adapter 220 and perform address translation before accessing any other memory in the host 110. Each translation and protection table 240 may contain a plurality of entries that are associated with virtual buffers. For the example shown in FIG. 6, three virtual buffers may be associated with the translation protection table 240, namely virtual buffer A(VBa), virtual buffer B(VBb) and virtual buffer C(VBc). Each translation entry in the example of FIG. 6 may correspond to one page of a virtual buffer, or 4 KB of data. For this example, virtual buffer A includes 8 KB of data, virtual buffer B includes 12 KB of data and virtual buffer C includes 12 KB of data. Accordingly, the translation protection table 240 includes entries 244 and 246 for the addresses of page 1 and page 2 of virtual buffer A, respectively. The translation and protection table 240 also includes entries 248, 250 and 252 for the addresses of page 1, page 2 and page 3 of virtual buffer B, respectively. The translation and protection table 240 further includes entries 256, 258 and 260 for the addresses of page 1, page 2 and page 3 of virtual buffer C, respectively. 
The translation and protection table 240 may also include unused portions 242 that separate the pages of the different virtual buffers. That is, an unused portion 242 may separate the pages of virtual buffer A from the pages of virtual buffer B, and a similar unused portion 242 may separate the pages of virtual buffer B from the pages of virtual buffer C. The unused portions 242 may also be provided at the beginning and end of the translation and protection table.

FIG. 7 shows an example translation and protection table entry that includes a region identifier (also called region ID) field 330 in accordance with the present invention. Each TPT entry 300 may correspond to a single registered memory page and include a series of protection attributes (also referred to as access rights) 310, a translation cacheable flag 320, a region identifier field 330, a physical page address field 340, and a protection domain field 350. The protection domain field 350 may also be referred to as a protection tag field, especially with respect to an NGIO architecture.

The protection attributes 310 may include, for example, a Memory Write Enable flag that indicates whether the host-fabric adapter 220 can write to a page (e.g., "1" page is write-enabled, "0" page is not write-enabled); an RDMA Read Enable flag that indicates whether the page can be a source of an RDMA Read operation (e.g., "1" page can be a source, "0" page cannot be a source); and an RDMA Write Enable flag that indicates whether the page can be a target of an RDMA Write operation (e.g., "1" page can be a target, "0" page cannot be a target). The protection attributes 310 may control read and write access to a given memory region. These permissions are generally set for memory regions and virtual interfaces (VIs) when they are created, but may be modified later by changing the attributes of the memory region and/or of the VI. If the protection attributes between a VI and a memory region do not match (during an attempted access), the attribute offering the most protection will be honored. For instance, if a VI has RDMA Read enabled, but the memory region does not, the result is that RDMA reads on that VI from that memory region will fail. RDMA Read and Write access attributes are enforced at the remote end of a connection that is referred to by the descriptor. The Memory Write Enable access attribute is enforced for all memory accesses to the associated page. An attempted message transfer operation that violates a memory region's permission settings may result in a memory protection error, and no data is transferred.

The translation cacheable flag 320 may be utilized to specify whether the host-fabric adapter 220 may cache addresses across transaction boundaries. If the translation cacheable flag 320 is not set ("0" flag), the host-fabric adapter 220 may flush or discard a corresponding single TPT entry from the internal cache 222 and retranslate buffer and/or descriptor addresses each time a new transaction is processed. However, if the translation cacheable flag 320 is set ("1" flag), the host-fabric adapter 220 may choose to reserve or keep the corresponding single TPT entry in the internal cache 222 for future re-use. This way, only a designated TPT entry, as opposed to all TPT entries stored in the internal cache 222 at the end of an IO transaction, may be flushed or discarded from the internal cache 222.
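To make the FIG. 7 layout concrete, the entry might be rendered in C roughly as follows. This is a hedged sketch: the text fixes only the 12-bit width of the region identifier field, so the widths chosen here for the protection domain and the single-bit flags, as well as all identifier names, are assumptions made for illustration.

```c
#include <stdint.h>

/* Illustrative layout of a TPT entry per FIG. 7; field widths other than the
 * 12-bit region identifier are assumed, not specified by the disclosure. */
struct tpt_entry {
    uint32_t mem_write_enable  : 1;  /* protection attributes 310 */
    uint32_t rdma_read_enable  : 1;
    uint32_t rdma_write_enable : 1;
    uint32_t cacheable         : 1;  /* translation cacheable flag 320 */
    uint32_t region_id         : 12; /* region identifier field 330 */
    uint32_t prot_domain       : 16; /* protection domain field 350 (width assumed) */
    uint32_t phys_page;              /* physical page address field 340 (4 KB frame number) */
};
```

Each such entry would describe one 4 KB page of registered memory, matching the one-entry-per-page organization of FIGS. 5 and 6.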
Since the host-fabric adapter 220 is instructed to flush or discard individual TPT entries, as opposed to all cached TPT entries stored in the internal cache 222 at the end of an IO transaction, the number of times the host-fabric adapter 220 must flush cached address translations in the internal cache 222 may be drastically reduced. The software driver of the host operating system 400 (see FIG. 4) may be used to set the status of the translation cacheable flag 320 of each individual TPT entry stored in the internal cache 222.

The physical page address field 340 may include the physical page frame address of the entry. The protection domain field 350 includes identifiers that are associated both with VIs and with host memory regions to define the access permission. Memory access may be allowed by the host-fabric adapter 220 if the protection domain fields of the VI and of the memory region involved are identical. Attempted memory accesses that violate this rule may result in a memory protection error, and no data is transferred. While the protection domain 350 may be used to deny or allow access to the translation and protection table, it is possible that different virtual buffers may be associated with a similar protection domain (or protection tag). In this situation, a wrong address may be accessed for the virtual buffer. Stated differently, in cases where the virtual address supplied with the memory handle is outside of the range of addresses in the associated memory region, the combination of that address and the memory handle can point to a translation entry of a different memory region that contains the same protection domain as the associated memory region. This error could allow the contents of the memory regions to be corrupted without detection.

The region identifier field 330 is provided to further deny or allow access to the translation and protection table. The region identifier field 330 permits memory access by the host-fabric adapter 220 if the region identifier fields of the virtual interface and of the memory region involved are identical. The region identifier field 330 thereby provides further protection functionality. Each translation entry associated with a specific memory region contains the same region identifier.

FIG. 7 shows a region identifier field 330 that would include a 12 bit region identifier. The use of 12 bits is merely an example embodiment. The present invention is not limited to this number of bits, as other lengths of bits for the region identifier field 330 may also be provided in accordance with the present invention. The region identifier field 330 is used to determine whether a protection violation may occur and thus access to memory is denied. This use of the region identifier field 330 provides unique advantages, such as the ability to additionally deny or allow access to the buffers based on information other than the protection domain field 350. As will be discussed below in greater detail, the region identifier field 330, unlike the protection domain field 350, is mathematically related to the entries within the translation and protection table and therefore helps to further distinguish between virtual buffers. Accordingly, the region identifier field 330 may be used to deny or allow access to a memory region even if different virtual buffers are associated with a similar protection domain.

FIG. 8 shows one embodiment of how the region identifier field 330 may be obtained during memory registration or address translation by using a memory handle 400.
The handle 400 may be 32 bits and include a 27 bit handle portion 370 and a 5 bit key portion 360. The 27 bit handle portion 370 is mathematically related to a specific translation entry and thus is related to a physical address of the data. The 5 bit key portion 360, on the other hand, may be assigned by the control software when a virtual memory buffer is registered. The 5 bit key portion 360 may be selected by any number of means, such as a sequential value. For example, the control software may retain a copy of the value of the last key portion used for each protection domain (i.e., for each memory buffer). When a memory registration operation is requested, the control software may look at the last key portion used for a given protection domain (i.e., for one memory buffer) and then advance that value to the next sequential value. The new value of the key portion may be saved and used as the 5 bits of the key portion 360. Other algorithms for advancing the value of the key portion 360 may also be used, such as random selection. The host-fabric adapter 220 may determine the region identifier field 330 by combining the 5 bit key portion 360 with the lower seven bits 372 of the 27 bit handle portion 370. However, the 5 bits of the key portion 360 and the lower seven bits 372 of the 27 bit handle portion 370 are merely an example embodiment. The present invention is also applicable to other lengths for both the key portion 360 and the handle portion 370 and the combination thereof.

The 32 bit handle 400 may be supplied as part of the operation of requesting access to memory via the host channel adapter 220. For operations arriving from a remote host, the incoming message may contain the virtual address, the handle and the length of the request. For operations originating on the local host, the outgoing descriptor may contain the virtual address, the handle and the length of the buffers to be accessed. As discussed above, the 12 bit region identifier field 330 may be generated from a 32 bit handle 400, which includes the 5 bit key portion 360 and the 27 bit handle portion 370.

For purposes of completeness, data transfer operations between host 110 and I/O units and I/O controllers attached to the switched fabric 102 using TPT entries may be described as follows. Data transfer requests may be represented by descriptors. There are two general types of descriptors, send/receive and read/write (RDMA). Descriptors are data structures organized as a list of segments. Each descriptor may begin with a control segment followed by an optional address segment and an arbitrary number of data segments. Control segments may contain control and status information. Address segments, for read/write operations, may contain remote buffer information (i.e., memory associated with the VI targeted to receive the read/write request). Data segments, for both send/receive and read/write operations, may contain information about the local memory (i.e., memory associated with the VI issuing the send/receive or read/write request).

FIG. 9A illustrates an example send/receive type descriptor 900 having a control segment 902 and a data segment 904. Data segment 904, in turn, has a segment length field 906, a memory handle field 908, and a virtual address field 910. The segment length field 906 specifies the length of the message to be sent or that is to be received.
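The combining step described above lends itself to a short sketch. In the following C fragment, the placement of the 5 bit key portion in the upper bits of the 32 bit handle, and the ordering of the key and handle bits within the 12 bit result, are assumptions made for illustration; the disclosure fixes only the field widths and the use of the lower seven bits of the handle portion.

```c
#include <stdint.h>

#define HANDLE_BITS 27u  /* 27 bit handle portion 370 */
#define KEY_BITS     5u  /* 5 bit key portion 360 */

/* Hypothetical derivation of the 12 bit region identifier field 330 from a
 * 32 bit memory handle 400, per the FIG. 8 discussion above. */
static uint16_t region_id_from_handle(uint32_t handle400)
{
    uint32_t key    = handle400 >> HANDLE_BITS;               /* key portion 360 */
    uint32_t handle = handle400 & ((1u << HANDLE_BITS) - 1u); /* handle portion 370 */
    uint32_t low7   = handle & 0x7Fu;                         /* lower seven bits 372 */

    return (uint16_t)((key << 7) | low7);                     /* 12 bit region ID */
}
```

Under this sketch, the same computation would be performed once at memory registration, to populate the region identifier field of each translation entry, and again on every access request, to produce the value compared against the stored field.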
The memory handle field 908 may be used to verify that the sending/requesting process owns the registered memory region indicated by segment length 906 and virtual address 910. In one embodiment, the memory handle 908 may be 32 bits in length, corresponding to the 32 bit handle 400 shown in FIG. 8 that includes the 5 bit key portion 360 and the 27 bit handle portion 370. The 12 bit region identifier field 330 may be formed from this memory handle 908. For a send operation, the virtual address 910 identifies the starting memory location of the message (i.e., data) to be sent in the sending VI's local memory space. For a receive operation, the virtual address 910 identifies the starting memory location of where the received message (data) is to be stored in the requesting VI's local memory space.

FIG. 9B illustrates an example read/write type descriptor 912 having a control segment 914, an address segment 916, and a data segment 918. The address segment 916 has a remote memory handle field 920 and a remote virtual address field 922. The data segment 918 has a segment length field 924, a local memory handle field 926, and a local virtual address field 928. Similar to that discussed above, the remote memory handle 920 and the local memory handle 926 may be 32 bits in length, corresponding to the 32 bit handle 400 shown in FIG. 8 that includes the 5 bit key portion 360 and the 27 bit handle portion 370. For a read operation, the remote virtual address 922 identifies the memory location, in the remote process' memory space, of the message (data) to be read. The local virtual address 928 identifies the starting memory location in the local process' memory space of where the received message is to be placed. The amount of memory to be used to store the message is specified by the segment length field 924. For a write operation, the remote virtual address 922 identifies the memory location in the remote process' memory space where the message (data) is to be written. The local virtual address 928 identifies the starting memory location in the local process' memory space where the message being written is stored. The size of the message is specified by the segment length field 924. The remote memory handle 920 is the memory handle associated with the memory identified by remote virtual address 922. The local memory handle 926 is the memory handle associated with the memory identified by local virtual address 928 and may be 32 bits in length including a 5 bit key portion 360 and a 27 bit handle portion 370. The 12 bit region identifier field 330 may be formed from this local memory handle 926.

When a descriptor is processed by the host-fabric adapter 220, the virtual address and the associated memory handle may be used to generate a protection domain (or protection tag or protection index). As discussed above, the protection domain may be used to identify a TPT entry that corresponds to a single page of registered memory on which the posted descriptor is located. The 32 bit handle 400 may also be used to generate the region identifier field 330, as discussed above, by using the 5 bit key portion 360 and the lower 7 bits of the 27 bit handle portion 370. If the generated region identifier field corresponds with the region identifier field of the TPT table 240 and the protection domains also match, then access to the addresses within the TPT table 240 is allowed.
On the other hand, if the generated region identifier field does not correspond with the region identifier field of the TPT table 240, then access to the address is denied. From the identified TPT entry, the physical address associated with the virtual address may be obtained. In send and receive operations, the virtual address and the memory handle correspond to memory handle field 908 and virtual address field 910 of FIG. 9A. In read and write operations, the virtual address and memory handle correspond to the remote memory handle 920 and remote virtual address field 922 on the remote host-fabric adapter, and local memory handle field 926 and local virtual address field 928 on the local host-fabric adapter 220 of FIG. 9B.

An example send descriptor may be processed by the host-fabric adapter 220 in the manner shown in FIG. 10. The order of the blocks in FIG. 10 is merely an example embodiment, as the blocks may be performed in other orders in accordance with the present invention. In block 1000, the host-fabric adapter 220 retrieves the message's starting virtual address 910 (in the local, or sending, process' memory space), and a memory handle 908 associated with the message's memory region. The virtual address 910 and the memory handle 908 may be used to generate a protection domain (block 1002). The memory handle 908 may also be used to generate the region identifier field 330 as discussed above. The protection domain and the region identifier field 330 may be used to identify and retrieve translation information stored in a TPT entry that corresponds to a single page of registered memory on which the posted descriptor is located (blocks 1004 and 1006). If the retrieved protection domain matches the protection domain associated with the local (sending) process ('yes' prong of block 1008), and if the retrieved region identifier field and the sending process' region identifier field match ('yes' prong of block 1010), then the host-fabric adapter 220 sends the message toward the destination (remote) endpoint by transmitting the message or data (block 1012) via the switched fabric 102 (see FIGS. 1-3). If the retrieved protection domain and the sending process' protection domain do not match ('no' prong of block 1008), or if the retrieved region identifier field and the sending process' region identifier field do not match ('no' prong of block 1010), then a memory protection fault may be generated (blocks 1013 and 1014) and no data is transferred via the switched fabric 102. Receive descriptors may be processed in an analogous fashion.

Similarly, an example read descriptor may be processed by the host-fabric adapter 220 in the manner shown in FIG. 11. The order of the blocks in FIG. 11 is merely an example embodiment, as the operations may be performed in other orders in accordance with the present invention. In block 1100, the host-fabric adapter 220 retrieves the message's destination virtual address 928 (in the local, or receiving, process' memory space), a memory handle 926 associated with the message's destination memory region, and an indication of how long the incoming message is. The virtual address 928 and memory handle 926 may be used to generate a protection domain (block 1102). The memory handle 926 may be used to generate the region identifier field 330 as discussed above.
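The two matching checks at the heart of both flows can be sketched compactly. In this hedged C fragment, the parameter names and the access_allowed() helper are hypothetical; the logic simply mirrors the 'yes'/'no' prongs of blocks 1008 and 1010 and their FIG. 11 analogs.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative check combining the protection domain and region identifier
 * comparisons; both must match for the transfer to proceed. */
static bool access_allowed(uint32_t process_domain, uint16_t process_region_id,
                           uint32_t tpt_domain, uint16_t tpt_region_id)
{
    if (tpt_domain != process_domain)
        return false;   /* 'no' prong of block 1008: memory protection fault */
    if (tpt_region_id != process_region_id)
        return false;   /* 'no' prong of block 1010: memory protection fault */
    return true;        /* both 'yes' prongs: data may be transferred */
}
```

Because both comparisons read from the same retrieved entry, a single access to the translation and protection table suffices for each validated translation, which is the advantage noted below.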
The protection domain is used to identify and retrieve translation information stored in a TPT entry that corresponds to a single page of registered memory on which the posted descriptor is located (blocks 1104 and 1106). If the retrieved protection domain matches the protection domain associated with the local (receiving) process ('yes' prong of block 1108), and if the retrieved region identifier field and the receiving process' region identifier field match ('yes' prong of block 1110), then the host-fabric adapter 220 copies (block 1112) the message into the local process' memory. If the retrieved protection domain and the receiving process' protection domain do not match ('no' prong of block 1108), or if the retrieved region identifier field and the receiving process' region identifier field 330 do not match ('no' prong of block 1110), then a memory protection fault is generated (blocks 1113 or 1114) and no data is transferred via the switched fabric 102. Write descriptors may be processed in an analogous fashion.

FIGS. 10 and 11 show one embodiment of using a protection domain and region identifier field to validate an access request. The order of operations shown in these figures is not limited by the disclosed order, as these operations may be performed in other orders.

One further advantage of the present invention is that each time an address is translated and its protection is checked, only one access to the translation and protection table is needed.

While there have been illustrated and described what are considered to be example embodiments of the present invention, it will be understood by those skilled in the art, and as technology develops, that various changes and modifications may be made, and equivalents may be substituted for elements thereof, without departing from the true scope of the present invention. For example, the present invention is applicable to all types of redundant type networks, including, but not limited to, Infiniband, Next Generation Input/Output (NGIO), ATM, SAN (system area network, or storage area network), server net, Future Input/Output (FIO), fibre channel, and Ethernet. In addition, the processes shown in FIGS. 10 and 11 may be performed by a computer processor executing instructions organized into a program module or a custom designed state machine. Storage devices suitable for tangibly embodying computer program instructions include all forms of non-volatile memory including, but not limited to: semiconductor memory devices such as EPROM, EEPROM, and flash devices; magnetic disks (fixed, floppy, and removable); other magnetic media such as tape; and optical media such as CD-ROM disks. Many modifications may be made to adapt the teachings of the present invention to a particular situation without departing from the scope thereof. Therefore, it is intended that the present invention not be limited to the various example embodiments disclosed, but that the present invention includes all embodiments falling within the scope of the appended claims. |
One particular example implementation of an apparatus for mitigating unauthorized access to data traffic comprises: an operating system stack to allocate unprotected kernel transfer buffers; a hypervisor to allocate protected memory data buffers, where data is to be stored in the protected memory data buffers before being copied to the unprotected kernel transfer buffers; and an encoder module to encrypt the data stored in the protected memory data buffers, where the unprotected kernel transfer buffers receive a copy of the encrypted data. |
WHAT IS CLAIMED IS: 1. An apparatus for mitigating unauthorized access to data traffic, comprising: an operating system stack to allocate unprotected kernel transfer buffers; a hypervisor to allocate protected memory data buffers, wherein data is to be stored in the protected memory data buffers before being copied to the unprotected kernel transfer buffers; and an encoder module to encrypt the data stored in the protected memory data buffers, wherein the unprotected kernel transfer buffers receive a copy of the encrypted data. 2. The apparatus of Claim 1, wherein the hypervisor is configured to protect the protected memory data buffers using extended page tables so that the protected memory data buffers are not accessible to unauthorized software. 3. The apparatus of Claim 2, wherein authorized filter drivers can access the protected memory data buffers. 4. The apparatus of Claim 2, the apparatus further comprising: an input/output memory management unit programmed by the hypervisor to control access to the protected memory data buffers, wherein the protected memory data buffers are not accessible by unauthorized user equipment. 5. The apparatus of Claim 1 or 4, wherein the unprotected kernel transfer buffers and the protected memory data buffers are to be allocated when an authorized application that will use the data is initialized. 6. The apparatus of Claim 1 or 4, wherein the hypervisor is configured to allocate secure memory mapped input/output (MMIO) regions, wherein addresses for the protected memory data buffers are to be stored in the secure MMIO regions. 7. The apparatus of Claim 6, wherein only authorized user equipment can access the secure MMIO regions. 8. The apparatus of Claim 1 or 4, wherein the data is at least one of video data and audio data, and wherein a policy is constructed to automatically control access to the data based on a location of the apparatus. 9. The apparatus of Claim 1 or 4, the apparatus further comprising: an input/output memory management unit to ensure that the data stored in the protected memory data buffers originated from an authorized source and was not subject to modifications or replay attacks by malware. 10. The apparatus of Claim 9, wherein the input/output memory management unit is configured to validate data integrity by verifying at least one cryptographic hash or at least one signature passed with the data from the authorized source. 11. The apparatus of Claim 1 or 4, wherein the unprotected kernel transfer buffers are used to copy the data to an application that requested the data. 12. The apparatus of Claim 11, wherein the application that requested the data is configured to decrypt the data. 13. The apparatus of Claim 1 or 4, wherein the protected memory data buffers are to be protected by the hypervisor. 14. At least one machine readable storage medium comprising instructions that, when executed, cause an apparatus to: allocate unprotected kernel transfer buffers; allocate protected memory data buffers, wherein data is stored in the protected memory data buffers before being copied to the unprotected kernel transfer buffers; encrypt the data stored in the protected memory data buffers; and copy the encrypted data to the unprotected kernel transfer buffers. 15. The medium of Claim 14, wherein the protected memory data buffers are protected by a hypervisor using extended page tables so that the protected memory data buffers are not accessible by unauthorized software. 16.
The medium of Claim 15, further comprising instructions to: control access to the protected memory data buffers using an input/output memory management unit programmed by the hypervisor, wherein the protected memory data buffers are not accessible by unauthorized user equipment. 17. The medium of Claim 14 or 16, wherein the unprotected kernel transfer buffers and the protected memory data buffers are allocated when an authorized application that will use the data is initialized. 18. The medium of Claim 14 or 16, further comprising instructions to: allocate secure memory mapped input/output (MMIO) regions. 19. The medium of Claim 18, wherein addresses for the protected memory data buffers are stored in the secure MMIO regions. 20. The medium of Claim 18, wherein only authorized user equipment can access the secure MMIO regions. 21. The medium of Claim 14 or 16, further comprising instructions to: ensure that the data stored in the protected memory data buffers originated from an authorized source and was not subject to modifications or replay attacks by malware. 22. The medium of Claim 21, further comprising instructions to: validate data integrity by verifying at least one cryptographic hash or at least one signature passed with the data from the authorized source. 23. The medium of Claim 14 or 16, wherein the unprotected kernel transfer buffers are used to copy the data to an application that requested the data and the application that requested the data decrypts the data. 24. A method for mitigating unauthorized access to data traffic, comprising: allocating unprotected kernel transfer buffers; allocating protected memory data buffers, wherein data is stored in the protected memory data buffers before being copied to the unprotected kernel transfer buffers; encrypting the data stored in the protected memory data buffers; copying the encrypted data to the unprotected kernel transfer buffers; and allocating secure memory mapped input/output (MMIO) regions, wherein addresses for the allocated protected memory data buffers are stored in the secure MMIO regions. 25. The method of Claim 24, further comprising: protecting the memory data buffers with a hypervisor using extended page tables so that the protected memory data buffers are not accessible to unauthorized software; and controlling access to the protected memory data buffers using an input/output memory management unit programmed by the hypervisor, wherein the protected memory data buffers are not accessible by unauthorized user equipment. |
MITIGATING UNAUTHORIZED ACCESS TO DATA TRAFFIC

CROSS-REFERENCE TO RELATED APPLICATION
This application claims the benefit of priority under 35 U.S.C. § 119(e) to U.S. Provisional Application Serial No. 61/697,497, "PREVENTING UNAUTHORIZED ACCESS TO AUDIO VIDEO STREAMS," filed September 6, 2012, and U.S. Non-Provisional Application Serial No. 13/863,168, "MITIGATING UNAUTHORIZED ACCESS TO DATA TRAFFIC," filed April 15, 2013, which are both hereby incorporated by reference in their entireties.

TECHNICAL FIELD
Embodiments described herein generally relate to mitigating unauthorized access to data traffic.

BACKGROUND
As electronic devices become more ubiquitous in the everyday lives of users, they are heavily relied upon to securely process and store information. Unfortunately, the risk of unauthorized access to electronic devices and information has increased with the proliferation of the electronic devices. Illegal access to computer system information can be obtained by exploiting various security flaws found in the electronic devices. A common flaw is the susceptibility to data theft either directly from software as it executes, or from the operating system (OS) or hardware on which the software is executing. Viruses, terminate-and-stay-resident programs (TSRs), rootkits, co-resident software, multi-threaded OS processes, Trojan horses, worms, hackers, spoof programs, keypress password capturers, macro recorders, sniffers, and the like can be effective at stealing data and can be generally classified as malware or rogue applications. Malware (or a rogue application) can steal data through the insertion of the malware as kernel filter drivers, thus attacking kernel transfer buffers and/or an OS stack, or alternatively, the malware may simply ask for a resource and store/stream the data to a designated back end server. Some malware can hook the kernel drivers and tap into the data flow. It would be advantageous if an electronic device could be better protected against rogue software or a rogue application.

BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
FIGURE 1 is a simplified block diagram illustrating an embodiment of an electronic device, in accordance with at least one embodiment of the present disclosure;
FIGURE 2 is a simplified block diagram illustrating an embodiment of an electronic device, in accordance with at least one embodiment of the present disclosure;
FIGURE 3 is a simplified block diagram illustrating an embodiment of an electronic device, in accordance with at least one embodiment of the present disclosure;
FIGURE 4 is a simplified block diagram illustrating an embodiment of an electronic device, in accordance with at least one embodiment of the present disclosure;
FIGURE 5 illustrates an example flowchart in accordance with at least one embodiment of the present disclosure;
FIGURE 6 illustrates an example flowchart in accordance with at least one embodiment of the present disclosure;
FIGURE 7 is a simplified block diagram associated with an example ARM ecosystem system on chip (SOC) of the present disclosure; and
FIGURE 8 is a simplified block diagram illustrating example logic that may be used to execute activities associated with the present disclosure.
DETAILED DESCRIPTION The following detailed description sets forth example embodiments of apparatuses, methods, and systems relating to mitigating unauthorized access to data traffic. Features, such as structure(s), function(s), and/or characteristic(s), for example, are described with reference to one embodiment as a matter of convenience; various embodiments may be implemented with any suitable one or more of the described features. FIGURE 1 is a simplified block diagram illustrating an embodiment of electronic device 10, in accordance with at least one embodiment. Electronic device 10 can include a host controller 14, a hypervisor/virtual machine manager 16, memory 18, drivers 20, an operating system (OS) stack 22, a data stream 24 (e.g., an audio/video data stream), a processor 26, a decoder module 34, and an application 36. OS stack 22 may include a video filter driver 30 and an audio filter driver 32 (other drivers may also be included but are not shown). Video filter driver 30 and audio filter driver 32 can each include an encoder module 28. Data (or data traffic) can flow from one or more (authorized) user equipment 12 to electronic device 10 and through an existing OS stack (e.g., OS stack 22). The existing OS stack can include OS-provided universal serial bus (USB) drivers and kernel services. These kernel services may provide a set of application program interfaces (APIs), which are used by filter drivers to insert functionality into a data path (a minimal sketch of this filter-insertion pattern appears after this overview). Unprotected kernel transfer buffers can be allocated by OS stack 22 and managed by streaming kernel services. Protected audio/video buffers can be managed by USB filter drivers inserted into the OS stack using APIs provided by the OS stack. The examples of FIGURE 1 are merely examples of an electronic configuration and do not limit the scope of the claims. For example, the number of electrical components may vary, the placement of the electrical components may vary, and/or the like. The terms "audio" and "video" are used for purposes of clarity and example only. While reference is made to an audio data stream, a video data stream, audio drivers, video drivers, audio filter drivers, video filter drivers, etc., the concepts described herein may be applied to other types of data traffic and drivers without departing from the scope and the broad teachings of the present disclosure. The term "data traffic" includes, but is not limited to, data that may flow from user equipment 12 through electronic device 10. Processor 26 can execute any type of instructions associated with the data to achieve the operations detailed herein in this Specification. In one example, processor 26 can transform an element or an article (e.g., data) from one state or thing to another state or thing. In another example, the activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software/computer instructions executed by the processor) and the elements identified herein can be some type of a programmable processor, programmable digital logic (e.g., a field programmable gate array [FPGA], an erasable programmable read only memory (EPROM), an electrically erasable programmable ROM (EEPROM)) or an ASIC that can include digital logic, software, code, electronic instructions, or any suitable combination thereof. 
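For illustration only, the following minimal C sketch models the filter-insertion pattern described above: a stage is spliced into a data path and transforms each buffer before forwarding it up the stack. All names (stage_t, filter_encrypt, top_stage) are invented for this sketch, the XOR transform is a placeholder rather than real encryption, and nothing here represents an actual OS kernel-streaming API.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef struct stage {
    void (*process)(struct stage *self, uint8_t *buf, size_t len);
    struct stage *next;                 /* next stage up the data path */
} stage_t;

static void forward(stage_t *self, uint8_t *buf, size_t len) {
    if (self->next) self->next->process(self->next, buf, len);
}

/* Inserted filter stage: transforms the payload before passing it
 * upward, standing in for the encoder module in a filter driver. */
static void filter_encrypt(stage_t *self, uint8_t *buf, size_t len) {
    for (size_t i = 0; i < len; i++) buf[i] ^= 0x5A;  /* placeholder cipher */
    forward(self, buf, len);
}

/* Top stage: models the application-facing driver. */
static void top_stage(stage_t *self, uint8_t *buf, size_t len) {
    (void)self; (void)buf;
    printf("delivered %zu bytes (payload opaque to snoopers)\n", len);
}

int main(void) {
    stage_t top    = { top_stage, NULL };
    stage_t filter = { filter_encrypt, &top };    /* spliced in below 'top' */

    uint8_t frame[] = "raw camera frame";
    filter.process(&filter, frame, sizeof frame); /* data enters at the bottom */
    return 0;
}
```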
Electronic device 10 is inclusive of devices such as a receiver, a computer, a desktop computer, a laptop computer, an Internet appliance, a set-top box, an Internet radio device (IRD), a cell phone, a smart phone, a tablet, a personal digital assistant (PDA), a game console, a mobile Internet device, a Google Android™, an iPhone™, an iPad™, or any other device, component, element, or object capable of voice, audio, video, media, or other data exchanges. User equipment 12 may be inclusive of a suitable interface to a human user, such as a microphone, camera, webcam, display, a keyboard, a touchpad, a remote control, a peripheral device, or other equipment capable of generating voice, audio, video, media, or other data exchanges. Multimedia content creation and consumption are on the rise across a wide variety of electronic devices, from cell phones to traditional laptops, and from desktop computers to backend cloud servers. Audio and/or video data is predominant in our daily lives, from cell phone calls and video chats with family and friends to creating/uploading photos and videos to social media sites. There have been an increasing number of attacks on this data, with recent examples including computer viruses/worms enabling a microphone on a device without a user's knowledge to listen in on private conversations, and spying on people by enabling/stealing the data from an integrated/discrete camera connected to an electronic device. Stealing this data is relatively easy, as most devices either run stock programming systems, which are easy to tap by the insertion of malware directly accessing kernel transfer buffers (e.g., via hooks on the driver in OS stack 22), or alternatively, the applications are able to simply ask for a resource (e.g., a microphone or webcam) and store/stream the data from the resource to a back end server of choice. Some OSs may provide application-level access control to the data (e.g., only a video chat application can connect to a camera device or webcam). However, even though this may prevent another application from accessing the data, malware can tap into the kernel transfer buffers and snoop on a private conversation. Other OSs may not even provide this high-level filtering mechanism, as they are open source and can easily be modified by anyone with sufficient knowledge. In an embodiment, electronic device 10 can be configured to protect data flows within the OS with appropriate obfuscation techniques, so even if malware is able to insert itself within the kernel transfer buffers, the malware will not be able to access the data payloads. Because the internal and external connection of choice for a majority of peripherals, such as a camera and/or microphone, is USB, the examples used herein include audio and video data. However, similar approaches could be applied for other types of data and connection types (e.g., PCI, MIPI, CSI, etc.) as well as other peripherals (keyboard, mouse, etc.). Device 10 can be configured to protect data buffers and payloads as the data is moved from user equipment 12 to a user level application (e.g., application 36) and possibly beyond. Device 10 can be configured to protect data buffers receiving data from user equipment 12, thus ensuring that only authorized buffers (e.g., protected transfer buffers 60) receive the data. 
Once the data is received, device 10 may encrypt the data and pass it through an existing OS stack, where the data can only be decrypted by an authorized application that has a correct cryptographic key to decode the data (e.g., decoder module 34). Additionally, device 10 may include data integrity or authenticity assertions with the data as the data is passed to a user level application (e.g., application 36) and possibly beyond. Data integrity or authenticity ensures that the data originated from the correct source and furthermore was not subject to any modifications or replay attacks by any malware in the data path. The recipient of the data is able to validate these data integrity assertions by verifying the cryptographic hashes or signatures passed with the data (a toy verification sketch follows this passage). Hence, even though malware may be able to access (e.g., hook, tap into, etc.) the OS stack or kernel transfer buffers and read the encrypted data, the malware will not get access to the data payload without having the correct credentials and cryptographic keys to decipher the data. Before the data is encrypted, the data may be stored in protected buffers that are only accessible by trusted applications. This allows for protection of the data buffers so they are not accessible by any unknown kernel/user code, and it protects the memory mapped input/output (MMIO) space designated for user equipment 12 so malware cannot modify/tamper with the memory interfacing with user equipment 12 and with the received data. In addition, a hypervisor may control access to the protected memory using an input/output memory management unit programmed by the hypervisor, where the protected memory is not accessible by unauthorized user equipment 12. Hence, the data payload remains obfuscated as the data flows through the unmodified OS stack and kernel transfer buffers until the data is released to an authorized application that has the correct cryptographic keys to decrypt the data. In one illustrative example, to stream audio and/or video data to a peer, an audio and/or video streaming application may rely on a number of OS services, which can extract data from a microphone and/or camera and forward the data to the audio and/or video streaming application. As part of this process, the audio and/or video streaming application may use OS services such as kernel streaming, USB video, USB port, etc., which provide a hierarchical set of services to connect to a USB device, negotiate different attributes such as data format, data speed, etc., and stream the data to the audio and/or video streaming application. The audio/video streaming application can initiate a request for the audio and/or video data to the appropriate OS services, which in turn can utilize video filter driver 30 and/or audio filter driver 32 to move the data up the OS stack and kernel transfer buffers. These drivers interface with USB port/hub drivers, which can in turn communicate with the appropriate user equipment 12 through host controller 14. Hypervisor/virtual machine manager 16 may map user equipment 12 MMIO regions in memory 18 and define different data structures used to interface with user equipment 12. Among other things, the data structures can include commands to user equipment 12, events received back from user equipment 12, and pointers to memory regions (transfer buffers) specified by upper layers of the system to handle data movement to and from user equipment 12. 
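The integrity and anti-replay validation described above can be pictured with a toy sketch. The additive tag below stands in for a real cryptographic MAC or signature, and the names (toy_tag, struct sealed) are invented; a real implementation would use an authenticated cipher, not this checksum.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Toy tag over the ciphertext, keyed and bound to a sequence number.
 * Stand-in for a cryptographic MAC/signature; NOT secure as written. */
static uint32_t toy_tag(const uint8_t *buf, size_t len, uint32_t key, uint32_t seq) {
    uint32_t t = key ^ seq;
    for (size_t i = 0; i < len; i++) t = t * 31 + buf[i];
    return t;
}

struct sealed { const uint8_t *ct; size_t len; uint32_t seq; uint32_t tag; };

/* Receiver side: reject stale sequence numbers (replay) and bad tags
 * (modification) before releasing the payload. */
static int verify(const struct sealed *m, uint32_t key, uint32_t *last_seq) {
    if (m->seq <= *last_seq) return 0;                           /* replayed */
    if (toy_tag(m->ct, m->len, key, m->seq) != m->tag) return 0; /* modified */
    *last_seq = m->seq;
    return 1;
}

int main(void) {
    uint32_t key = 0xC0FFEEu, last = 0;
    uint8_t ct[] = { 0x10, 0x22, 0x33 };          /* already-encrypted bytes */
    struct sealed m = { ct, sizeof ct, 1, 0 };
    m.tag = toy_tag(ct, sizeof ct, key, m.seq);

    printf("fresh buffer ok: %d\n", verify(&m, key, &last));  /* prints 1 */
    printf("replay rejected: %d\n", verify(&m, key, &last));  /* prints 0 */
    return 0;
}
```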
In the case of audio and/or video data, the audio and/or video data may be received from user equipment 12, but in other cases, such as USB storage, the data may flow bidirectionally. From a high level perspective, there can be two secure regions of the memory in use: a secure MMIO region used to interface with user equipment 12 and secure transfer buffers used for data movement. MMIO regions can contain mappings and pointers to the secure transfer buffers. Device 10 may use direct memory access (DMA) to the transfer buffers, and a port driver may manage the availability of these buffers to user equipment 12. Sometimes these buffers are also referred to as ring buffers, as they are typically recycled in a circular fashion (a minimal recycling sketch follows this passage). As data becomes available in a given transfer buffer, an interrupt is generated, which triggers an asynchronous event indicating the availability of data. Once the data is passed to application 36, the transfer buffers can be cycled back to the port driver and made available to user equipment 12 for the next batch of data. In this mode of operation, during an insertion attack, kernel (Ring 0) malware can tap into the kernel transfer buffers as a filter driver and steal or copy the data undetected. The nefarious activity can be achieved by silently copying the data and sending it somewhere else in addition to the originally intended recipient. Other variations are also possible, where the data may be modified, replayed, or replaced in some manner, but these variations essentially stem from the same principle of insertion in the data path. If insertion is not possible, a more advanced attack may be to reprogram the MMIO regions to point to alternative buffers controlled by the malware so the data is only received by the malware. Device 10 can be configured to protect against an insertion attack by inserting a filter driver below a video and/or audio driver (e.g., video filter driver 30 and/or audio filter driver 32) and above a port driver (e.g., host controller 14). Using the inserted filter driver, secure transfer buffers can be created and passed down to the port driver instead of the transfer buffers received from the higher layer drivers (which are held in a cache in the filter driver). In another example, the above data confidentiality/authenticity protections may be incorporated directly into the lowest level port drivers communicating directly with electronic device 10, or directly into electronic device 10. This allows core capabilities to move closer to the hardware interfacing with user equipment 12 and ultimately may be implemented either in user equipment 12 or in the hardware interfacing with user equipment 12 (e.g., USB host controller interface 14, etc.). Furthermore, using host controller 14, the system can protect the secure transfer buffers so they are only accessible to filter driver code by providing permissions on extended page table (EPT) structures associated with the secure transfer buffer memory regions and code selections in the drivers. Once data is received at the secure transfer buffers, the data may be encrypted in any manner while it is inaccessible to any other components. Once the data is encrypted, the data can be copied to the kernel transfer buffers, and the data may also be copied to a user-space (e.g., application memory) in the normal manner. If any malware tries to access the data, the malware will encounter encrypted text. 
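As a concrete and deliberately simplified illustration of the transfer-buffer recycling just described, the C sketch below cycles a small pool of slots between device ownership and driver ownership. All structure and function names are invented for the sketch, and the "interrupt" is modeled as a simple ownership flip.

```c
#include <stdio.h>
#include <string.h>

#define RING_SLOTS 4
#define SLOT_BYTES 64

enum owner { DEVICE, DRIVER };

struct slot { enum owner owner; size_t len; unsigned char data[SLOT_BYTES]; };

static struct slot ring[RING_SLOTS];
static unsigned head;                  /* next slot the device will fill */

/* Models DMA completion: the device fills a slot and "raises" an
 * interrupt by flipping ownership to the driver. */
static void device_fill(const void *src, size_t len) {
    struct slot *s = &ring[head];
    if (s->owner != DEVICE || len > SLOT_BYTES) return;  /* no slot free */
    memcpy(s->data, src, len);
    s->len = len;
    s->owner = DRIVER;                 /* data available event */
    head = (head + 1) % RING_SLOTS;
}

/* Driver consumes filled slots, then recycles them back to the device. */
static void driver_drain(void) {
    for (unsigned i = 0; i < RING_SLOTS; i++) {
        if (ring[i].owner == DRIVER) {
            printf("slot %u: %zu bytes consumed\n", i, ring[i].len);
            ring[i].owner = DEVICE;    /* cycled back for the next batch */
        }
    }
}

int main(void) {
    for (unsigned i = 0; i < RING_SLOTS; i++) ring[i].owner = DEVICE;
    device_fill("frame-0", 8);
    device_fill("frame-1", 8);
    driver_drain();
    return 0;
}
```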
Authorized applications (e.g., application 36), which have been provisioned with correct cryptographic keys (in decoder module 34), are able to decrypt the data. In an embodiment, decoder module 34 and application 36 may be secured such that the data is not accessible to any other malicious application. A second attack from malware may be to directly modify the MMIO space and make the appropriate MMIO device descriptors point to transfer buffers owned by the malware itself. Electronic device 10 can mitigate against this attack in a similar manner as described above by providing protections around the device MMIO regions, which point to the secure buffers used to transfer the data to/from user equipment 12. This includes identifying the data structures in user equipment 12 managed by host controller 14 and protecting these from modifications by any unauthorized components. Using EPT structures to mark these memory regions as read only except from a trusted code path, electronic device 10 can be configured to obtain notifications (events) on modifications of these regions and validate that the MMIO structures are not subverted before committing any changes to the data structures (a minimal validation sketch appears after the FIGURE 2 overview below). This allows electronic device 10 to essentially be a last barricade to modifying these regions and allows for a sanity check to deny any undesirable changes to the different structures used to map the transfer buffers receiving data. FIGURE 2 is a simplified block diagram illustrating an embodiment of electronic device 10, in accordance with at least one example embodiment. Electronic device 10 can include host controller 14, hypervisor/virtual machine manager 16, memory 18, drivers 20, OS stack 22, processor 26, decoder module 34, application 36, a port/hub driver 48, input/output memory management unit 80 (I/O MMU), and a trusted platform module 88. Hypervisor/virtual machine manager 16 can include an extended page table 40. Extended page table 40 may be a data structure used by a virtual memory system to store the mapping between virtual addresses and physical addresses. Memory 18 can include protected memory 42, hardware registers 46, kernel transfer buffers 54, and application memory 56 (e.g., userspace). Protected memory 42 can include protected MMIO regions 58 and protected transfer buffers 60. Drivers 20 can include a video driver 50 and an audio driver 52. Video filter driver 30 and audio filter driver 32 can each include an encoder module 28. I/O MMU 80 can be programmed by hypervisor/virtual machine manager 16 to control access to protected MMIO regions 58 such that protected MMIO regions 58 and protected transfer buffers 60 are not accessible by unauthorized user equipment 12. Kernel transfer buffers 54 are configured to transfer data to application 36. During boot up of electronic device 10 (or when user equipment is connected to electronic device 10 or electronic device 10 wakes up), trusted platform module 88 may be used to provide assertions that hypervisor/virtual machine manager 16 and the host controller software booted in a secure manner and were not affected (i.e., replaced or undermined) by any malware entity. Hypervisor/virtual machine manager 16 can create or allocate protected memory 42. Hardware registers 46 may access protected MMIO regions 58 to determine the address of protected transfer buffers 60 for host controller 14. As data is received at host controller 14 from user equipment 12, the data may be sent to protected transfer buffers 60 in protected memory 42. 
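The read-only-plus-trap validation of MMIO descriptors described above can be sketched as follows. The address range, descriptor layout, and function names are all invented for illustration; a real system would perform this check in the hypervisor's EPT violation handler before committing the write.

```c
#include <stdint.h>
#include <stdio.h>

/* Invented protected transfer-buffer window for the sketch. */
#define PROT_BASE 0x100000u
#define PROT_SIZE 0x010000u

struct mmio_desc { uint32_t buf_addr; uint32_t buf_len; };

static struct mmio_desc desc;   /* stands in for a protected MMIO region */

/* Called from the (modeled) write trap: commit a descriptor update only
 * if the new pointer still targets the protected buffer window. */
static int validate_and_commit(uint32_t addr, uint32_t len) {
    if (addr < PROT_BASE || addr + len > PROT_BASE + PROT_SIZE) {
        printf("denied: 0x%x points outside protected buffers\n", (unsigned)addr);
        return 0;                       /* malware-supplied pointer rejected */
    }
    desc.buf_addr = addr;
    desc.buf_len  = len;
    return 1;
}

int main(void) {
    validate_and_commit(PROT_BASE + 0x200, 0x1000);  /* legitimate update */
    validate_and_commit(0x00DEAD00u, 0x1000);        /* redirect attack: denied */
    return 0;
}
```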
Video filter driver 30 and/or audio filter driver 32 can access the data in protected transfer buffers 60, and the data may be encrypted or encoded using encoder module 28. Once the data is encrypted, it can pass to drivers 20 and through unprotected memory data buffers (e.g., kernel transfer buffers 54). From there, the data enters into kernel streaming (e.g., data stream 24) and is sent to decoder module 34, where the data is decrypted or decoded and sent to application 36. FIGURE 3 is a simplified block diagram illustrating an embodiment of electronic device 10, in accordance with at least one example embodiment. Electronic device 10 can include host controller 14, decoder module 34, application 36, video driver 50, kernel transfer buffers 54, a host controller driver 62, a hub driver 64, a common class generic parent driver 66, kernel streaming proxy 68, and frame buffers 70. Host controller driver 62, hub driver 64, and common class generic parent driver 66 operate or control user equipment 12 and host controller 14. Kernel streaming proxy 68 represents kernel streaming filters of kernel streaming mini drivers by assuming the characteristics of those kernel streaming filters. Kernel streaming proxy 68 then sends control down to kernel transfer buffers 54 while reflecting events from application 36. Kernel streaming proxy 68 may also let applications control and retrieve information from kernel streaming objects. Frame buffers 70 can buffer video (and audio) frames used by application 36. FIGURE 4 is a simplified block diagram illustrating an embodiment of electronic device 10, in accordance with at least one example embodiment. Electronic device 10 can include host controller 14, application 36, a rogue application 84, and an application control module 86. In an example, electronic device 10 may be infected with rogue application 84. Rogue application 84 may be malware (or malicious or malevolent software) used or created to disrupt computer operation, gather sensitive information, or gain access to private computer systems. Application control module 86 can be configured to control application 36. In this example, application control module 86 has its data and code protected by a hypervisor/virtual machine manager (e.g., hypervisor/virtual machine manager 16) to ensure that malware is not able to circumvent any policy received from policy server 72. Electronic device 10 can be in communication with policy server 72 through network 74. Using network 74, policy server 72 can provide (in an enterprise environment, for example) access control policies for electronic device 10. For example, based on the location of electronic device 10, policy server 72 may issue a policy to electronic device 10 that nothing is allowed access to data from user equipment 12 (e.g., the data is encrypted such that no application can access the data). In a specific example, electronic device 10 may be located in a secure area and user equipment 12 may be a camera or microphone. If user equipment 12 is generating video and/or audio data, the data is not accessible to any application. Such a policy can prevent rogue application 84 from obtaining video and/or audio data from user equipment 12. In another example, policy server 72 may issue a policy to electronic device 10 that only authorized applications can access the data generated by user equipment 12 (a minimal policy-check sketch follows this passage). 
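For illustration, the following minimal C sketch models the policy check described for FIGURE 4. The policy values, the authorization flag, and the function names are invented stand-ins for what policy server 72 would actually provision.

```c
#include <stdio.h>

enum policy { ALLOW_AUTHORIZED, DENY_ALL };

struct app { const char *name; int authorized; };

/* Evaluate whether an application may access the device's data stream
 * under the currently provisioned policy. */
static int may_access_stream(enum policy p, const struct app *a) {
    switch (p) {
    case DENY_ALL:         return 0;              /* e.g., device in a secure area */
    case ALLOW_AUTHORIZED: return a->authorized;  /* only provisioned applications */
    }
    return 0;
}

int main(void) {
    struct app chat  = { "video-chat", 1 };
    struct app rogue = { "rogue-app",  0 };

    enum policy p = ALLOW_AUTHORIZED;   /* as pushed by the policy server */
    printf("%s: %d\n", chat.name,  may_access_stream(p, &chat));   /* 1 */
    printf("%s: %d\n", rogue.name, may_access_stream(p, &rogue));  /* 0 */
    return 0;
}
```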
In a specific example, user equipment 12 may be a camera, and if user equipment 12 is generating video and/or audio data, only an authorized application (such as an application for a video chat) can access the data. Network 74 represents a series of points or nodes of interconnected communication paths for receiving and transmitting packets of information. Network 74 offers a communicative interface between electronic device 10 and policy server 72, and may be any local area network (LAN), wireless local area network (WLAN), metropolitan area network (MAN), wide area network (WAN), virtual private network (VPN), Intranet, Extranet, or any other appropriate architecture or system that facilitates network communications in a network environment. Network communications, which can be inclusive of packets, frames, signals, data, etc., can be sent and received according to any suitable communication messaging protocols. Suitable communication messaging protocols can include a multi-layered scheme such as the Open Systems Interconnection (OSI) model, or any derivations or variants thereof (e.g., Transmission Control Protocol/Internet Protocol (TCP/IP), user datagram protocol/IP (UDP/IP)). The term 'data' as used herein refers to any type of binary, numeric, voice, video, textual, or script data, or any type of source or object code, or any other suitable information in any appropriate format that may be communicated from one point to another in computing devices (e.g., electronic devices) and/or networks. FIGURE 5 is a simplified flowchart 500 illustrating example activities of mitigating unauthorized access to audio and video traffic. At 502, an application (e.g., application 36) initializes, and unprotected memory data buffers (e.g., kernel transfer buffers 54) are allocated. These unprotected memory data buffers may be part of kernel transfer buffers. At 504, protected memory data buffers (e.g., protected transfer buffers 60 in protected memory 42) are allocated. At 506, memory for DMA is allocated in the unprotected memory data buffers for OS kernel services. At 508, the addresses for the unprotected memory data buffers are stored in the allocated protected memory data buffers. At 510, protected MMIO regions (e.g., protected MMIO regions 58) are allocated. At 512, the addresses for the protected memory data buffers are stored in the allocated MMIO regions. FIGURE 6 is a simplified flowchart 600 illustrating additional example activities of mitigating unauthorized access to audio and video traffic. At 602, an application (e.g., application 36) initializes, and user equipment (e.g., user equipment 12) for the application is initialized. At 604, the user equipment accesses protected MMIO regions (e.g., protected MMIO regions 58) to determine the address for protected memory data buffers (e.g., protected transfer buffers 60). At 606, the user equipment generates data and sends the data to the protected memory data buffers. At 608, the data in the protected memory data buffers is encoded/encrypted. For example, video data may be encoded/encrypted using encoder module 28 in video filter driver 30, and audio data may be encoded/encrypted using encoder module 28 in audio filter driver 32. At 610, the encoded/encrypted data is sent to unprotected memory data buffers (e.g., kernel transfer buffers 54) and copied to a decoder (e.g., decoder module 34). At 612, the decoder decodes the encoded/encrypted data and sends the data to the application, as traced in miniature in the sketch below. 
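Purely as an illustration of flowcharts 500 and 600, the sketch below walks the same sequence in miniature: receive into a protected buffer, encrypt in place, copy the ciphertext to the unprotected kernel buffer, and decrypt only at the authorized application. The XOR cipher is a placeholder for real encryption, and the buffer names are invented.

```c
#include <stdio.h>
#include <string.h>

#define BUF 32

static unsigned char protected_buf[BUF];   /* step 504: protected data buffer  */
static unsigned char kernel_buf[BUF];      /* step 502: unprotected kernel buf */
static const unsigned char key = 0x5A;     /* placeholder key, not real crypto */

static void xor_buf(unsigned char *b, size_t n) {
    for (size_t i = 0; i < n; i++) b[i] ^= key;
}

int main(void) {
    const char *capture = "mic samples";
    size_t n = strlen(capture) + 1;

    memcpy(protected_buf, capture, n);     /* 606: device writes protected buffer */
    xor_buf(protected_buf, n);             /* 608: encrypt while still protected  */
    memcpy(kernel_buf, protected_buf, n);  /* 610: copy ciphertext to kernel buf  */

    xor_buf(kernel_buf, n);                /* 612: authorized app decrypts        */
    printf("app sees: %s\n", (char *)kernel_buf);
    return 0;
}
```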
FIGURE 7 is a simplified block diagram associated with an example ARM ecosystem SOC 700 of the present disclosure. In at least one embodiment, electronic device 10, shown and described herein, may be configured in the same or similar manner as exemplary ARM ecosystem SOC 700. At least one example implementation of the present disclosure can include an integration of the feature of mitigating unauthorized access to data traffic and an ARM component. For example, the example of FIGURE 7 can be associated with any ARM core (e.g., A-9, A-15, etc.). Further, the architecture can be part of any type of tablet, smartphone (inclusive of Android™ phones, iPhones™), iPad™, Google Nexus™, Microsoft Surface™, personal computer, server, video processing components, desktop computer, laptop computer (inclusive of any type of notebook), Ultrabook™ system, any type of touch-enabled input device, etc. In this example of FIGURE 7, ARM ecosystem SOC 700 may include multiple cores 706-707, an L2 cache control 708, a bus interface unit 709, an L2 cache 710, a graphics processing unit (GPU) 715, an interconnect 702, a video codec 720, and a liquid crystal display (LCD) I/F 725, which may be associated with mobile industry processor interface (MIPI)/high definition multimedia interface (HDMI) links that couple to an LCD. ARM ecosystem SOC 700 may also include a subscriber identity module (SIM) I/F 730, a boot read-only memory (ROM) 735, a synchronous dynamic random access memory (SDRAM) controller 740, a flash controller 745, a serial peripheral interface (SPI) master 750, a suitable power control 755, a dynamic RAM (DRAM) 760, and flash 765. In addition, one or more example embodiments can include one or more communication capabilities, interfaces, and features such as instances of Bluetooth™ 770, a 3G modem 775 (or 4G/5G/nG), a global positioning system (GPS) 780, and an 802.11 WiFi 785. In operation, the example of FIGURE 7 can offer processing capabilities, along with data protection, to enable computing of various types (e.g., mobile computing, high-end digital home, servers, wireless infrastructure, etc.). In addition, such an architecture can enable any number of software applications (e.g., Android™, Adobe® Flash® Player, Java Platform Standard Edition (Java SE), JavaFX, Linux, Microsoft Windows Embedded, Symbian, and Ubuntu, etc.). In at least one embodiment, the core processor may implement an out-of-order superscalar pipeline with a coupled low-latency level-2 cache. FIGURE 8 is a simplified block diagram illustrating potential electronics and logic that may be associated with any of the mitigation operations discussed herein. In at least one example embodiment, system 800 can include a touch controller 802, one or more processors 804, system control logic 806 coupled to at least one of processor(s) 804, system memory 808 coupled to system control logic 806, non-volatile memory and/or storage device(s) 832 coupled to system control logic 806, display controller 812 coupled to system control logic 806 and to a display device 810, power management controller 818 coupled to system control logic 806, and/or communication interfaces 816 coupled to system control logic 806. System control logic 806, in at least one embodiment, can include any suitable interface controllers to provide for any suitable interface to at least one processor 804 and/or to any suitable device or component in communication with system control logic 806. 
System control logic 806, in at least one embodiment, can include one or more memory controllers to provide an interface to system memory 808. System memory 808 may be used to load and store data and/or instructions, for example, for system 800. System memory 808, in at least one embodiment, can include any suitable volatile memory, such as suitable dynamic random access memory (DRAM), for example. System control logic 806, in at least one embodiment, can include one or more I/O controllers to provide an interface to display device 810, touch controller 802, and non-volatile memory and/or storage device(s) 832. Non-volatile memory and/or storage device(s) 832 may be used to store data and/or instructions, for example within software 828. Non-volatile memory and/or storage device(s) 832 may include any suitable non-volatile memory, such as flash memory for example, and/or may include any suitable non-volatile storage device(s), such as one or more hard disc drives (HDDs), one or more compact disc (CD) drives, and/or one or more digital versatile disc (DVD) drives, for example. Power management controller 818 may include power management logic 830 configured to control various power management and/or power saving functions. In at least one example embodiment, power management controller 818 is configured to reduce the power consumption of components or devices of system 800 that may either be operated at reduced power or turned off when the electronic device is in a closed configuration. For example, in at least one embodiment, when the electronic device is in a closed configuration, power management controller 818 performs one or more of the following: powers down the unused portion of the display and/or any backlight associated therewith; allows one or more of processor(s) 804 to go to a lower power state if less computing power is required in the closed configuration; and shuts down any devices and/or components that are unused when an electronic device is in the closed configuration. Communications interface(s) 816 may provide an interface for system 800 to communicate over one or more networks and/or with any other suitable device. Communications interface(s) 816 may include any suitable hardware and/or firmware. Communications interface(s) 816, in at least one example embodiment, may include, for example, a network adapter, a wireless network adapter, a telephone modem, and/or a wireless modem. System control logic 806, in at least one embodiment, can include one or more I/O controllers to provide an interface to any suitable input/output device(s) such as, for example, an audio device to help convert sound into corresponding digital signals and/or to help convert digital signals into corresponding sound, a camera, a camcorder, a printer, and/or a scanner. For at least one embodiment, at least one processor 804 may be packaged together with logic for one or more controllers of system control logic 806. In at least one embodiment, at least one processor 804 may be packaged together with logic for one or more controllers of system control logic 806 to form a System in Package (SiP). In at least one embodiment, at least one processor 804 may be integrated on the same die with logic for one or more controllers of system control logic 806. For at least one embodiment, at least one processor 804 may be integrated on the same die with logic for one or more controllers of system control logic 806 to form a System on Chip (SoC). 
For touch control, touch controller 802 may include touch sensor interface circuitry 822 and touch control logic 824. Touch sensor interface circuitry 822 may be coupled to detect touch input over a first touch surface layer and a second touch surface layer of a display (i.e., display device 810). Touch sensor interface circuitry 822 may include any suitable circuitry that may depend, for example, at least in part on the touch-sensitive technology used for a touch input device. Touch sensor interface circuitry 822, in one embodiment, may support any suitable multi-touch technology. Touch sensor interface circuitry 822, in at least one embodiment, can include any suitable circuitry to convert analog signals corresponding to a first touch surface layer and a second touch surface layer into any suitable digital touch input data. Suitable digital touch input data for at least one embodiment may include, for example, touch location or coordinate data. Touch control logic 824 may be coupled to help control touch sensor interface circuitry 822 in any suitable manner to detect touch input over a first touch surface layer and a second touch surface layer. Touch control logic 824 for at least one example embodiment may also be coupled to output, in any suitable manner, digital touch input data corresponding to touch input detected by touch sensor interface circuitry 822. Touch control logic 824 may be implemented using any suitable logic, including any suitable hardware, firmware, and/or software logic (e.g., non-transitory tangible media), that may depend, for example, at least in part on the circuitry used for touch sensor interface circuitry 822. Touch control logic 824 for at least one embodiment may support any suitable multi-touch technology. Touch control logic 824 may be coupled to output digital touch input data to system control logic 806 and/or at least one processor 804 for processing. At least one processor 804 for at least one embodiment may execute any suitable software to process digital touch input data output from touch control logic 824. Suitable software may include, for example, any suitable driver software and/or any suitable application software. As illustrated in FIGURE 8, system memory 808 and/or non-volatile memory and/or storage device(s) 832 may store suitable software 826. Note that in some example implementations, the functions outlined herein may be implemented in conjunction with logic that is encoded in one or more tangible machine-readable storage media (e.g., embedded logic provided in an application-specific integrated circuit (ASIC), in digital signal processor (DSP) instructions, software (potentially inclusive of object code and source code) to be executed by a processor, or other similar machine, etc.), which may be non-transitory. In some of these instances, memory elements can store data used for the operations described herein. This can include the memory elements being able to store software, logic, code, or processor instructions that are executed to carry out the activities described herein. A processor can execute any type of instructions associated with the data to achieve the operations detailed herein. In one example, the processors could transform an element or an article (e.g., data) from one state or thing to another state or thing. 
In another example, the activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software/computer instructions executed by a processor) and the elements identified herein could be some type of a programmable processor, programmable digital logic (e.g., a field programmable gate array (FPGA), a DSP, an erasable programmable read only memory (EPROM), electrically erasable programmable read-only memory (EEPROM)) or an ASIC that can include digital logic, software, code, electronic instructions, or any suitable combination thereof. The embodiments of methods, hardware, software, firmware, or code set forth above may be implemented via instructions or code stored on at least one machine-accessible, machine-readable, computer-accessible, or computer-readable medium, which is executable by a processing element. A machine-accessible/readable medium includes any transitory or non-transitory mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine, such as a computer or electronic system. For example, a non-transitory machine-accessible medium includes random-access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; magnetic or optical storage media; flash memory devices; electrical storage devices; optical storage devices; acoustical storage devices; and other forms of storage devices for holding information received from transitory (propagated) signals (e.g., carrier waves, infrared signals, digital signals), which are to be distinguished from the non-transitory media that may receive information therefrom. Note that with the examples provided above, as well as numerous other examples provided herein, interaction may be described in terms of layers, protocols, interfaces, spaces, and environments more generally. However, this has been done for purposes of clarity and example only. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by only referencing a limited number of components. It should be appreciated that the architectures discussed herein (and their teachings) are readily scalable and can accommodate a large number of components, as well as more complicated/sophisticated arrangements and configurations. Accordingly, the examples provided should not limit the scope or inhibit the broad teachings of the present disclosure, as potentially applied to a myriad of other architectures. It is also important to note that the blocks in the flow diagrams illustrate only some of the possible signaling scenarios and patterns that may be executed by, or within, the circuits discussed herein. Some of these blocks may be deleted or removed where appropriate, or these operations or activities may be modified or changed considerably without departing from the scope of the teachings provided herein. In addition, a number of these operations have been described as being executed concurrently with, or in parallel to, one or more additional operations. However, the timing of these operations may be altered considerably. The preceding operational flows have been offered for purposes of example and discussion. Substantial flexibility is provided by the present disclosure in that any suitable arrangements, chronologies, configurations, and timing mechanisms may be provided without departing from the teachings provided herein. 
It is also imperative to note that all of the specifications, protocols, and relationships outlined herein (e.g., specific commands, timing intervals, supporting ancillary components, etc.) have been offered for purposes of example and teaching only. Each of these may be varied considerably without departing from the spirit of the present disclosure or the scope of the appended claims. The specifications apply to many varying and non-limiting examples and, accordingly, they should be construed as such. In the foregoing description, example embodiments have been described. Various modifications and changes may be made to such embodiments without departing from the scope of the appended claims. The description and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims. In order to assist the United States Patent and Trademark Office (USPTO) and, additionally, any readers of any patent issued on this application in interpreting the claims appended hereto, Applicant wishes to note that the Applicant: (a) does not intend any of the appended claims to invoke paragraph six (6) of 35 U.S.C. section 112 as it exists on the date of the filing hereof unless the words "means for" or "step for" are specifically used in the particular claims; and (b) does not intend, by any statement in the Specification, to limit this disclosure in any way that is not otherwise reflected in the appended claims. OTHER NOTES AND EXAMPLES Example A1 is an apparatus for mitigating unauthorized access to data traffic, comprising: an operating system stack to allocate unprotected kernel transfer buffers; a hypervisor to allocate protected memory data buffers, where data is to be stored in the protected memory data buffers before being copied to the unprotected kernel transfer buffers; and an encoder module to encrypt the data stored in the protected memory data buffers, where the unprotected kernel transfer buffers receive a copy of the encrypted data. In Example A2, the subject matter of Example A1 can optionally include where the hypervisor is configured to protect the protected memory data buffers using extended page tables so that the protected memory data buffers are not accessible to unauthorized software. In Example A3, the subject matter of any one of Examples A1-A2 can optionally include where authorized filter drivers can access the protected memory data buffers. In Example A4, the subject matter of any one of Examples A2-A3 can optionally include an input/output memory management unit programmed by the hypervisor to control access to the protected memory data buffers, where the protected memory data buffers are not accessible by unauthorized user equipment. In Example A5, the subject matter of Example A1 can optionally include where the unprotected kernel transfer buffers and the protected memory data buffers are to be allocated when an application that will use the data is initialized. 
In Example A6, the subject matter of any one of Examples A1-A5 can optionally include where the hypervisor is configured to allocate secure memory mapped input/output (MMIO) regions, where addresses for the protected memory data buffers are to be stored in the secure MMIO regions. In Example A7, the subject matter of Example A6 can optionally include where only authorized user equipment can access the secure MMIO regions. In Example A8, the subject matter of any one of Examples A1-A7 can optionally include where the data is at least one of video data and audio data, and where a policy is constructed to automatically control access to the data based on a location of the apparatus. In Example A9, the subject matter of any one of Examples A1-A8 can optionally include an input/output memory management unit to ensure that the data stored in the protected memory data buffers originated from an authorized source and was not subject to modifications or replay attacks by malware. In Example A10, the subject matter of Example A9 can optionally include where the input/output memory management unit is configured to validate data integrity by verifying at least one cryptographic hash or at least one signature passed with the data from the authorized source. In Example A11, the subject matter of any one of Examples A1-A10 can optionally include where the unprotected kernel transfer buffers are used to copy the data to an application that requested the data. In Example A12, the subject matter of any one of Examples A1-A11 can optionally include where the application that requested the data is configured to decrypt the data. In Example A13, the subject matter of any one of Examples A6-A12 can optionally include where the protected memory data buffers are protected by the hypervisor. Example C1 is at least one machine-readable storage medium having instructions stored thereon for mitigating unauthorized access to data traffic, the instructions, when executed by a processor, causing the processor to: allocate unprotected kernel transfer buffers; allocate protected memory data buffers, where data is stored in the protected memory data buffers before being copied to the unprotected kernel transfer buffers; encrypt the data stored in the protected memory data buffers; and copy the encrypted data to the unprotected kernel transfer buffers. In Example C2, the subject matter of Example C1 can optionally include where the protected memory data buffers are protected by a hypervisor using extended page tables so that the protected memory data buffers are not accessible to unauthorized software. In Example C3, the subject matter of any one of Examples C1-C2 can optionally include where authorized filter drivers can access the protected memory data buffers. In Example C4, the subject matter of any one of Examples C2-C3 can optionally include where the instructions, when executed by the processor, further cause the processor to control access to the protected memory data buffers using an input/output memory management unit programmed by the hypervisor, where the protected memory data buffers are not accessible by unauthorized user equipment. In Example C5, the subject matter of Example C1 can optionally include where the unprotected kernel transfer buffers and the protected memory data buffers are allocated when an application that will use the data is initialized. 
In Example C6, the subject matter of any one of Examples C1-C5 can optionally include where the instructions, when executed by the processor, further cause the processor to allocate secure memory mapped input/output (MMIO) regions, where addresses for the protected memory data buffers are stored in the secure MMIO regions. In Example C7, the subject matter of Example C6 can optionally include where only authorized user equipment can access the secure MMIO regions. In Example C8, the subject matter of any one of Examples C1-C7 can optionally include where the data is at least one of video data and audio data, and where a policy is constructed to automatically control access to the data based on a location of the at least one machine readable storage medium. In Example C9, the subject matter of any one of Examples C1-C8 can optionally include where the instructions, when executed by the processor, further cause the processor to ensure that the data stored in the protected memory data buffers originated from an authorized source and was not subject to modifications or replay attacks by malware. In Example C10, the subject matter of Example C9 can optionally include where the instructions, when executed by the processor, further cause the processor to validate data integrity by verifying at least one cryptographic hash or at least one signature passed with the data from the authorized source. In Example C11, the subject matter of any one of Examples C1-C10 can optionally include where the unprotected kernel transfer buffers are used to copy the data to an application that requested the data. In Example C12, the subject matter of any one of Examples C1-C11 can optionally include where the application that requested the data decrypts the data. In Example C13, the subject matter of any one of Examples C6-C12 can optionally include where the protected memory data buffers are protected by a hypervisor. Example M1 is a method for mitigating unauthorized access to data traffic, comprising: allocating unprotected kernel transfer buffers; allocating protected memory data buffers, where data is stored in the protected memory data buffers before being copied to the unprotected kernel transfer buffers; encrypting the data stored in the protected memory data buffers; and copying the encrypted data to the unprotected kernel transfer buffers. In Example M2, the subject matter of Example M1 can optionally include where the protected memory data buffers are protected by a hypervisor using extended page tables so that the protected memory data buffers are not accessible to unauthorized software. In Example M3, the subject matter of any one of Examples M1-M2 can optionally include where authorized filter drivers can access the protected memory data buffers. In Example M4, the subject matter of any one of Examples M2-M3 can optionally include controlling access to the protected memory data buffers using an input/output memory management unit programmed by the hypervisor, where the protected memory data buffers are not accessible by unauthorized user equipment. In Example M5, the subject matter of Example M1 can optionally include where the unprotected kernel transfer buffers and the protected memory data buffers are allocated when an application that will use the data is initialized. 
In Example M6, the subject matter of any one of Examples M1-M5 can optionally include allocating secure memory mapped input/output (MMIO) regions, where addresses for the protected memory data buffers are stored in the secure MMIO regions. In Example M7, the subject matter of Example M6 can optionally include where only authorized user equipment can access the secure MMIO regions. In Example M8, the subject matter of any one of Examples M1-M7 can optionally include where the data is at least one of video data and audio data, and where a policy is constructed to automatically control access to the data based on a location of the protected memory data buffers. In Example M9, the subject matter of any one of Examples M1-M8 can optionally include ensuring that the data stored in the protected memory data buffers originated from an authorized source and was not subject to modifications or replay attacks by malware. In Example M10, the subject matter of Example M9 can optionally include validating data integrity by verifying at least one cryptographic hash or at least one signature passed with the data from the authorized source. In Example M11, the subject matter of any one of Examples M1-M10 can optionally include where the unprotected kernel transfer buffers are used to copy the data to an application that requested the data. In Example M12, the subject matter of any one of Examples M1-M11 can optionally include where the application that requested the data decrypts the data. In Example M13, the subject matter of any one of Examples M6-M12 can optionally include where the protected memory data buffers are protected by a hypervisor. Example E1 is an apparatus for mitigating unauthorized access to data traffic, comprising means for: allocating unprotected kernel transfer buffers; allocating protected memory data buffers, where data is stored in the protected memory data buffers before being copied to the unprotected kernel transfer buffers; encrypting the data stored in the protected memory data buffers; and copying the encrypted data to the unprotected kernel transfer buffers. In Example E2, the subject matter of Example E1 can optionally include where the protected memory data buffers are protected by a hypervisor using extended page tables so that the protected memory data buffers are not accessible to unauthorized software. In Example E3, the subject matter of any one of Examples E1-E2 can optionally include where authorized filter drivers can access the protected memory data buffers. In Example E4, the subject matter of any one of Examples E2-E3 can optionally include further means for controlling access to the protected memory data buffers using an input/output memory management unit programmed by the hypervisor, where the protected memory data buffers are not accessible by unauthorized user equipment. In Example E5, the subject matter of Example E1 can optionally include where the unprotected kernel transfer buffers and the protected memory data buffers are allocated when an application that will use the data is initialized. In Example E6, the subject matter of any one of Examples E1-E5 can optionally include further means for allocating secure memory mapped input/output (MMIO) regions, where addresses for the protected memory data buffers are stored in the secure MMIO regions. In Example E7, the subject matter of Example E6 can optionally include where only authorized user equipment can access the secure MMIO regions. 
In Example E8, the subject matter of any one of Examples E1-E7 can optionally include where the data is at least one of video data and audio data, and where a policy is constructed to automatically control access to the data based on a location of the apparatus. In Example E9, the subject matter of any one of Examples E1-E8 can optionally include further means for ensuring that the data stored in the protected memory data buffers originated from an authorized source and was not subject to modifications or replay attacks by malware. In Example E10, the subject matter of Example E9 can optionally include further means for validating data integrity by verifying at least one cryptographic hash or at least one signature passed with the data from the authorized source. In Example E11, the subject matter of any one of Examples E1-E10 can optionally include where the unprotected kernel transfer buffers are used to copy the data to an application that requested the data. In Example E12, the subject matter of any one of Examples E1-E11 can optionally include where the application that requested the data decrypts the data. In Example E13, the subject matter of any one of Examples E6-E12 can optionally include where the protected memory data buffers are protected by a hypervisor. Example X1 is a machine-readable storage medium including machine-readable instructions that, when executed, implement a method or realize an apparatus as in any one of Examples A1-A13 and M1-M13. Example Y1 is an apparatus comprising means for performing any of the Example methods M1-M13. In Example Y2, the subject matter of Example Y1 can optionally include the means for performing the method comprising a processor and a memory. In Example Y3, the subject matter of Example Y2 can optionally include the memory comprising machine-readable instructions that, when executed, cause the apparatus to perform any of the Example methods M1-M13. In Example Y4, the subject matter of any one of Examples Y1-Y3 can optionally include the apparatus being a mobile device or a computing system. |
In a processor, there are situations where instructions and some parts of a program may reside in a data cache prior to execution of the program. Hardware and software techniques are provided for fetching an instruction from the data cache after a miss in an instruction cache, to improve the processor's performance. If an instruction is not present in the instruction cache, an instruction fetch address is sent as a data fetch address to the data cache. If there is valid data present in the data cache at the supplied instruction fetch address, the data actually is an instruction, and the data cache entry is fetched and supplied as an instruction to the processor complex. An additional bit may be included in an instruction page table to indicate, on a miss in the instruction cache, that the data cache should be checked for the instruction. |
1. A method of finding an instruction in a data cache that is separate from an instruction cache, the method comprising: determining that a fetch attempt at an instruction fetch address in the instruction cache for the instruction was not successful; determining that a check data cache attribute has been set to an active state in a page table entry associated with the instruction fetch address; selecting the instruction fetch address as a data fetch address in response to the check data cache attribute being in an active state; making a fetch attempt in the data cache for the instruction at the selected data fetch address; and setting an information present indication to an active state if the instruction was found in the data cache in response to the fetch attempt in the data cache.
2. The method of claim 1 further comprising: setting a check data cache attribute active in the associated page table entry when generating instructions that are stored as data in the data cache.
3. The method of claim 1 further comprising: generating data by a program whereby the data is to be used as instructions; and requesting by the program an operating system to set the check data cache attribute active in at least the associated page table entry.
4. The method of claim 2 wherein the check data cache attribute is cleared for use by a different program.
5. The method of claim 1 wherein the step of selecting the instruction fetch address further comprises: multiplexing the instruction fetch address and a data fetch address; and selecting the instruction fetch address for application to the data cache as the selected data fetch address, wherein the instruction fetch address is selected after determining that the instruction fetch attempt was not successful in the instruction cache.
6. The method of claim 1 wherein the step of making a fetch attempt in the data cache further comprises: determining the instruction was found in the data cache; and fetching the instruction from the data cache.
7. The method of claim 1 further comprising: determining the fetch attempt in the data cache was not successful; and informing an instruction memory control that the fetch attempt in the data cache was not successful.
8. The method of claim 7 further comprising: fetching the instruction from a system memory.
9. A processor complex comprising: an instruction cache; an instruction memory management unit having a page table with entries that have one or more check data cache attributes; a data cache; and a first selector to select an instruction fetch address or a data fetch address based on a selection signal in response to a check data cache attribute and a status indication of an instruction fetch operation in the instruction cache, the selection signal causing the instruction fetch address or the data fetch address to be applied to the data cache whereby instructions or data may be selectively fetched from the data cache.
10. The processor complex of claim 9 wherein the selection signal of the first selector selects the data fetch address in response to a data access operation.
11. The processor complex of claim 9 wherein the selection signal of the first selector selects the instruction fetch address if the status indication of an instruction fetch operation indicates the instruction was not found in the instruction cache and the check data cache attribute is set to an active state. 
The processor complex of claim 9 further comprising: a second selector to select an instruction out bus from the instruction cache or a data out bus from the data cache to be applied to a processor's instruction bus input. The processor complex of claim 12 wherein the second selector selects the data out bus from the data cache if the status indication of an instruction fetch operation indicates the instruction was not found in the instruction cache, the check data cache attribute is in an active state, and a status indication of a data fetch operation indicates data was found in the data cache at the instruction fetch address selected through the first selector. The processor complex of claim 12 wherein the second selector selects the instruction out bus if the status indication of an instruction fetch operation indicates the instruction was found in the instruction cache. The processor complex of claim 9 further comprising: a third selector to select a memory out bus from a system memory or a data out bus from the data cache to be applied to an instruction bus input of the instruction cache. The processor complex of claim 15 wherein the third selector selects the data out bus from the data cache if the status indication of an instruction fetch operation indicates the instruction was not found in the instruction cache, the check data cache attribute is in an active state, and a status indication of a data fetch operation indicates data was found in the data cache at the instruction fetch address selected through the first selector. A method for executing program code for fetching an instruction from a data cache, the method comprising: generating instructions that are part of the program code which are stored as data in a data cache; requesting an operating system to set a check data cache attribute active in at least one page table entry associated with the instructions; invalidating the instruction cache prior to execution of the program code that uses the generated instructions; fetching the instructions directly from the data cache in response to active check data cache attributes associated with the instructions if the instructions are not found in the instruction cache; and executing the program code. The method of claim 17 wherein the generating instructions step includes the operation of loading instructions into the data cache. The method of claim 17 wherein the invalidating the instruction cache further comprises: invalidating only a portion of the instruction cache at the addresses where the generated instructions are stored. The method of claim 17 wherein the page table is an instruction page table located in a memory management unit. |
EFFICIENT MEMORY HIERARCHY MANAGEMENT USING INSTRUCTION IN A DATA CACHE FIELD [0001] The present disclosure relates generally to techniques for fetching instructions from memory having an instruction cache and a data cache and, more specifically, to an improved approach for fetching an instruction after a miss in the instruction cache by directly fetching the instruction from the data cache if the instruction resides there. BACKGROUND [0002] Commonly, portable products, such as cell phones, laptop computers, personal data assistants (PDAs) or the like, require the use of a processor executing programs, such as communication and multimedia programs. The processing system for such products includes a processor and memory complex for storing instructions and data. For example, the instructions and data may be stored in a hierarchical memory consisting of multiple levels of caches, including, for example, an instruction cache, a data cache, and a system memory. The use of a separate instruction cache and a separate data cache is known as a Harvard architecture. Since the Harvard architecture isolates the instruction cache from the data cache, problems may arise when instructions are stored in the data cache. [0003] In general system processing with a Harvard architecture, there are situations which arise in which instructions may be stored in the data cache. For example, if a program is encrypted or in a compressed form, it must be decrypted/decompressed prior to enabling the program to run. The decryption/decompression process treats the encrypted/compressed program as data in order to process it and stores the decrypted/decompressed instructions as data in a data cache, for example, a level 1 data cache, on its way to system memory. The generation of instructions from Java byte codes is another situation in which instructions are initially treated as data that are stored using the data path, including the data cache, to the system memory. The initial state of a program in which program instructions are being treated as data creates a coherence problem within the memory hierarchy, since at least some parts of a program may reside in the data cache prior to execution of the program. [0004] In order to resolve the coherence problem, a software approach is typically taken wherein the program or program segments in the data cache are moved to system memory under program control, the instruction cache is typically invalidated to clean the cache of any old program segments, and the instructions comprising the program are then fetched from the system memory. The movement of the instructions from the data cache to system memory and the fetching of the instructions from system memory prior to execution may take several cycles, reducing the processor's performance due to processing time overhead that must occur to access instructions initially residing in the data cache prior to the program running on the processor. SUMMARY [0005] Among its several aspects, the present disclosure recognizes that the overhead of dealing with instructions in a data cache may be limiting the performance of the processor and possibly limiting the quality of service that may be achieved. The present disclosure also recognizes that it may be desirable to access instructions that are residing in a data cache. 
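The conventional software handling summarized in paragraph [0004] can be pictured with a minimal Python sketch; this is an illustration only, with the dictionary-based caches and the function name being hypothetical rather than anything disclosed:

    def run_generated_code_conventionally(addresses, dcache, icache, system_memory):
        # Step 1: move the program segments from the data cache to system
        # memory under program control (a cache clean/flush).
        for addr in addresses:
            if addr in dcache:
                system_memory[addr] = dcache[addr]
        # Step 2: invalidate the instruction cache to clean it of any old
        # program segments.
        icache.clear()
        # Step 3: every instruction fetch now misses in the instruction cache
        # and is refetched from system memory -- the multi-cycle overhead the
        # disclosure seeks to avoid.
        return [system_memory[addr] for addr in addresses]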
[0006] Moreover, the present disclosure describes apparatus, methods, and computer readable medium for directly fetching an instruction from a data cache when that instruction was not found in the instruction cache (an instruction cache miss) and the instruction is determined to be in the data cache. By fetching the instruction directly from the data cache after an instruction cache miss, the processor performance may be improved. [0007] To such ends, an embodiment of the present invention includes a method of finding an instruction in a data cache that is separate from an instruction cache. In such a method, it is determined that a fetch attempt missed in the instruction cache for the instruction at an instruction fetch address. The instruction fetch address is transformed to a data fetch address. Further, a fetch attempt in the data cache is made for the instruction at the transformed data fetch address. [0008] Another embodiment of the invention addresses a processor complex for fetching instructions. The processor complex may suitably include an instruction cache, a data cache, and a first selector. The first selector is used to select an instruction fetch address or a data fetch address. The selected fetch address is applied to a data cache whereby instructions or data may be selectively fetched from the data cache. According to one aspect of the present invention, there is provided a method of finding an instruction in a data cache that is separate from an instruction cache, the method comprising: determining that a fetch attempt at an instruction fetch address in the instruction cache for the instruction was not successful; determining that a check data cache attribute has been set to an active state in a page table entry associated with the instruction fetch address; selecting the instruction fetch address as a data fetch address in response to the check data cache attribute being in an active state; making a fetch attempt in the data cache for the instruction at the selected data fetch address; and setting an information present indication to an active state if the instruction was found in the data cache in response to the fetch attempt in the data cache. According to another aspect of the present invention, there is provided a processor complex comprising: an instruction cache; an instruction memory management unit having a page table with entries that have one or more check data cache attributes; a data cache; and a first selector to select an instruction fetch address or a data fetch address based on a selection signal in response to a check data cache attribute and a status indication of an instruction fetch operation in the instruction cache, the selection signal causing the instruction fetch address or the data fetch address to be applied to the data cache whereby instructions or data may be selectively fetched from the data cache. 
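As a rough sketch of the claimed method, the following Python fragment tries the instruction cache first and, on a miss with the check data cache attribute active, selects the instruction fetch address as the data fetch address. Every name here is a hypothetical stand-in, with the caches and the page table modeled as plain dictionaries and a 4 KiB page size assumed:

    PAGE_BITS = 12  # assumed page size of 4 KiB, for illustration only

    def fetch_instruction(ifetch_addr, icache, dcache, page_table, system_memory):
        instr = icache.get(ifetch_addr)
        if instr is not None:
            return instr  # instruction cache hit
        # Instruction cache miss: consult the check data cache attribute in
        # the page table entry associated with the instruction fetch address.
        entry = page_table.get(ifetch_addr >> PAGE_BITS, {})
        if entry.get("check_data_cache"):
            # Select the instruction fetch address as the data fetch address.
            data = dcache.get(ifetch_addr)
            if data is not None:
                icache[ifetch_addr] = data  # the "data" is really the instruction
                return data
        # Found in neither cache: fall back to system memory.
        return system_memory[ifetch_addr]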
According to still another aspect of the present invention, there is provided a method for executing program code for fetching an instruction from a data cache, the method comprising: generating instructions that are part of the program code which are stored as data in a data cache; requesting an operating system to set a check data cache attribute active in at least one page table entry associated with the instructions; invalidating the instruction cache prior to execution of the program code that uses the generated instructions; fetching the instructions directly from the data cache in response to active check data cache attributes associated with the instructions if the instructions are not found in the instruction cache; and executing the program code. [0009] A more complete understanding of the present inventive concepts disclosed herein, as well as other features, will be apparent from the following Detailed Description and the accompanying drawings. BRIEF DESCRIPTION OF THE DRAWINGS [0010] Fig. 1 is a block diagram of an exemplary wireless communication system in which an embodiment of the disclosure may be employed; [0011] Fig. 2 is a functional block diagram of a processor and memory complex in which data cache operation is adapted for memory-efficient operations of instruction fetching in accordance with an embodiment of the present invention; [0012] Fig. 3 is a flow chart of an exemplary method for fetching an instruction stored in a data cache, in order to reduce the miss handling overhead associated with the instruction initially stored as data in the data cache in accordance with the present disclosure; [0013] Fig. 4 is a functional block diagram of a processor and memory complex which includes an instruction page table in which data cache operation is adapted for efficient instruction fetching in accordance with the present disclosure; [0014] Fig. 5 is a flow chart of an exemplary method for fetching an instruction stored in a data cache in accordance with the present disclosure; and [0015] Fig. 6 is a flow chart of an exemplary method for executing code that is generated as data and stored in a data cache in accordance with the present disclosure. DETAILED DESCRIPTION [0016] Inventive aspects of the present disclosure will be illustrated more fully with reference to the accompanying drawings, in which several embodiments of the disclosure are shown. This invention may, however, be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. [0017] It will be appreciated that the present disclosure may be embodied as methods, systems, or computer program products. Accordingly, the present inventive concepts disclosed herein may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present inventive concepts disclosed herein may take the form of a computer program product on a computer-usable storage medium having computer-usable program code embodied in the medium. Any suitable computer readable medium may be utilized including hard disks, CD-ROMs, optical storage devices, flash memories, or magnetic storage devices. 
[0018] Computer program code which may be compiled, assembled, and loaded to a processor may be initially written in a programming language such as C, C++, native Assembler, JAVA, Smalltalk, JavaScript, Visual Basic, TSQL, Perl, or in various other programming languages in accordance with the teachings of the present disclosure. Program code or computer readable medium refers to machine language code such as object code whose format is understandable by a processor. Software embodiments of the disclosure do not depend upon their implementation with a particular programming language. When program code is executed, a new task which defines the operating environment for the program code is created. [0019] Fig. 1 shows an exemplary wireless communication system 100 in which an embodiment of the disclosure may be employed. For purposes of illustration, Fig. 1 shows three remote units 120, 130, and 150 and two base stations 140. It will be recognized that typical wireless communication systems may have many more remote units and base stations. Remote units 120, 130, and 150 include hardware components, software components, or both as represented by components 125A, 125C, and 125B, respectively, which have been adapted to embody the disclosure as discussed further below. Fig. 1 shows forward link signals 180 from the base stations 140 to the remote units 120, 130, and 150 and reverse link signals 190 from the remote units 120, 130, and 150 to the base stations 140. [0020] In Fig. 1, remote unit 120 is shown as a mobile telephone, remote unit 130 is shown as a portable computer, and remote unit 150 is shown as a fixed location remote unit in a wireless local loop system. For example, the remote units may be cell phones, handheld personal communication systems (PCS) units, portable data units such as personal data assistants, or fixed location data units such as meter reading equipment. Although Fig. 1 illustrates remote units according to the teachings of the disclosure, the disclosure is not limited to these exemplary illustrated units. The disclosure may be suitably employed in any device having a processor with an instruction cache, a data cache, and a system memory. [0021] Fig. 2 is a functional block diagram of a processor and memory complex 200 in which normal data cache operation is adapted for more efficient instruction fetching as described further herein. The processor and memory complex 200 includes a processor 202, a level 1 (L1) instruction cache 204, an L1 instruction cache control unit 206, an L1 data cache 208, an L1 data cache control unit 210, a control section 211, and a system memory 212. The L1 instruction cache control unit 206 may include an instruction content addressable memory for instruction tag matching, as may be used in a set associative cache. The control section 211 includes multiplexing elements 220, 226, and 234, gating devices 232 and 238, and an inverter 240. Peripheral devices, which may connect to the processor complex, are not shown for clarity of discussion of the present disclosure. The processor and memory complex 200 may be suitably employed in components 125A-C for executing program code that is stored in the system memory 212. 
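The tag-matching content addressable memory mentioned in paragraph [0021] can be approximated by the following sketch; a real CAM compares all stored tags in parallel, and the tag/line split shown here is an assumption for illustration only:

    LINE_BITS = 5  # assumed 32-byte cache lines

    def cam_lookup(tags, lines, fetch_addr):
        # Associative search: compare the tag portion of the fetch address
        # against every stored tag (modeled here as a sequential loop).
        tag = fetch_addr >> LINE_BITS
        for way, stored_tag in enumerate(tags):
            if stored_tag == tag:
                return True, lines[way]  # match found: instruction present
        return False, None  # no match: the control unit signals a miss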
[0022] In order to fetch an instruction in the processor and memory complex 200, the processor 202 generates an instruction fetch address (IA) 214 of the desired instruction and sends the instruction fetch address to the L1 instruction cache control unit 206. The L1 instruction cache control unit 206 checks to see if the instruction is present in the L1 instruction cache 204. This check is accomplished, for example, through the use of an internal content addressable memory (CAM) in an associative search for a match to the supplied instruction fetch address. When the instruction is present, a match occurs and the L1 instruction cache control unit 206 indicates that the instruction is present in the instruction cache 204. If the instruction is not present, no match will be found in the CAM associative search and the L1 instruction cache control unit 206 indicates that the instruction is not present in the instruction cache 204. [0023] If the instruction is present, the instruction at the instruction fetch address is selected from the instruction cache 204. The instruction is then sent on instruction out bus 216 through the multiplexing element 226 to the processor 202. [0024] If the instruction is not present in the instruction cache, an instruction cache miss signal (I$M=1) 218 is set active indicating a miss has occurred. Upon detecting a miss in the instruction cache, the processor and memory complex 200 attempts to fetch the desired instruction from the L1 data cache 208. To this end, multiplexing element 220 is enabled by the miss signal (I$M=1) 218 to select the instruction fetch address 214. The instruction fetch address 214 then passes through the multiplexing element 220 onto a Daddress bus 222 and is sent to the L1 data cache control unit 210 as a data fetch address. It is noted that the processor and memory complex 200 represents a logical view of the system, since, for example, the application of the instruction fetch address 214 onto the Daddress bus 222 may require an arbitration or a waiting period before access to the Daddress bus 222 may be obtained. The approach taken to multiplex the instruction fetch address 214 with the processor generated data address 223 may be varied and is dependent upon the particular approach taken in the instruction cache and data cache designs. [0025] The L1 data cache control unit 210 checks to see if there is a hit in the L1 data cache 208 at the supplied instruction fetch address, through an internal associative search, for example, on the supplied instruction fetch address. A hit indicates there is data present at the supplied instruction fetch address. This data is actually an instruction, and the data cache entry is fetched from the L1 data cache 208 and placed on the data out bus 224. In order to supply the data fetched from the L1 data cache 208 as an instruction to the processor, a multiplexing element 226 may be suitably employed. The data out bus 224 is selected by multiplexing element 226, placing the data fetched from the data cache onto the processor's instruction bus 228, when there is a miss in the instruction cache followed by a hit in the data cache at the instruction fetch address. The occurrence of the miss in the instruction cache, indicated by miss signal (I$M=1) 218 being active high, followed by the hit in the data cache at the same instruction fetch address, indicated by hit signal (D$H=1) 230 being active high, is logically represented by AND gate 232. The output of AND gate 232 is the selection signal 233 for the multiplexing element 226. The instruction found in the data cache is also multiplexed for loading into the instruction cache 204 by multiplexing element 234 using the selection signal 233 logically provided by AND gate 232. While the data out bus 224 is forwarding the instruction to the processor, the processor's read data input 236 is deactivated by AND gate 238 using the inverter 240 to provide an inverse of the selection signal 233. 
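The combinational behavior just described, with AND gate 232 forming selection signal 233, multiplexing element 226 steering the instruction bus, and the inverter 240 with AND gate 238 gating the read data input, might be modeled as follows. This is a sketch only; the signal names follow Fig. 2, but the function itself is hypothetical:

    def control_section_211(i_miss, d_hit, instr_out_216, data_out_224, read_data):
        select_233 = i_miss and d_hit  # AND gate 232
        # Multiplexing element 226: drive the processor's instruction bus 228.
        instruction_bus_228 = data_out_224 if select_233 else instr_out_216
        # Inverter 240 and AND gate 238: deactivate read data input 236 while
        # the data out bus is forwarding an instruction.
        read_data_input_236 = read_data if not select_233 else None
        return instruction_bus_228, read_data_input_236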
[0026] If it was determined there was a miss in the data cache at the supplied instruction fetch address, the instruction is not in the data cache and the instruction is fetched from the system memory 212. The hit signal (D$H=1) 230 is also sent to the L1 instruction cache control unit 206 to indicate by its inactive state that a miss occurred on the attempt to locate the instruction in the data cache 208. Note that other signaling means may be used to indicate that a miss occurred on the attempt to locate the instruction in the data cache 208. Since the instruction is not in the instruction cache 204 and not in the data cache 208, it must be fetched from the system memory 212. Once the instruction is obtained from the system memory 212, it is sent to the processor 202. Note, the paths from the system memory for supplying an instruction due to a miss in the instruction cache or data cache and for supplying data due to a miss in the data cache are not shown in order to clearly illustrate the present disclosure. [0027] Fig. 3 is an exemplary flow chart of a method 300 for directly fetching an instruction in a data cache after having a miss in the instruction cache, in order to minimize the overhead commonly associated with handling the instruction initially stored as data in the data cache. Exemplary relationships between the steps of Fig. 3 and the elements of Fig. 2 are indicated by describing how elements from the processor and memory complex 200 may suitably cooperate to perform the steps of method 300. [0028] In order to fetch an instruction, an instruction fetch address is generated in step 304. For example, a processor, such as the processor 202, generates an instruction fetch address of the desired instruction and sends the instruction fetch address 214 to the L1 instruction cache controller 206. In step 308, it is determined whether there is an instruction cache hit or a miss. For example, the L1 instruction cache controller 206 checks to see if the instruction is present in the instruction cache 204. If the instruction is present, its presence is indicated as a hit, and the method 300 proceeds to step 312 where the instruction at the instruction fetch address is selected. In step 316, the instruction is sent to the processor. For example, the selected instruction is placed on instruction out bus 216 and sent to the processor 202 through multiplexing element 226. [0029] If the instruction is not present in the instruction cache as determined in step 308, an indication is given that a miss has occurred and an attempt is made to fetch the instruction from the data cache in step 320. 
For example, the instruction fetch address 214 is sent through multiplexing element 220 as a data fetch address 222 to the data cache 208. In step 324, a check is made, for example, by the L1 data cache controller 210 to see if there is valid data present at the supplied instruction fetch address. If there is valid data present in the data cache at the supplied instruction fetch address, the data actually is an instruction and the data cache entry is fetched in step 328. In step 316, the data fetched from the data cache is sent as an instruction to the processor. For example, the data fetched on data out bus 224 from the data cache 208 is sent through multiplexing element 226 and supplied as an instruction to the processor 202 on instruction bus 228. [0030] In step 324, if there was a miss in the data cache at the supplied instruction fetch address, the instruction is not in the data cache and in step 332 the instruction is fetched from the system memory. For example, the data cache hit signal (D$H=1) 230 is sent to the L1 instruction cache control unit 206 to indicate by its inactive state that a miss occurred on the attempt to locate the instruction in the data cache 208. Since the instruction is not in the instruction cache 204 and not in the data cache 208, it must be fetched from the system memory 212. Once the instruction is obtained from the system memory 212, the instruction is sent to the processor 202, as indicated in step 316. [0031] Fig. 4 is a functional block diagram of a processor and memory complex 400 which includes an instruction page table in which normal data cache operation is adapted for efficient operation of instruction fetching in accordance with the present disclosure. The processor and memory complex 400 includes a processor 402, a level 1 (L1) instruction cache 404, an instruction memory management unit (IMMU) and cache control (IMMU/$Control) 406, an L1 data cache 408, a data memory management unit (DMMU) and cache control (DMMU/$Control) 410, a control section 411, and a memory hierarchy 412. The IMMU/$Control 406 may include, for example, a virtual-to-physical instruction address translation process. The control section 411 includes multiplexing elements 432, 438, and 448, gating devices 428, 444, and 452, and an inverter 454. Peripheral devices, which may connect to the processor complex, are not shown for clarity of discussion of the present disclosure. The processor and memory complex 400 may be suitably employed in components 125A-C for executing program code that is stored in the system memory 412. [0032] The instruction cache may use a translation look aside buffer (TLB) that contains an instruction page table in order to improve the instruction cache's performance. The instruction page table has, for example, a list of physical page numbers associated with virtual page numbers and additional information associated with each page number entry. An instruction page table entry is created when a page of memory in the instruction address range is loaded in the instruction cache or the data cache. The loading of a page of memory may occur under the supervision of an operating system (OS). In operation, the instruction page table is examined for a match with a virtual page number supplied to the TLB. 
While a TLB having an instruction page table is described herein as a part of the instruction MMU and cache control 406, it will be recognized that alternative approaches may be used. [0033] In order to fetch an instruction in the processor and memory complex 400, the processor 402 generates an instruction fetch address (IA) 414 for the desired instruction and sends the instruction fetch address to the IMMU/$Control 406. An appropriate entry in an instruction page table, such as page table 416 located in the IMMU/$Control 406, is selected based on a supplied page number that is part of the IA 414. The instruction address based on the selected page table entry is combined with a page address, also part of the IA 414, generating an instruction address (GA) 418 that is applied internally to the L1 instruction cache 404. The entry selected from the page table 416 includes additional information stored with that entry. One of the additional bits of information that may be stored with each page table entry is a check data cache attribute, labeled as D bit 420. [0034] The D bit is set to a "1" when the entry in the instruction page table is created due to loading a page of instructions into the data cache or when generating instructions that are stored in a page in the data cache during processing. The D bit is typically set by the operating system (OS) to indicate that a page's contents may be used as both data and instructions. In an exemplary scenario, a program, generating data that will be used as instructions, calls the OS to request that the appropriate pages be marked by setting the D bit in the associated page table entries. In another scenario, a program may also request pages from the OS that are already set up with the D bit set. The D bit does not necessarily need to be explicitly cleared. If a program specifies that the data cache may contain instructions by causing the appropriate D bit or D bits to be set, then that specification may be valid through the life of the program. The D bit or D bits may then later be cleared when the page table is used for a different process. [0035] The IMMU/$Control 406 checks to see if the instruction is present in the instruction cache 404. If the instruction is present, this presence is indicated as a hit, and the instruction at the instruction fetch address is selected from the instruction cache 404. The instruction is then sent on instruction out bus 422 through multiplexing element 438 to the processor 402. If the instruction is not present, an indication is given by the IMMU/$Control 406 that a miss has occurred and an instruction cache miss signal (I$M=1) 424 is set active. [0036] Upon detecting a miss in the instruction cache in conjunction with the selected D bit being set to a "1", the processor and memory complex 400 attempts to fetch the desired instruction from the L1 data cache 408. This attempt may suitably be accomplished, for example, by using the selected D bit in a gating function. The D bit 420 from the selected page table entry is output as D bit signal 426. The D bit signal 426 is, for example, ANDed, by AND gate 428, with the miss indication (I$M=1) 424. The AND gate 428 output 430 is then used by multiplexing element 432 to select the generated instruction address (GA) 418 or a data address 433 from the processor 402. When selected, the GA 418 passes through multiplexing element 432 onto Daddress bus (DA) 434 and is sent to the data MMU and cache control 410 to determine if the instruction resides in the data cache 408 at the data fetch address. It is noted that the processor and memory complex 400 represents a logical view of the system, since, for example, the application of the generated instruction address 418 onto the Daddress bus 434 may require an arbitration or a waiting period before access to the Daddress bus 434 may be obtained. The approach taken to multiplex the generated instruction address 418 with the processor generated data address 433 may be varied and is dependent upon the particular approach taken in the instruction cache and data cache designs. 
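A page table entry carrying the D bit, together with the address generation of paragraph [0033], might look like the following sketch; the dataclass, the field names, the 12-bit page offset, and the OS helper are illustrative assumptions, not the disclosed implementation:

    from dataclasses import dataclass

    PAGE_BITS = 12  # assumed page offset width

    @dataclass
    class InstructionPageTableEntry:
        physical_page: int
        check_data_cache: bool = False  # the D bit 420

    def translate_and_select_d_bit(ifetch_addr, page_table):
        # Split the IA 414 into a page number and a page address, select the
        # page table entry, and recombine to form the generated address.
        page_number = ifetch_addr >> PAGE_BITS
        page_address = ifetch_addr & ((1 << PAGE_BITS) - 1)
        entry = page_table[page_number]
        ga = (entry.physical_page << PAGE_BITS) | page_address  # GA 418
        return ga, entry.check_data_cache  # generated address and D bit 420

    def os_set_d_bits(page_table, page_numbers):
        # Hypothetical OS service called by a program that generates data to
        # be used as instructions, per the scenario in paragraph [0034].
        for pn in page_numbers:
            page_table[pn].check_data_cache = True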
[0037] The data cache then checks to see if there is valid data present at the supplied instruction fetch address. If there is valid data present at the supplied instruction fetch address, the data actually is an instruction and the data cache entry is fetched from the L1 data cache 408 and placed on the data out bus 436. In order to supply the data cache entry as an instruction to the processor, a multiplexing element 438 is used, for example. The multiplexing element 438 is enabled to pass the data out bus 436 onto the processor's instruction bus 440 when there is a miss in the instruction cache and the selected D bit is set to a "1", followed by a hit in the data cache at the instruction fetch address. The occurrence of the miss in the instruction cache, indicated by miss signal (I$M=1) 424 being active high, and the D bit signal 426 set to a "1", followed by the hit in the data cache at the generated instruction address, indicated by hit signal (D$H=1) 442 being active high, is logically represented by AND gate 444. The AND gate 444 output is the selection signal 446 for the multiplexing element 438. The instruction on the data out bus is also multiplexed for loading into the instruction cache by multiplexing element 448 using the selection signal 446. While the L1 data cache data out bus 436 is forwarding the instruction to the processor 402, the data out bus 436 is gated off for transfers to the processor's read data input 450 by AND gate 452 using an inverse of the selection signal 446 provided by the inverter 454. [0038] If it was determined there was a miss in the data cache at the supplied instruction fetch address, the instruction is not in the data cache and the instruction is fetched from the system memory 412. The hit signal (D$H=1) 442 is also sent to the IMMU/$Control 406 to indicate by its inactive state that a miss occurred on the attempt to locate the instruction in the data cache 408. Once the instruction is obtained from the system memory 412, it is sent to the processor 402. Note the paths from the memory hierarchy for supplying an instruction due to a miss in the instruction cache or data cache and for supplying data due to a miss in the data cache are not shown, but any of a wide variety of connection approaches may be employed consistent with the application and the processor employed. 
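The gating in control section 411 parallels that of Fig. 2 but adds the D bit as a third term; a one-function sketch of AND gate 444 (the function name is hypothetical):

    def selection_signal_446(i_miss_424, d_bit_426, d_hit_442):
        # AND gate 444: instruction cache miss, D bit set, and data cache hit
        # must all hold for the data out bus to feed the instruction bus.
        return i_miss_424 and d_bit_426 and d_hit_442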
[0039] Fig. 5 is an exemplary flow chart of a method 500 for fetching an instruction in a data cache after having a miss in the instruction cache when a check data cache attribute indicates the data cache should be checked for the instruction. Exemplary relationships between the steps of Fig. 5 and the elements of Fig. 4 are indicated by referring to exemplary elements from the processor and memory complex 400 which may suitably be employed to carry out steps of the method 500 of Fig. 5. [0040] In order to fetch an instruction, an instruction fetch address for the desired instruction is generated in step 502. For example, a processor, such as the processor 402, generates an instruction fetch address and sends the instruction fetch address 414 to the L1 instruction cache controller 406. The instruction fetch address may be a virtual address made up of a page number 504 and a page address 506. In step 508, an appropriate entry in an instruction page table, such as instruction page table 416, is selected based on the supplied page number 504. The address generated based on the selected page table entry is combined in step 509 with the page address 506 to produce an instruction cache address. [0041] The entry selected from the instruction page table 416 includes the additional information stored with that entry. One of the additional bits of information that may be stored with each page table entry is a check data cache attribute, such as the bit labeled as the D bit 420. This attribute is selected in step 510. [0042] In step 512, it is determined whether there is an instruction cache hit or a miss. For example, the instruction cache checks to see if the instruction is present. If the instruction is present, its presence is indicated as a hit, and the method 500 proceeds to step 514 where the instruction at the instruction fetch address is selected. In step 516, the instruction is sent to the processor. For example, the selected instruction is placed on instruction out bus 422 and sent through multiplexing element 438 to the instruction bus 440 of the processor 402. [0043] If the instruction is not present in the instruction cache as determined in step 512, an indication is given that a miss has occurred and the method 500 proceeds to step 518. In step 518, the D bit that was selected in step 510 is checked to see if it is set to a "1", indicating the data cache should be checked for the instruction. If the D bit was set to a "1", the processor attempts to fetch the instruction from the data cache in step 520. For example, the generated instruction fetch address 418 is sent as a data fetch address 434 to the data cache. [0044] In step 524, the data cache checks to see if there is valid data present at the supplied instruction fetch address. If there is valid data present at the supplied instruction fetch address, the data actually is an instruction and the data cache entry is fetched in step 528. In step 516, the data fetched from the data cache is sent as an instruction to the processor. For example, the data fetched on data out bus 436 is sent through multiplexing element 438 and supplied as an instruction to the processor 402 on instruction bus 440. [0045] Returning to step 518, if it is determined in step 518 that the D bit was a "0", it is known that the instruction is not present in the data cache and the method 500 proceeds to step 522. The step 522 is also reached for the situation where there was a miss in the data cache at the supplied instruction fetch address, as determined in step 524. In either case, the instruction is known to not be present in the instruction cache or in the data cache and the instruction is fetched from system memory, as indicated in step 522. For example, system memory 412 will be accessed for the instruction. Once the instruction is obtained from the system memory 412, the instruction is sent to the processor 402, as indicated in step 516. 
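Strung together, steps 502 through 528 reduce to a few lines. This sketch reuses the hypothetical translate_and_select_d_bit helper above and dictionary-based caches, and is an outline of the flow rather than the disclosed implementation:

    def method_500(ifetch_addr, icache, dcache, page_table, system_memory):
        ga, d_bit = translate_and_select_d_bit(ifetch_addr, page_table)  # steps 502-510
        instr = icache.get(ga)
        if instr is not None:  # step 512: instruction cache hit
            return instr  # steps 514 and 516
        if d_bit:  # step 518: should the data cache be checked?
            data = dcache.get(ga)  # steps 520 and 524
            if data is not None:
                return data  # step 528: the data is the instruction
        return system_memory[ga]  # step 522: fetch from system memory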
[0046] Fig. 6 is an exemplary flow chart of a method 600 for executing program code that is generated as data and stored in a data cache. Program code following this method may be executed on a processor and memory complex having an instruction cache, a data cache, and a system memory, such as those discussed in connection with Figs. 2 and 4, and may be suitably employed in components 125A-C of Fig. 1. [0047] In step 602, a program generates code. Such generation may occur, for example, when a program generates executable code from a compressed program. The generated code is initially treated as data and stored in a data cache after it is generated. Prior to executing the program, an instruction cache is invalidated in step 604. The invalidation step ensures there are no instructions at the same address as the generated code. In step 606, the generated code is executed by the processor by fetching instructions from the program address space in the instruction cache, which may include instructions that are stored in the data cache. For those instructions stored in the data cache, the techniques of the present disclosure are followed, allowing the data cache to be checked for instructions on an occurrence of a miss in the instruction cache. Upon finding an instruction in the data cache, the instruction is directly fetched from the data cache for execution on the processor. [0048] While the present disclosure has been disclosed in a presently preferred context, it will be recognized that the present teachings may be adapted to a variety of contexts consistent with this disclosure and the claims that follow. |
System and method for editing a graphical program. A graphical program is displayed on a display device. Multi-touch input is received to a multi-touch interface, where the multi-touch input specifies an edit operation in the graphical program. The edit operation is performed in the graphical program in response to the multi-touch input, and the edited graphical program is displayed on the display device. |
Claims We Claim: 1. A computer-accessible memory medium that stores program instructions executable by a processor to implement: displaying a graphical program on a display device, wherein the graphical program comprises a plurality of interconnected nodes that visually indicate functionality of the graphical program; receiving multi-touch input to a multi-touch interface, wherein the multi-touch input specifies an edit operation in the graphical program; performing the edit operation in the graphical program in response to the multi-touch input; and displaying the edited graphical program on the display device. 2. The computer-accessible memory medium of claim 1, wherein the multi-touch input specifies or manipulates a graphical program element in the graphical program. 3. The computer-accessible memory medium of claim 1, wherein the multi-touch input comprises a pinching or reverse pinching motion applied to a graphical program element; and wherein the edit operation comprises resizing the graphical program element. 4. The computer-accessible memory medium of claim 3, wherein the graphical program element comprises a frame for containing one or more other graphical program elements, and wherein said resizing the graphical program element comprises resizing the frame. 5. The computer-accessible memory medium of claim 1, wherein the multi-touch input comprises two touchpoints applied respectively to two graphical program elements; and wherein the edit operation comprises wiring the two graphical program elements together. 6. The computer-accessible memory medium of claim 1, wherein the multi-touch input comprises double tapping two touchpoints applied respectively to two graphical program elements; and wherein the edit operation comprises wiring the two graphical program elements together. 7. The computer-accessible memory medium of claim 1, wherein the multi-touch input comprises two or more touchpoints applied respectively to two or more graphical program elements; and wherein the edit operation comprises selecting the two or more graphical program elements for a subsequent operation to be performed on the two or more graphical program elements. 8. The computer-accessible memory medium of claim 1, wherein the multi-touch input comprises three or more touchpoints defining a convex hull around one or more graphical program elements; and wherein the edit operation comprises selecting the one or more graphical program elements for a subsequent operation to be performed on the one or more graphical program elements. 9. The computer-accessible memory medium of claim 1, wherein the multi-touch input comprises a rotation motion applied to one or more graphical program elements; and wherein the edit operation comprises rotating the one or more graphical program elements. 10. The computer-accessible memory medium of claim 1, wherein the graphical program includes a graphical subprogram, wherein the graphical subprogram is represented by a graphical program node in the graphical program; wherein the multi-touch input comprises tapping or double tapping two or more touchpoints on the graphical program node; and wherein the edit operation comprises expanding the graphical program node to the graphical subprogram. 11. 
The computer-accessible memory medium of claim 1, wherein the graphical program includes a graphical subprogram, wherein the graphical subprogram is represented by a graphical program node in the graphical program; wherein the multi-touch input comprises tapping or double tapping two or more touchpoints on the border of the graphical subprogram; and wherein the edit operation comprises collapsing the graphical subprogram to the representative graphical program node. 12. The computer-accessible memory medium of claim 1, wherein the graphical program includes a graphical subprogram, wherein the graphical subprogram is represented by a graphical program node in the graphical program; wherein the multi-touch input comprises a reverse pinching motion applied to the graphical program node; and wherein the edit operation comprises expanding the graphical program node to the graphical subprogram. 13. The computer-accessible memory medium of claim 1, wherein the graphical program includes a graphical subprogram, wherein the graphical subprogram is represented by a graphical program node in the graphical program; wherein the multi-touch input comprises a pinching motion applied to the graphical subprogram; and wherein the edit operation comprises collapsing the graphical subprogram to the representative graphical program node. 14. The computer-accessible memory medium of claim 1, wherein the graphical program includes a graphical subprogram, wherein the graphical subprogram is represented by a graphical program node in the graphical program; wherein the multi-touch input comprises a multi-touch swipe applied to the graphical program node; and wherein the edit operation comprises expanding the graphical program node to the graphical subprogram. 15. The computer-accessible memory medium of claim 1, wherein the graphical program includes a graphical subprogram, wherein the graphical subprogram is represented by a graphical program node in the graphical program; wherein the multi-touch input comprises a multi-touch reverse swipe applied to the graphical subprogram; and wherein the edit operation comprises collapsing the graphical subprogram to the representative graphical program node. 16. The computer-accessible memory medium of claim 1, wherein the multi-touch input comprises a reverse pinching motion applied to a graphical program node; and wherein the edit operation comprises increasing the graphical program node in size with respect to other nodes in the graphical program. 17. The computer-accessible memory medium of claim 1, wherein the multi-touch input comprises a pinching motion applied to a graphical program node; and wherein the edit operation comprises decreasing the graphical program node in size with respect to other nodes in the graphical program. 18. The computer-accessible memory medium of claim 1, wherein the multi-touch input comprises a multi-touch swiping movement applied to a graphical program element; and wherein the edit operation comprises invoking a display of selectable operations applicable to the element. 19. The computer-accessible memory medium of claim 1, wherein the multi-touch input is context sensitive, wherein the edit operation is based at least partially on a target graphical program element or region to which the multi-touch input is applied. 20. The computer-accessible memory medium of claim 1, wherein the multi-touch input specifies or manipulates a region in the graphical program. 21. 
The computer-accessible memory medium of claim 20, wherein the multi-touch input comprises a pinching or reverse pinching motion applied to a region in the graphical program; and wherein the edit operation comprises resizing the region in the graphical program; and wherein said resizing the region displaces one or more other graphical program elements or regions in the graphical program. 22. The computer-accessible memory medium of claim 1, wherein the multi-touch interface comprises a computer touch-pad. 23. The computer-accessible memory medium of claim 1, wherein the multi-touch interface comprises a computer touch-screen. 24. The computer-accessible memory medium of claim 1, wherein the multi-touch input is performed in combination with a keyboard key press to form a combination multi-touch input; and wherein the edit operation invoked by the combination multi-touch input is different from that invoked by the multi-touch input alone. 25. A computer-implemented method for creating a graphical program, the method comprising: utilizing a computer to perform: displaying a graphical program on a display device, wherein the graphical program comprises a plurality of interconnected nodes that visually indicate functionality of the graphical program; receiving multi-touch input to a multi-touch interface, wherein the multi-touch input specifies an edit operation in the graphical program; performing the edit operation in the graphical program in response to the multi-touch input; and displaying the edited graphical program on the display device. |
TITLE: MULTI-TOUCH EDITING IN A GRAPHICAL PROGRAMMING LANGUAGE Field of the Invention [0001] The present invention relates to the field of graphical programming, and more particularly to a system and method for multi-touch editing in a graphical programming language. Description of the Related Art [0002] Graphical programming has become a powerful tool available to programmers. Graphical programming environments such as the National Instruments LabVIEW product have become very popular. Tools such as LabVIEW have greatly increased the productivity of programmers, and increasing numbers of programmers are using graphical programming environments to develop their software applications. In particular, graphical programming tools are being used for test and measurement, data acquisition, process control, man machine interface (MMI), supervisory control and data acquisition (SCADA) applications, modeling, simulation, image processing / machine vision applications, and motion control, among others. [0003] Computer touchscreens and touchpads have become increasingly popular for interacting with applications without using a computer keyboard or mouse, such as, for example, entering user input at checkout counters, operating smart phones, playing games on portable game machines, and manipulating files on a computer "desktop". Multi-touch screens or pads (and supporting software/firmware) facilitate multiple simultaneous points of contact, referred to as touchpoints, allowing for more complex operations to be performed, such as shrinking or expanding an onscreen display by "pinching" or "reverse pinching". [0004] However, prior art uses of touch functionality with regard to computer operations have typically been limited to gross object manipulation such as moving or otherwise organizing computer folders and files, launching programs, selecting menu items, and so forth. Summary of the Invention [0005] Various embodiments of a system and method for multi-touch editing in a graphical programming development environment are presented below. [0006] A graphical program may be displayed on a display device, e.g., of a computer system. The graphical program may be created or assembled by the user arranging on a display a plurality of nodes or icons and then interconnecting the nodes to create the graphical program. In response to the user assembling the graphical program, data structures may be created and stored which represent the graphical program. The nodes may be interconnected in one or more of a data flow, control flow, or execution flow format. The graphical program may thus comprise a plurality of interconnected nodes or icons which visually indicates the functionality of the program. As noted above, the graphical program may comprise a block diagram and may also include a user interface portion or front panel portion. Where the graphical program includes a user interface portion, the user may optionally assemble the user interface on the display. As one example, the user may use the LabVIEW graphical programming development environment to create the graphical program. The graphical programming development environment may be configured to support multi-touch editing operations, as will be described in more detail below. [0007] Multi-touch input may be received to a multi-touch interface, wherein the multi-touch input specifies an edit operation in the graphical program. As used herein, "multi-touch input" refers to user input to a multi-touch interface where there are multiple touchpoints active at the same time. In other words, the user may cause, utilize, or employ multiple simultaneous points of contact on the multi-touch interface. Note that the multi-touch interface may be a touch pad or a touch screen, as desired. In other words, the multi-touch interface may be or include a computer touch-pad and/or a computer touch-screen. Exemplary multi-touch input and edit operations are provided below. 
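One plausible way to realize such a binding of multi-touch gestures to edit operations is a context-sensitive dispatch table; the following Python fragment is purely illustrative, and every name in it is a hypothetical stand-in rather than an API of any graphical programming environment:

    EDIT_BINDINGS = {
        ("pinch", "subprogram"): "collapse_to_node",
        ("reverse_pinch", "node"): "expand_node_to_subprogram",
        ("two_touch_tap", "node_pair"): "wire_elements_together",
        ("rotate", "selection"): "rotate_elements",
        ("multi_swipe", "element"): "show_selectable_operations",
    }

    def dispatch_edit(gesture, target_kind, diagram):
        # Context sensitivity: the same gesture may invoke different edit
        # operations depending on the element under the touchpoints.
        op_name = EDIT_BINDINGS.get((gesture, target_kind))
        return None if op_name is None else getattr(diagram, op_name)()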
[0008] The edit operation may be performed in the graphical program in response to the multi-touch input. In other words, the edit operation specified by the multi-touch input may be performed in or on the graphical program, thereby generating an edited graphical program. In some embodiments, an indication of the multi-touch input may be displayed in the graphical program before or as the edit operation is performed. For example, each touchpoint may be indicated on the screen by an icon, e.g., a dot, whose size, color, or style may be adjustable. Additionally, in some embodiments, additional graphical indicators related to the multi-touch input may be displayed. For example, in one embodiment, when the multiple touchpoints are first activated, e.g., prior to any movement, or possibly as the movement occurs, an indication of the associated edit operation may be displayed, e.g., arrows indicating movement options for moving the touchpoints. In one illustrative embodiment, in a multi-touch pinching or reverse pinching input, once the touchpoints are active, but prior to any movement, radial double headed arrows may be displayed at each touchpoint, indicating that the touchpoints may be moved inwardly or outwardly to contract or expand an element or other portion of the program. Similarly, double headed arrows perpendicular to the radials may indicate a rotational option or effect. In other words, such indicators may indicate movement options and/or edit effects resulting from such movements. The indicators may be displayed in any number of ways, e.g., as dashed lines, with or without arrow heads, animation, etc., as desired. [0009] The edited graphical program may then be displayed on the display device. Said another way, the result of the edit operation may be indicated in the displayed graphical program. [0010] In various embodiments, the multi-touch input may include any of various multi-touch operations, and the specified edit operation may be or include any of various graphical program edit operations. Below are described various exemplary multi-point inputs and graphical program edit operations, although it should be noted that the multi-point inputs and edit operations presented are exemplary only, and are not intended to limit the multi-point inputs and edit operations to any particular set of inputs and operations. Moreover, it should be further noted that any of the described multi-point inputs and edit operations may be used in any of various combinations as desired, and further, that any other multi-point inputs or edit operations are also contemplated. In other words, embodiments of the invention may include any of various types of multi-touch inputs (including sequences of such inputs) and associated graphical program edit operations. 
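Classifying a two-touchpoint drag as a pinch or reverse pinch reduces to comparing the distance between the touchpoints at the start and end of the gesture, as in this minimal sketch (the threshold value and the coordinate representation are assumptions):

    import math

    def classify_two_touch_drag(start_points, end_points, threshold=10.0):
        # start_points and end_points are each a pair of (x, y) touchpoints.
        def dist(pair):
            (x1, y1), (x2, y2) = pair
            return math.hypot(x1 - x2, y1 - y2)
        delta = dist(end_points) - dist(start_points)
        if delta > threshold:
            return "reverse_pinch"  # touchpoints moved apart, e.g., expand
        if delta < -threshold:
            return "pinch"          # touchpoints moved together, e.g., collapse
        return "none"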
[0011] In some embodiments, the multi-touch input may be context sensitive, where the edit operation is based at least partially on a target graphical program element or region to which the multi-touch input is applied. In other words, the edit operation invoked by the multi-touch input may depend on the particular element(s) of the graphical program to which the input is applied, including blank space in the program. Thus, for example, tapping two graphical program elements simultaneously may invoke a wiring operation to connect the two elements, whereas tapping a single graphical program element may simply select that element, e.g., for a subsequent operation. Further, tapping a graphical program element that is a sub-program node (that represents a sub-program, called a sub-VI in LabVIEW), may cause the sub-program represented by this element to "open up" or be displayed. In this manner, a given multi-touch input may invoke any of a plurality of edit operations, depending on the target of the input. [0012] Thus, various embodiments of the systems and methods disclosed herein may provide for multi-touch editing of graphical programs. Brief Description of the Drawings [0013] A better understanding of the present invention can be obtained when the following detailed description of the preferred embodiment is considered in conjunction with the following drawings, in which: [0014] Figure 1A illustrates a computer system configured to execute a graphical program according to an embodiment of the present invention; [0015] Figure 1B illustrates a network system comprising two or more computer systems that may implement an embodiment of the present invention; [0016] Figure 2A illustrates an instrumentation control system according to one embodiment of the invention; [0017] Figure 2B illustrates an industrial automation system according to one embodiment of the invention; [0018] Figure 3A is a high level block diagram of an exemplary system which may execute or utilize graphical programs; [0019] Figure 3B illustrates an exemplary system which may perform control and/or simulation functions utilizing graphical programs; [0020] Figure 4 is an exemplary block diagram of the computer systems of Figures 1A, 1B, 2A and 2B and 3B; [0021] Figure 5 is a flowchart diagram illustrating one embodiment of a method for editing a graphical program using multi-touch input; [0022] Figure 6 illustrates an exemplary graphical program, according to one embodiment; [0023] Figures 7A-7G illustrate various exemplary multi-touch inputs, according to one embodiment; and [0024] Figures 8A-11B illustrate exemplary pairs of graphical programs before/after respective multi-touch invoked edit operations have been performed, according to one embodiment. [0025] While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims. Detailed Description of the Invention Incorporation by Reference: [0026] The following references are hereby incorporated by reference in their entirety as though fully and completely set forth herein: [0027] U.S. Patent No. 
4,914,568 titled "Graphical System for Modeling a Process and Associated Method," issued on April 3, 1990.

[0028] U.S. Patent No. 5,481,741 titled "Method and Apparatus for Providing Attribute Nodes in a Graphical Data Flow Environment".

[0029] U.S. Patent No. 6,173,438 titled "Embedded Graphical Programming System" filed August 18, 1997.

[0030] U.S. Patent No. 6,219,628 titled "System and Method for Configuring an Instrument to Perform Measurement Functions Utilizing Conversion of Graphical Programs into Hardware Implementations," filed August 18, 1997.

[0031] U.S. Patent Application Publication No. 20010020291 (Serial No. 09/745,023) titled "System and Method for Programmatically Generating a Graphical Program in Response to Program Information," filed December 20, 2000.

[0032] U.S. Patent Application Serial No. 12/572,455, titled "Editing a Graphical Data Flow Program in a Browser," filed October 2, 2009.

Terms

[0033] The following is a glossary of terms used in the present application:

[0034] Memory Medium - Any of various types of memory devices or storage devices. The term "memory medium" is intended to include an installation medium, e.g., a CD-ROM, floppy disks, or tape device; a computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; or a non-volatile memory such as a magnetic media, e.g., a hard drive, or optical storage. The memory medium may comprise other types of memory as well, or combinations thereof. In addition, the memory medium may be located in a first computer in which the programs are executed, and/or may be located in a second different computer which connects to the first computer over a network, such as the Internet. In the latter instance, the second computer may provide program instructions to the first computer for execution. The term "memory medium" may include two or more memory mediums which may reside in different locations, e.g., in different computers that are connected over a network.

[0035] Carrier Medium - a memory medium as described above, as well as a physical transmission medium, such as a bus, network, and/or other physical transmission medium that conveys signals such as electrical, electromagnetic, or digital signals.

[0036] Programmable Hardware Element - includes various hardware devices comprising multiple programmable function blocks connected via a programmable interconnect. Examples include FPGAs (Field Programmable Gate Arrays), PLDs (Programmable Logic Devices), FPOAs (Field Programmable Object Arrays), and CPLDs (Complex PLDs). The programmable function blocks may range from fine grained (combinatorial logic or look up tables) to coarse grained (arithmetic logic units or processor cores). A programmable hardware element may also be referred to as "reconfigurable logic".

[0037] Program - the term "program" is intended to have the full breadth of its ordinary meaning. The term "program" includes 1) a software program which may be stored in a memory and is executable by a processor or 2) a hardware configuration program useable for configuring a programmable hardware element.

[0038] Software Program - the term "software program" is intended to have the full breadth of its ordinary meaning, and includes any type of program instructions, code, script and/or data, or combinations thereof, that may be stored in a memory medium and executed by a processor.
Exemplary software programs include programs written in text-based programming languages, such as C, C++, PASCAL, FORTRAN, COBOL, JAVA, assembly language, etc.; graphical programs (programs written in graphical programming languages); assembly language programs; programs that have been compiled to machine language; scripts; and other types of executable software. A software program may comprise two or more software programs that interoperate in some manner. Note that various embodiments described herein may be implemented by a computer or software program. A software program may be stored as program instructions on a memory medium.

[0039] Hardware Configuration Program - a program, e.g., a netlist or bit file, that can be used to program or configure a programmable hardware element.

[0040] Graphical Program - A program comprising a plurality of interconnected nodes or icons, wherein the plurality of interconnected nodes or icons visually indicate functionality of the program. The interconnected nodes or icons are graphical source code for the program. Graphical function nodes may also be referred to as blocks.

[0041] The following provides examples of various aspects of graphical programs. The following examples and discussion are not intended to limit the above definition of graphical program, but rather provide examples of what the term "graphical program" encompasses:

[0042] The nodes in a graphical program may be connected in one or more of a data flow, control flow, and/or execution flow format. The nodes may also be connected in a "signal flow" format, which is a subset of data flow.

[0043] Exemplary graphical program development environments which may be used to create graphical programs include LabVIEW®, DASYLab™, DIAdem™ and MATRIXx/SystemBuild™ from National Instruments, Simulink® from The MathWorks, VEE™ from Agilent, WiT™ from Coreco, Vision Program Manager™ from PPT Vision, SoftWIRE™ from Measurement Computing, Sanscript™ from Northwoods Software, Khoros™ from Khoral Research, SnapMaster™ from HEM Data, VisSim™ from Visual Solutions, ObjectBench™ by SES (Scientific and Engineering Software), and VisiDAQ™ from Advantech, among others.

[0044] The term "graphical program" includes models or block diagrams created in graphical modeling environments, wherein the model or block diagram comprises interconnected blocks (i.e., nodes) or icons that visually indicate operation of the model or block diagram; exemplary graphical modeling environments include Simulink®, SystemBuild™, VisSim™, Hypersignal Block Diagram™, etc.

[0045] A graphical program may be represented in the memory of the computer system as data structures and/or program instructions. The graphical program, e.g., these data structures and/or program instructions, may be compiled or interpreted to produce machine language that accomplishes the desired method or process as shown in the graphical program.

[0046] Input data to a graphical program may be received from any of various sources, such as from a device, unit under test, a process being measured or controlled, another computer program, a database, or from a file. Also, a user may input data to a graphical program or virtual instrument using a graphical user interface, e.g., a front panel.

[0047] A graphical program may optionally have a GUI associated with the graphical program. In this case, the plurality of interconnected blocks or nodes are often referred to as the block diagram portion of the graphical program.
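As a purely illustrative rendering of paragraph [0045] above — a graphical program held in memory as data structures — the following Python sketch models nodes, wires, and a program. The class and field names are assumptions for illustration, not the internal representation of any particular development environment:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str                                     # e.g., "Add"
    inputs: list = field(default_factory=list)    # input terminal names
    outputs: list = field(default_factory=list)   # output terminal names

@dataclass
class Wire:
    source: tuple  # (Node, output terminal name) producing the data
    sink: tuple    # (Node, input terminal name) consuming the data

@dataclass
class GraphicalProgram:
    nodes: list = field(default_factory=list)
    wires: list = field(default_factory=list)

# A two-node diagram: an Add node wired to a numeric indicator terminal.
add = Node("Add", inputs=["x", "y"], outputs=["sum"])
indicator = Node("Numeric Indicator", inputs=["value"])
program = GraphicalProgram(nodes=[add, indicator],
                           wires=[Wire((add, "sum"), (indicator, "value"))])
print(len(program.nodes), len(program.wires))  # 2 1
```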
[0048] Node - In the context of a graphical program, an element that may be included in a graphical program. The graphical program nodes (or simply nodes) in a graphical program may also be referred to as blocks. A node may have an associated icon that represents the node in the graphical program, as well as underlying code and/or data that implements functionality of the node. Exemplary nodes (or blocks) include function nodes, sub-program nodes, terminal nodes, structure nodes, etc. Nodes may be connected together in a graphical program by connection icons or wires.

[0049] Data Flow Program - A Software Program in which the program architecture is that of a directed graph specifying the flow of data through the program, and thus functions execute whenever the necessary input data are available. Data flow programs can be contrasted with procedural programs, which specify an execution flow of computations to be performed. As used herein, "data flow" or "data flow programs" refer to "dynamically-scheduled data flow" and/or "statically-defined data flow".

[0050] Graphical Data Flow Program (or Graphical Data Flow Diagram) - A Graphical Program which is also a Data Flow Program. A Graphical Data Flow Program comprises a plurality of interconnected nodes (blocks), wherein at least a subset of the connections among the nodes visually indicate that data produced by one node is used by another node. A LabVIEW VI is one example of a graphical data flow program. A Simulink block diagram is another example of a graphical data flow program.

[0051] Graphical User Interface - this term is intended to have the full breadth of its ordinary meaning. The term "Graphical User Interface" is often abbreviated to "GUI". A GUI may comprise only one or more input GUI elements, only one or more output GUI elements, or both input and output GUI elements.

[0052] The following provides examples of various aspects of GUIs. The following examples and discussion are not intended to limit the ordinary meaning of GUI, but rather provide examples of what the term "graphical user interface" encompasses:

[0053] A GUI may comprise a single window having one or more GUI Elements, or may comprise a plurality of individual GUI Elements (or individual windows each having one or more GUI Elements), wherein the individual GUI Elements or windows may optionally be tiled together.

[0054] A GUI may be associated with a graphical program. In this instance, various mechanisms may be used to connect GUI Elements in the GUI with nodes in the graphical program. For example, when Input Controls and Output Indicators are created in the GUI, corresponding nodes (e.g., terminals) may be automatically created in the graphical program or block diagram. Alternatively, the user can place terminal nodes in the block diagram which may cause the display of corresponding GUI Elements (front panel objects) in the GUI, either at edit time or later at run time. As another example, the GUI may comprise GUI Elements embedded in the block diagram portion of the graphical program.

[0055] Front Panel - A Graphical User Interface that includes input controls and output indicators, and which enables a user to interactively control or manipulate the input being provided to a program, and view output of the program, while the program is executing.

[0056] A front panel is a type of GUI. A front panel may be associated with a graphical program as described above.
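The data flow firing rule of paragraph [0049] — a function executes whenever its necessary input data are available — can be made concrete with a toy scheduler. This is a hedged sketch under simplifying assumptions (one output per node, an acyclic graph), not the execution engine of any of the products named above:

```python
def run_dataflow(nodes, wires, initial_values):
    """Fire each node as soon as all of its inputs hold data.
    `nodes` maps a node name to (list of input names, function);
    `wires` maps (node name, "out") to a list of (node name, input name)
    sinks; `initial_values` seeds (node name, input name) -> value."""
    values = dict(initial_values)
    fired = set()
    progress = True
    while progress:
        progress = False
        for name, (inputs, fn) in nodes.items():
            ready = all((name, i) in values for i in inputs)
            if name not in fired and ready:
                result = fn(*[values[(name, i)] for i in inputs])
                for sink in wires.get((name, "out"), []):
                    values[sink] = result  # propagate along the wire
                fired.add(name)
                progress = True

nodes = {
    "add": (["a", "b"], lambda a, b: a + b),
    "indicator": (["value"], print),
}
wires = {("add", "out"): [("indicator", "value")]}
run_dataflow(nodes, wires, {("add", "a"): 2, ("add", "b"): 3})  # prints 5
```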
[0057] In an instrumentation application, the front panel can be analogized to the front panel of an instrument. In an industrial automation application the front panel can be analogized to the MMI (Man Machine Interface) of a device. The user may adjust the controls on the front panel to affect the input and view the output on the respective indicators.

[0058] Graphical User Interface Element - an element of a graphical user interface, such as for providing input or displaying output. Exemplary graphical user interface elements comprise input controls and output indicators.

[0059] Input Control - a graphical user interface element for providing user input to a program. An input control displays the value input by the user and is capable of being manipulated at the discretion of the user. Exemplary input controls comprise dials, knobs, sliders, input text boxes, etc.

[0060] Output Indicator - a graphical user interface element for displaying output from a program. Exemplary output indicators include charts, graphs, gauges, output text boxes, numeric displays, etc. An output indicator is sometimes referred to as an "output control".

[0061] Computer System - any of various types of computing or processing systems, including a personal computer system (PC), mainframe computer system, workstation, network appliance, Internet appliance, personal digital assistant (PDA), television system, grid computing system, or other device or combinations of devices. In general, the term "computer system" can be broadly defined to encompass any device (or combination of devices) having at least one processor that executes instructions from a memory medium.

[0062] Measurement Device - includes instruments, data acquisition devices, smart sensors, and any of various types of devices that are configured to acquire and/or store data. A measurement device may also optionally be further configured to analyze or process the acquired or stored data. Examples of a measurement device include an instrument, such as a traditional stand-alone "box" instrument, a computer-based instrument (instrument on a card) or external instrument, a data acquisition card, a device external to a computer that operates similarly to a data acquisition card, a smart sensor, one or more DAQ or measurement cards or modules in a chassis, an image acquisition device, such as an image acquisition (or machine vision) card (also called a video capture board) or smart camera, a motion control device, a robot having machine vision, and other similar types of devices. Exemplary "stand-alone" instruments include oscilloscopes, multimeters, signal analyzers, arbitrary waveform generators, spectroscopes, and similar measurement, test, or automation instruments.

[0063] A measurement device may be further configured to perform control functions, e.g., in response to analysis of the acquired or stored data. For example, the measurement device may send a control signal to an external system, such as a motion control system or to a sensor, in response to particular data. A measurement device may also be configured to perform automation functions, i.e., may receive and analyze data, and issue automation control signals in response.

[0064] Subset - in a set having N elements, the term "subset" comprises any combination of one or more of the elements, up to and including the full set of N elements.
For example, a subset of a plurality of icons may be any one icon of the plurality of the icons, any combination of one or more of the icons, or all of the icons in the plurality of icons. Thus, a subset of an entity may refer to any single element of the entity as well as any portion up to and including the entirety of the entity.

Figure 1A - Computer System

[0065] Figure 1A illustrates a computer system 82 configured to implement embodiments of the invention. One embodiment of a method for editing a graphical program using multi-touch operations is described below.

[0066] As shown in Figure 1A, the computer system 82 may include a display device configured to display the graphical program as the graphical program is created and/or executed. For example, the display device may display a graphical user interface (GUI) of a graphical programming development environment application used to create, edit, and/or execute such graphical programs. The graphical program development environment may be configured to utilize or support multi-touch edit (and possibly display) operations for developing graphical programs. The display device may also be configured to display a graphical user interface or front panel of the graphical program during execution of the graphical program. The graphical user interface(s) may comprise any type of graphical user interface, e.g., depending on the computing platform.

[0067] The computer system 82 may include at least one memory medium on which one or more computer programs or software components according to one embodiment of the present invention may be stored. For example, the memory medium may store one or more programs, e.g., graphical programs, which are executable to perform the methods described herein. Additionally, the memory medium may store a graphical programming development environment application used to create and/or execute graphical programs. The memory medium may also store operating system software, as well as other software for operation of the computer system. Various embodiments further include receiving or storing instructions and/or data implemented in accordance with the foregoing description upon a carrier medium.

Figure 1B - Computer Network

[0068] Figure 1B illustrates a system including a first computer system 82 that is coupled to a second computer system 90. The computer system 82 may be coupled via a network 84 (or a computer bus) to the second computer system 90. The computer systems 82 and 90 may each be any of various types, as desired. The network 84 can also be any of various types, including a LAN (local area network), WAN (wide area network), the Internet, or an Intranet, among others. In some embodiments, the graphical program development environment may be configured to operate in a distributed manner. For example, the development environment may be hosted or executed on the second computer system 90, while the GUI for the development environment may be displayed on the computer system 82, and the user may create and edit a graphical program over the network. In another embodiment, the development environment may be implemented as a browser-based application. For example, the user may use a browser program executing on the computer system 82 to access and download the development environment and/or graphical program from the second computer system 90 to create and/or edit the graphical program, where the development environment may execute within the user's browser.
Further details regarding such browser-based editing of graphical programs are provided in U.S. Patent Application Serial No. 12/572,455, titled "Editing a Graphical Data Flow Program in a Browser," filed October 2, 2009, which was incorporated by reference above.

[0069] The computer systems 82 and 90 may execute a graphical program in a distributed fashion. For example, computer 82 may execute a first portion of the block diagram of a graphical program and computer system 90 may execute a second portion of the block diagram of the graphical program. As another example, computer 82 may display the graphical user interface of a graphical program and computer system 90 may execute the block diagram of the graphical program.

[0070] In one embodiment, the graphical user interface of the graphical program may be displayed on a display device of the computer system 82, and the block diagram may execute on a device coupled to the computer system 82. The device may include a programmable hardware element and/or may include a processor and memory medium which may execute a real time operating system. In one embodiment, the graphical program may be downloaded and executed on the device. For example, an application development environment with which the graphical program is associated may provide support for downloading a graphical program for execution on the device in a real time system.

Exemplary Systems

[0071] Embodiments of the present invention may be involved with performing test and/or measurement functions; controlling and/or modeling instrumentation or industrial automation hardware; modeling and simulation functions, e.g., modeling or simulating a device or product being developed or tested, etc. Exemplary test applications where the graphical program may be used include hardware-in-the-loop testing and rapid control prototyping, among others.

[0072] However, it is noted that embodiments of the present invention can be used for a plethora of applications and are not limited to the above applications. In other words, applications discussed in the present description are exemplary only, and embodiments of the present invention may be used in any of various types of systems. Thus, embodiments of the system and method of the present invention are configured to be used in any of various types of applications, including the control of other types of devices such as multimedia devices, video devices, audio devices, telephony devices, Internet devices, etc., as well as general purpose software applications such as word processing, spreadsheets, network control, network monitoring, financial applications, games, etc.

[0073] Figure 2A illustrates an exemplary instrumentation control system 100 which may implement embodiments of the invention. The system 100 comprises a host computer 82 which couples to one or more instruments. The host computer 82 may comprise a CPU, a display screen, memory, and one or more input devices such as a mouse or keyboard as shown. The computer 82 may operate with the one or more instruments to analyze, measure or control a unit under test (UUT) or process 150.
[0074] The one or more instruments may include a GPIB instrument 112 and associated GPIB interface card 122, a data acquisition board 114 inserted into or otherwise coupled with chassis 124 with associated signal conditioning circuitry 126, a VXI instrument 116, a PXI instrument 118, a video device or camera 132 and associated image acquisition (or machine vision) card 134, a motion control device 136 and associated motion control interface card 138, and/or one or more computer based instrument cards 142, among other types of devices. The computer system may couple to and operate with one or more of these instruments. The instruments may be coupled to the unit under test (UUT) or process 150, or may be coupled to receive field signals, typically generated by transducers. The system 100 may be used in a data acquisition and control application, in a test and measurement application, an image processing or machine vision application, a process control application, a man-machine interface application, a simulation application, or a hardware-in-the-loop validation application, among others.

[0075] Figure 2B illustrates an exemplary industrial automation system 160 which may implement embodiments of the invention. The industrial automation system 160 is similar to the instrumentation or test and measurement system 100 shown in Figure 2A. Elements which are similar or identical to elements in Figure 2A have the same reference numerals for convenience. The system 160 may comprise a computer 82 which couples to one or more devices or instruments. The computer 82 may comprise a CPU, a display screen, memory, and one or more input devices such as a mouse or keyboard as shown. The computer 82 may operate with the one or more devices to perform an automation function with respect to a process or device 150, such as MMI (Man Machine Interface), SCADA (Supervisory Control and Data Acquisition), portable or distributed data acquisition, process control, advanced analysis, or other control, among others.

[0076] The one or more devices may include a data acquisition board 114 inserted into or otherwise coupled with chassis 124 with associated signal conditioning circuitry 126, a PXI instrument 118, a video device 132 and associated image acquisition card 134, a motion control device 136 and associated motion control interface card 138, a fieldbus device 170 and associated fieldbus interface card 172, a PLC (Programmable Logic Controller) 176, a serial instrument 182 and associated serial interface card 184, or a distributed data acquisition system, such as the FieldPoint system available from National Instruments, among other types of devices.

[0077] Figure 3A is a high level block diagram of an exemplary system which may execute or utilize graphical programs. Figure 3A illustrates a general high-level block diagram of a generic control and/or simulation system which comprises a controller 92 and a plant 94. The controller 92 represents a control system/algorithm the user may be trying to develop. The plant 94 represents the system the user may be trying to control. For example, if the user is designing an ECU for a car, the controller 92 is the ECU and the plant 94 is the car's engine (and possibly other components such as transmission, brakes, and so on). As shown, a user may create a graphical program that specifies or implements the functionality of one or both of the controller 92 and the plant 94.
For example, a control engineer may use a modeling and simulation tool to create a model (graphical program) of the plant 94 and/or to create the algorithm (graphical program) for the controller 92.

[0078] Figure 3B illustrates an exemplary system which may perform control and/or simulation functions. As shown, the controller 92 may be implemented by a computer system 82 or other device (e.g., including a processor and memory medium and/or including a programmable hardware element) that executes or implements a graphical program. In a similar manner, the plant 94 may be implemented by a computer system or other device 144 (e.g., including a processor and memory medium and/or including a programmable hardware element) that executes or implements a graphical program, or may be implemented in or as a real physical system, e.g., a car engine.

[0079] In one embodiment of the invention, one or more graphical programs may be created which are used in performing rapid control prototyping. Rapid Control Prototyping (RCP) generally refers to the process by which a user develops a control algorithm and quickly executes that algorithm on a target controller connected to a real system. The user may develop the control algorithm using a graphical program, and the graphical program may execute on the controller 92, e.g., on a computer system or other device. The computer system 82 may be a platform that supports real time execution, e.g., a device including a processor that executes a real time operating system (RTOS), or a device including a programmable hardware element.

[0080] In one embodiment of the invention, one or more graphical programs may be created which are used in performing Hardware in the Loop (HIL) simulation. Hardware in the Loop (HIL) refers to the execution of the plant model 94 in real time to test operation of a real controller 92. For example, once the controller 92 has been designed, it may be expensive and complicated to actually test the controller 92 thoroughly in a real plant, e.g., a real car. Thus, the plant model (implemented by a graphical program) is executed in real time to make the real controller 92 "believe" or operate as if it is connected to a real plant, e.g., a real engine.

[0081] In the embodiments of Figures 2A, 2B, and 3B above, one or more of the various devices may couple to each other over a network, such as the Internet. In one embodiment, the user operates to select a target device from a plurality of possible target devices for programming or configuration using a graphical program. Thus the user may create a graphical program on a computer and use (execute) the graphical program on that computer or deploy the graphical program to a target device (for remote execution on the target device) that is remotely located from the computer and coupled to the computer through a network.

[0082] Graphical software programs which perform data acquisition, analysis and/or presentation, e.g., for measurement, instrumentation control, industrial automation, modeling, or simulation, such as in the applications shown in Figures 2A and 2B, may be referred to as virtual instruments.

Figure 4 - Computer System Block Diagram

[0083] Figure 4 is a block diagram representing one embodiment of the computer system 82 and/or 90 illustrated in Figures 1A and 1B, or computer system 82 shown in Figures 2A or 2B. It is noted that any type of computer system configuration or architecture can be used as desired, and Figure 4 illustrates a representative PC embodiment.
It is also noted that the computer system may be a general purpose computer system, a computer implemented on a card installed in a chassis, or other types of embodiments. Elements of a computer not necessary to understand the present description have been omitted for simplicity.

[0084] The computer may include at least one central processing unit or CPU (processor) 160 which is coupled to a processor or host bus 162. The CPU 160 may be any of various types, including an x86 processor, e.g., a Pentium class, a PowerPC processor, a CPU from the SPARC family of RISC processors, as well as others. A memory medium, typically comprising RAM and referred to as main memory, 166 is coupled to the host bus 162 by means of memory controller 164. The main memory 166 may store the graphical program development environment configured to utilize or support multi-touch edit (and possibly display) operations, and graphical programs developed thereby. The main memory may also store operating system software, as well as other software for operation of the computer system.

[0085] The host bus 162 may be coupled to an expansion or input/output bus 170 by means of a bus controller 168 or bus bridge logic. The expansion bus 170 may be the PCI (Peripheral Component Interconnect) expansion bus, although other bus types can be used. The expansion bus 170 includes slots for various devices such as described above. The computer 82 further comprises a video display subsystem 180 and hard drive 182 coupled to the expansion bus 170. The computer 82 may also comprise a GPIB card 122 coupled to a GPIB bus 112, and/or an MXI device 186 coupled to a VXI chassis 116.

[0086] As shown, a device 190 may also be connected to the computer. The device 190 may include a processor and memory which may execute a real time operating system. The device 190 may also or instead comprise a programmable hardware element. The computer system may be configured to deploy a graphical program to the device 190 for execution of the graphical program on the device 190. The deployed graphical program may take the form of graphical program instructions or data structures that directly represent the graphical program. Alternatively, the deployed graphical program may take the form of text code (e.g., C code) generated from the graphical program. As another example, the deployed graphical program may take the form of compiled code generated from either the graphical program or from text code that in turn was generated from the graphical program.

Figure 5 - Flowchart of a Method for Editing a Graphical Program

[0087] Figure 5 illustrates a method for editing a graphical program using multi-touch operations. The method shown in Figure 5 may be used in conjunction with any of the computer systems or devices shown in the above Figures, among other devices. In various embodiments, some of the method elements shown may be performed concurrently, in a different order than shown, or may be omitted. Additional method elements may also be performed as desired. As shown, this method may operate as follows.

[0088] First, in 502 a graphical program may be displayed on a display device, e.g., of the computer system 82 (or on a different computer system). The graphical program may be created or assembled by the user arranging on a display a plurality of nodes or icons and then interconnecting the nodes to create the graphical program.
In response to the user assembling the graphical program, data structures may be created and stored which represent the graphical program. The nodes may be interconnected in one or more of a data flow, control flow, or execution flow format. The graphical program may thus comprise a plurality of interconnected nodes or icons which visually indicate the functionality of the program. As noted above, the graphical program may comprise a block diagram and may also include a user interface portion or front panel portion. Where the graphical program includes a user interface portion, the user may optionally assemble the user interface on the display. As one example, the user may use the LabVIEW graphical programming development environment to create the graphical program. The graphical programming development environment may be configured to support multi-touch editing operations, as will be described in more detail below.

[0089] In an alternate embodiment, the graphical program may be created in 502 by the user creating or specifying a prototype, followed by automatic or programmatic creation of the graphical program from the prototype. This functionality is described in U.S. Patent Application Serial No. 09/587,682 titled "System and Method for Automatically Generating a Graphical Program to Perform an Image Processing Algorithm", which is hereby incorporated by reference in its entirety as though fully and completely set forth herein. The graphical program may be created in other manners, either by the user or programmatically, as desired. The graphical program may implement a measurement function that is desired to be performed by the instrument.

[0090] Figure 6 illustrates an exemplary graphical program 600, according to one embodiment. As may be seen, this example graphical program includes various interconnected graphical program nodes, including a node or structure 614 that includes a frame containing graphical program elements 604 that are to be executed per the node's configuration. For example, in one embodiment, the structure 614 may be a loop node, e.g., a graphical FOR loop or graphical WHILE loop, that specifies that the contained graphical code is to be executed in an iterative manner. Other examples of nodes or structures with frames include a graphical case statement, a graphical sequence structure, and a graphical conditional structure, among others. The exemplary graphical program of Figure 6, and variants thereof, will be used to illustrate various exemplary multi-touch inputs and corresponding (exemplary) edit operations, described below with reference to Figures 8A-11B.

[0091] In 504, multi-touch input may be received to a multi-touch interface, wherein the multi-touch input specifies an edit operation in the graphical program. As used herein, "multi-touch input" refers to user input to a multi-touch interface where there are multiple touchpoints active at the same time. In other words, the user may cause, utilize, or employ multiple simultaneous points of contact on the multi-touch interface. Note that the multi-touch interface may be a touch pad or a touch screen, as desired. In other words, the multi-touch interface may be or include a computer touch-pad and/or a computer touch-screen. Exemplary multi-touch input and edit operations are provided below.

[0092] In 506, the edit operation may be performed in the graphical program in response to the multi-touch input.
In other words, the edit operation specified by the multi-touch input of 504 may be performed in or on the graphical program, thereby generating an edited graphical program.

[0093] In some embodiments, an indication of the multi-touch input may be displayed in the graphical program before or as the edit operation is performed. For example, each touchpoint may be indicated on the screen, e.g., by an icon, e.g., a dot, whose size, color, or style may be adjustable. Additionally, in some embodiments, additional graphical indicators related to the multi-touch input may be displayed. For example, in one embodiment, when the multiple touchpoints are first activated, e.g., prior to any movement, or possibly as the movement occurs, an indication of the associated edit operation may be displayed, e.g., arrows indicating movement options for moving the touchpoints. For example, in one illustrative embodiment, in a multi-touch pinching or reverse pinching input, once the touchpoints are active, but prior to any movement, radial double-headed arrows may be displayed at each touchpoint, indicating that the touchpoints may be moved inwardly or outwardly to contract or expand an element or other portion of the program. Similarly, double-headed arrows perpendicular to the radials may indicate a rotational option or effect. In other words, such indicators may indicate movement options and/or edit effects resulting from such movements. The indicators may be displayed in any number of ways, e.g., as dashed lines, with or without arrow heads, animation, etc., as desired.

[0094] In 508, the edited graphical program may be displayed on the display device. Said another way, the result of the edit operation may be indicated in the displayed graphical program.

[0095] In various embodiments, the multi-touch input may include any of various multi-touch operations, and the specified edit operation may be or include any of various graphical program edit operations.

Figures 7A-7G - Exemplary Multi-touch Input

[0096] Figures 7A-7G illustrate various exemplary multi-touch inputs, although it should be noted that the inputs shown are meant to be illustrative only, and are not intended to limit the multi-touch inputs to any particular set. Note that in these examples, and in the example figures described below, touchpoints are indicated by shaded circles, each representing an active point on a touch surface, movements are indicated by arrows, and double tapping is indicated by concentric circles.

[0097] For example, as shown, Figure 7A illustrates a two-point pinching motion, whereas Figure 7B illustrates a two-point reverse pinching motion. Figures 7C and 7D illustrate three-point pinching and reverse pinching, respectively. Figure 7E illustrates a two-point swipe, where, for example, the user touches the touch surface at two points (simultaneously) and makes a sideways movement or gesture. Figures 7F and 7G illustrate two-point tapping and two-point double-tapping, respectively. As another example, Figure 7H illustrates a multi-touch input comprising a 2-point press followed by a 2-point swipe, where the press is indicated with an "X" superimposed on the touch points. In other words, an "X" may indicate a "press", as opposed to a "tap". Other multi-touch inputs may be illustrated in a similar manner. For example, a two-point triple-tap may be illustrated via three concentric circles per touch point, or arrows may indicate any of various directions, among others.
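The two-point gestures of Figures 7A, 7B, 7E, and 7F can be distinguished from the start and end positions of the two touchpoints. The following Python sketch is one plausible classification under illustrative thresholds; it is offered as an assumption-laden example, not as the classification used by any particular system:

```python
import math

def classify_two_point(start, end, move_threshold=10.0):
    """Classify a two-touchpoint gesture given ((x, y), (x, y)) start and
    end positions: 'tap' (little movement), 'pinch' (points converge),
    'reverse pinch' (points diverge), or 'swipe' (points translate
    together). The pixel threshold is an illustrative tuning choice."""
    (s0, s1), (e0, e1) = start, end
    moved = max(math.dist(s0, e0), math.dist(s1, e1))
    if moved < move_threshold:
        return "tap"
    spread_change = math.dist(e0, e1) - math.dist(s0, s1)
    if spread_change > move_threshold:
        return "reverse pinch"
    if spread_change < -move_threshold:
        return "pinch"
    return "swipe"

print(classify_two_point(((0, 0), (100, 0)), ((40, 0), (60, 0))))   # pinch
print(classify_two_point(((0, 0), (100, 0)), ((50, 5), (150, 5))))  # swipe
```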
[0098] Below are described various exemplary multi-point inputs and graphical program edit operations, although it should be noted that the multi-point inputs and edit operations presented are exemplary only, and are not intended to limit the multi-point inputs and edit operations to any particular set of inputs and operations. Moreover, it should be further noted that any of the described multi-point inputs and edit operations may be used in any of various combinations as desired, and further, that any other multi-point inputs or edit operations are also contemplated. In other words, any multi-touch inputs (including sequences of such inputs) and any associated graphical program edit operations are considered to be within the scope of the invention described herein.

[0099] In some embodiments, the multi-touch input may specify or manipulate a graphical program element in the graphical program.

[00100] For example, the multi-touch input may be or include a pinching or reverse pinching motion applied to a graphical program element, and the edit operation may be or include resizing the graphical program element. For example, in embodiments where the graphical program element includes a frame for containing one or more other graphical program elements, e.g., a graphical FOR loop, a graphical case statement, a graphical sequence structure, a graphical conditional structure, and so forth, as represented by the element 614 in the graphical program of Figure 6, the resizing of the graphical program element may include resizing the frame, e.g., to shrink or expand (respectively) the size of the frame to more effectively or efficiently contain the graphical program code contained therein. Figure 8A illustrates application of a reverse pinch multi-touch input 802 applied to the node 614, according to one embodiment, and Figure 8B illustrates an exemplary result of the corresponding edit operation 804, where the frame of the element 614 is shown expanded, e.g., to accommodate further nodes to be contained in the frame.

[00101] In one embodiment, the pinching or reverse pinching motion may have an orientation that specifies the direction of the resizing operation. For example, in resizing an element, such as a loop structure that includes a rectangular frame, a horizontally oriented motion may resize the frame only in the horizontal direction, a vertically oriented motion may resize the frame only in the vertical direction, and a diagonally oriented motion may resize the frame in both directions, e.g., proportionally. Note that in some embodiments, the particular angle of a diagonal-like orientation may specify a corresponding ratio in the resizing of the frame, i.e., may specify resizing in dimensional proportions per the angle.

[00102] Generalizing the above, in some embodiments, other multi-touch inputs may be modified by or may be sensitive to the direction or angle of one or more vectors related to the input. For example, in one embodiment of a two-point swipe input (see, e.g., Figure 7E) to move or dismiss an element, the angle or direction of the swiping movement or "flick" (arrows) may specify the direction of movement, or even the operation performed on the element, e.g., flicking the element down may delete it from the program, whereas flicking the element upwards or sideways may move the element to a holding area or palette.
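The orientation-dependent resizing of paragraph [00101] can be sketched as a mapping from the pinch vector's angle to per-axis scale factors. The thresholds and the angle-proportional blend below are illustrative assumptions about one way to realize the described behavior:

```python
import math

def resize_factors(p_start, p_end, scale):
    """Map the orientation of a pinch/reverse-pinch stroke to (width,
    height) resize factors: a near-horizontal motion resizes width only,
    a near-vertical motion resizes height only, and a diagonal motion
    splits the scaling between both axes in proportion to its angle."""
    angle = math.degrees(math.atan2(abs(p_end[1] - p_start[1]),
                                    abs(p_end[0] - p_start[0])))
    if angle < 15.0:              # essentially horizontal
        return scale, 1.0
    if angle > 75.0:              # essentially vertical
        return 1.0, scale
    w = angle / 90.0              # diagonal: proportion per the angle
    return 1.0 + (scale - 1.0) * (1.0 - w), 1.0 + (scale - 1.0) * w

# A 45-degree reverse pinch grows the frame equally in both directions.
print(resize_factors((0, 0), (100, 100), scale=1.5))  # (1.25, 1.25)
```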
[00103] As another example, the multi-touch input may be or include two touchpoints applied respectively to two graphical program elements, and the edit operation may include wiring the two graphical program elements together. Thus, for example, the user may "touch" two graphical program nodes, e.g., with two fingers, a finger and thumb, etc., and the nodes may be automatically wired, i.e., connected for data flow.

[00104] Figure 9A illustrates an exemplary graphical program in which a two point multi-touch is applied to two graphical program elements 605 and 606 to invoke a connection between the two graphical program elements. Figure 9B illustrates the resulting edited graphical program, with new connection 904 shown between the two elements. In some embodiments, the wiring may be performed in response to an indication provided in addition to the initial "touch". For example, the connection may be made if the user remains touching the two elements for some duration, e.g., a second or more, or if the user makes a slight closing gesture, i.e., bringing the two touchpoints slightly closer, among others. In other words, the multi-touch input may involve additional aspects that complete or refine the specification of the edit operation.

[00105] Note that the above wiring operation is meant to be exemplary only, and that other multi-touch input may be used to accomplish such interconnection of graphical program elements. For example, in another embodiment, the multi-touch input may include double tapping two touchpoints applied respectively to two graphical program elements, and the edit operation may be or include wiring the two graphical program elements together. In other words, for example, the user may double tap on two graphical program nodes simultaneously, and the nodes may be automatically wired together in response.

[00106] In one embodiment, the multi-touch input may include two or more touchpoints applied respectively to two or more graphical program elements, and the edit operation may include selecting the two or more graphical program elements for a subsequent operation to be performed on the two or more graphical program elements. In other words, the multi-touch input may be used to select multiple graphical program elements at the same time, thus setting up for application of a subsequent operation to be applied to all or each of them, e.g., a move or "drag and drop" operation, deletion, etc. The selection of graphical program elements may be indicated visually in the displayed graphical program, e.g., by highlighting the selected elements, or via any other visual technique desired.

[00107] As another example of a selection process, in one embodiment the multi-touch input may include three or more touchpoints defining a convex hull around one or more graphical program elements, and the edit operation may include selecting the one or more graphical program elements for a subsequent operation to be performed on the one or more graphical program elements. In other words, the multi-touch input may define a convex polygon, with each touchpoint defining a respective vertex, and any graphical program elements within may be selected (a sketch of this inclusion test appears after the subprogram discussion below).

[00108] Once an element (or elements) has been selected, multi-touch input may operate to manipulate the element(s). For example, the multi-touch input may be or include a rotation motion applied to one or more graphical program elements, and the resulting edit operation may include rotating the one or more graphical program elements.
Thus, for example, the user may "tap" on one or more elements, or select one or more elements via the "convex hull" technique described above (or via any other means), then twist or rotate the touchpoints to cause a corresponding rotation of the element(s). In some embodiments, the rotation may be quantized, e.g., only specified values of rotation may be allowed, e.g., 90 degree orientations, among others.

[00109] In some embodiments, a graphical program node may represent another graphical program, e.g., a graphical subprogram, and multi-touch input may be used to expand or collapse the node to and from the graphical subprogram, e.g., to examine or edit the subprogram. In other words, the graphical program may include a graphical subprogram, where the graphical subprogram is represented by a graphical program node. Such a representative node may be referred to as a subVI. Multi-touch input may be used to switch back and forth between the node and its corresponding graphical subprogram, i.e., to expand the node to its corresponding subprogram, and to collapse the subprogram back to the node. Note that in some embodiments, the expansion may be in situ, i.e., the subprogram may be displayed in-place in the graphical program, i.e., may replace the node in the display of the graphical program, while in other embodiments, the display of the subprogram may be outside the graphical program, e.g., replacing the graphical program in the edit window, or in a different, e.g., newly spawned, edit window.

[00110] For example, in one exemplary embodiment, the multi-touch input may include tapping two or more touchpoints on a graphical program node that represents a graphical subprogram, and the edit operation may include expanding the graphical program node to the graphical subprogram. In a similar embodiment, the multi-touch input may include double tapping two or more touchpoints on a graphical program node that represents a graphical subprogram, and the edit operation may include expanding the graphical program node to the graphical subprogram. These techniques may also be used to collapse a graphical subprogram back to its corresponding or representative graphical program node, e.g., by multi-touch tapping or double tapping on the graphical subprogram, e.g., on the border or frame of the subprogram, e.g., by multi-touch tapping or double tapping on opposite corners of the subprogram, and so forth.

[00111] Figure 10A illustrates an exemplary graphical program in which graphical program element (node) 608 is a subprogram node (e.g., a subVI) to which a two-touch double tap multi-touch input 1002 is applied. Figure 10B illustrates the same graphical program, but where the graphical program element 608 has been expanded in situ to its corresponding block diagram 1004.

[00112] Alternatively, or additionally, in another exemplary embodiment, the multi-touch input may include a reverse pinching motion applied to a graphical program node that represents a graphical subprogram, and the edit operation may include expanding the graphical program node to the graphical subprogram. Conversely, the multi-touch input may include a pinching motion applied to a graphical subprogram, and the edit operation may include collapsing the graphical subprogram to its representative graphical program node.
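Returning to the convex-hull selection of paragraph [00107] (promised above), the following Python sketch tests which element positions fall inside the convex polygon whose vertices are the touchpoints. The half-plane test is standard computational geometry; the data shapes are illustrative assumptions:

```python
import math

def selected_by_hull(touchpoints, elements):
    """Return the names of elements whose (x, y) positions lie inside the
    convex polygon defined by three or more touchpoints. Vertices are
    ordered counter-clockwise by angle around the centroid; a point is
    inside if it is never to the right of any directed edge."""
    cx = sum(x for x, _ in touchpoints) / len(touchpoints)
    cy = sum(y for _, y in touchpoints) / len(touchpoints)
    verts = sorted(touchpoints,
                   key=lambda p: math.atan2(p[1] - cy, p[0] - cx))

    def inside(q):
        for i, (x0, y0) in enumerate(verts):
            x1, y1 = verts[(i + 1) % len(verts)]
            cross = (x1 - x0) * (q[1] - y0) - (y1 - y0) * (q[0] - x0)
            if cross < 0:  # q is to the right of edge (x0,y0)->(x1,y1)
                return False
        return True

    return [name for name, pos in elements.items() if inside(pos)]

touch = [(0, 0), (200, 0), (100, 150)]             # three touchpoints
nodes = {"Add": (100, 50), "Multiply": (300, 60)}  # element positions
print(selected_by_hull(touch, nodes))              # ['Add']
```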
[00113] In a further embodiment, the multi-touch input may include a multi-touch swipe applied to a graphical program node (that represents a graphical subprogram), and the edit operation may include expanding the graphical program node to the graphical subprogram. Conversely, the multi-touch input may include a multi-touch reverse swipe applied to a graphical subprogram, and the edit operation may include collapsing the graphical subprogram to the representative graphical program node.

[00114] In other embodiments, any other multi-touch input may be used to expand or collapse subprograms and their nodes, as desired, the above techniques being exemplary only.

[00115] In another exemplary embodiment, the multi-touch input may include a reverse pinching motion applied to a graphical program node, and the edit operation may include increasing the graphical program node in size with respect to other nodes in the graphical program. In other words, the edit operation may magnify the node (icon) in-place. This may be useful when the node icon is highly detailed, or when the display resolution is high, but the icon size is small. Conversely, the multi-touch input may include a pinching motion applied to a graphical program node, and the edit operation may include decreasing the graphical program node in size with respect to other nodes in the graphical program.

[00116] In some embodiments, this "magnification" of the graphical program node may be combined with the above expansion operation applied to nodes that represent graphical subprograms. For example, in an embodiment where the node represents a graphical subprogram, the reverse pinching motion may magnify the node up to some specified size or ratio, after which the node may be automatically expanded to its corresponding graphical subprogram, and conversely, the pinching motion may collapse the subprogram to the node, then shrink the node.

[00117] In a related embodiment, multi-touch input, e.g., reverse pinching, multi-touch tap or double tap, etc., may be used to invoke expansion of a graphical case/switch node, where expanding the node (possibly displaying the top case) may result in display of all the cases, e.g., side by side, as a grid, etc. Conversely, multi-touch pinching may collapse the cases back to the node (e.g., top case).

[00118] In a further embodiment, the multi-touch input may include a multi-touch "flick", where the user touches an element with two or more digits and flicks the element in some direction. The edit operation may include moving the flicked element in the direction of the flick. For example, in one embodiment, the rate or speed of the flicking motion may determine the distance the element moves. In some embodiments, the elements may be given an inertia/friction-like property, where, as the element moves, it slows down until coming to rest. In other embodiments, the multi-touch flick may invoke other edit operations. For example, in one embodiment, flicking the element may delete it from the graphical program. In another embodiment, flicking an element may send it to a temporary holding area or palette. For example, the user may wish to use the element, but may not wish to clutter the current edit area at the moment. Once the user is ready to use the element, it may be retrieved from the area or palette. This may allow a user to set an element aside for later use while retaining any configuration applied to that element.
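The inertia/friction-like property of paragraph [00118] admits a simple kinematic reading: the element leaves at the flick velocity and decelerates uniformly until it comes to rest, so the travel distance is v²/(2a). A minimal sketch, with the friction constant as an illustrative tuning parameter:

```python
def flick_displacement(vx, vy, friction=2000.0):
    """Displacement (dx, dy) of an element after a flick with velocity
    (vx, vy) in pixels/second, assuming uniform deceleration `friction`
    in pixels/second^2 until the element comes to rest:
    distance = speed**2 / (2 * friction), along the flick direction."""
    speed = (vx * vx + vy * vy) ** 0.5
    if speed == 0.0:
        return 0.0, 0.0
    distance = speed * speed / (2.0 * friction)
    return distance * vx / speed, distance * vy / speed

# A faster flick travels disproportionately farther (distance ~ speed^2).
print(flick_displacement(800.0, 0.0))   # (160.0, 0.0)
print(flick_displacement(1600.0, 0.0))  # (640.0, 0.0)
```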
[00119] In another embodiment, the multi-touch input may include a multi-touch swiping movement applied to a graphical program element, e.g., a node (or region, etc.), and the edit operation may include invoking a display of selectable operations applicable to the element, e.g., may invoke a pop-up menu or palette, similar to a "right-click" with a pointing device. Figure 11A illustrates a two-touch swipe 1202 applied to graphical program element 606 to invoke a pop-up menu for the element, and Figure 11B illustrates display of the invoked menu, whereby the user may configure or otherwise operate on the graphical program element.

[00120] As another example, in one embodiment, the multi-touch input may include a press / hold / swipe gesture on a node, e.g., the user may press two fingers on a node, wait some specified period of time, and then swipe the fingers without releasing on the node, which may invoke a different operation than a standard two finger swipe on the node, e.g., may invoke a context menu or perform some other manipulation of the node.

[00121] In some embodiments, the multi-touch input may be context sensitive, where the edit operation is based at least partially on a target graphical program element or region to which the multi-touch input is applied. In other words, the edit operation invoked by the multi-touch input may depend on the particular element(s) of the graphical program to which the input is applied, including blank space in the program. Thus, for example, tapping two graphical program elements simultaneously may invoke a wiring operation to connect the two elements, whereas tapping a single graphical program element may simply select that element, e.g., for a subsequent operation. In this manner, a given multi-touch input may invoke any of a plurality of edit operations, depending on the target of the input.

[00122] For example, as noted above, the multi-touch input may specify or manipulate a region in the graphical program. In one embodiment, the multi-touch input may include a pinching or reverse pinching motion (with two or more simultaneous touchpoints) applied to a region in the graphical program, and the edit operation may include resizing the region in the graphical program. This may be useful, for example, for inserting additional elements into an existing program. In one embodiment, resizing the region may displace one or more other graphical program elements or regions in the graphical program. In other words, expanding a region may cause graphical program elements proximate to the original region to be moved outward to make room for the expanded region. Of course, the movement of these "peripheral" elements may result in movement of additional elements, where the effect may ripple outward until the graphical program elements are appropriately arranged. Conversely, in an embodiment where a region has been shrunk (or where one or more elements have been deleted), elements surrounding the original region may be adjusted accordingly, e.g., moved into the region, etc.

[00123] In further embodiments, the multi-touch input may be combined with additional or auxiliary input to specify other edit operations. For example, in one embodiment, the multi-touch input may be performed in combination with a keyboard key press to form a combination multi-touch input, and the edit operation invoked by the combination multi-touch input may be different from that invoked by the multi-touch input alone.
Moreover, the same multi-touch input may be combined with different key presses to invoke different respective edit operations.

[00124] Note that the various combinations of multi-touch inputs, key presses (possibly including multiple keys, e.g., "control-shift-pinching motion"), and context, provide a great number of distinct input/edit operation pairings whereby a wide variety of edit operations may be performed on a graphical program. Moreover, in some embodiments, one or more of the particular pairings may be user configurable. For example, a GUI may be provided whereby the user may select from all available multi-touch inputs, including available auxiliary inputs, and may associate the selection with any available edit operations, as desired. As an example of such configuration, the user may specify whether a particular multi-touch swipe associated with a specified edit operation is a left-to-right swipe or a right-to-left swipe. Any other aspects of the inputs and/or edit operations may be configurable as desired.

[00125] It should also be noted that in various embodiments, further distinctions may be made (and possibly configured) regarding the particular number of simultaneous touchpoints involved in the multi-touch input. For example, a two-touchpoint pinching motion may be distinct from a three- or a four-touchpoint pinching motion. Moreover, in further embodiments, the relative positions of the multiple touchpoints may be interpreted as distinct inputs. For example, a three-touchpoint input where the three touchpoints are spread out may be interpreted differently from one in which two of the three touchpoints are close together and the third is spread out. Thus, for example, a pinching (or reverse pinching) move with two fingers together and another finger separate from them on a node or structure may operate to change the scale of the node or structure relative to the other nodes on the diagram, whereas a similar motion but where the three fingers are roughly equidistant may invoke some other edit function, e.g., may zoom the entire block diagram.

[00126] Such distinctions, and their configurability, may thus further expand the palette of multi-touch inputs available for use in editing or otherwise manipulating graphical programs.

[00127] In some embodiments, multi-touch input may be used to control display of the graphical program, i.e., to control graphical program display operations. In other words, multi-touch input may be received to the multi-touch interface, where the multi-touch input specifies a display operation for the graphical program. The display operation for the graphical program may be performed in response to this multi-touch input, and the graphical program may be displayed in accordance with the display operation.

[00128] Thus, for example, in one embodiment, the multi-touch input may include a multi-touch swiping move, e.g., a multi-finger swipe, and the display operation may include scrolling the graphical program. For example, swiping to the right may cause the block diagram to move to the right in the display window, thus scrolling left (or vice versa). Note that in some embodiments, the swiping and resultant scrolling needn't be orthogonal to the window frame. In other words, in some embodiments, a forty-five degree swipe may result in a commensurate, e.g., forty-five degree, motion or scrolling operation. This feature may be particularly useful for easily navigating large (2-dimensional) block diagrams.
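The positional distinction of paragraph [00125] — two touchpoints clustered with a third apart, versus three roughly equidistant touchpoints — reduces to comparing pairwise distances. A sketch with an illustrative ratio threshold:

```python
import math

def three_point_configuration(p0, p1, p2, ratio=0.5):
    """Distinguish a 'pair plus one' three-touchpoint input (two points
    close together, one separate) from a 'spread' input (all roughly
    equidistant) by comparing the smallest and largest pairwise gaps."""
    gaps = sorted([math.dist(p0, p1), math.dist(p1, p2), math.dist(p0, p2)])
    if gaps[0] < ratio * gaps[2]:
        return "pair plus one"  # e.g., scale one node relative to the rest
    return "spread"             # e.g., zoom the entire block diagram

print(three_point_configuration((0, 0), (15, 0), (200, 100)))   # pair plus one
print(three_point_configuration((0, 0), (200, 0), (100, 170)))  # spread
```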
[00129] In another exemplary embodiment, the multi-touch input may include a pinching or reverse pinching motion (using two or more digits) applied to a region in the graphical program, and the display operation may include zooming the display of the graphical program out or in. Thus, for example, in one embodiment, to zoom in on or magnify the display of the graphical program, the user may touch the touch-surface (multi-touch interface) with two or more fingers or digits bunched together, then spread them to invoke the zoom operation. Conversely, the user may touch the touch-surface with two or more fingers or digits (or other touch implements) spread, then draw them together to zoom out or reduce the image of the graphical program.

[00130] It should be noted that in various embodiments, the multi-touch input may be performed with two or more digits from a single hand, from two hands, e.g., two index fingers, or even from multiple users, or, instead of fingers, may be performed via multiple styluses (styli), or a combination of both, as desired. In other words, the multi-touch input may come from any desired sources. Note, too, that as used herein, the term "finger" may refer to any digit, e.g., may include the thumb.

[00131] Additionally, in some embodiments, multiple multi-touch edit sessions may be performed simultaneously on a single graphical program. For example, in an embodiment where a large graphical program is displayed and edited on a multi-user touch-sensitive work surface, such as a touch-table/display, multiple users may apply various of the above-described inputs and operations at the same time, where the table localizes each user's inputs, e.g., based on geometrical considerations, and thus operates as multiple independent editors working on the same program.

[00132] Thus, various embodiments of the systems and methods disclosed herein may provide for multi-touch editing of graphical programs.

[00133] Although the embodiments above have been described in considerable detail, numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.
A semiconductor wafer having a via test structure is provided which includes a semiconductor substrate having a plurality of semiconductor devices. A dielectric layer deposited over the semiconductor substrate has second and fourth channels unconnected to the plurality of semiconductor devices. A via dielectric layer deposited over the dielectric layer has first and second vias and third and fourth vias respectively open to opposite ends of the second channel and the fourth channel. A second dielectric layer over the via dielectric layer has first, third, and fifth channels respectively connected to the first via, the second and third vias, and the fourth via. The first channel, the first via, the second channel, the second via, the third channel, the third via, the fourth channel, the fourth via, and the fifth channel are connected in series, and the first and fifth channels are probed to determine the presence or absence of voids in the vias.
1. A method of manufacturing a semiconductor wafer having a via test structure comprising:
providing a semiconductor substrate having a plurality of semiconductor devices;
depositing a first dielectric layer over the semiconductor substrate;
forming a plurality of openings in the first dielectric layer;
depositing a first barrier layer to line the plurality of openings;
depositing a first conductor core to fill the openings;
planarizing the first barrier layer and the first conductor core to form second and fourth channels unconnected to the plurality of semiconductor devices;
depositing a via dielectric layer over the first dielectric layer;
forming first and second via openings and third and fourth via openings in the via dielectric layer respectively open to opposite ends of the second channel and the fourth channel;
depositing a second dielectric layer over the via dielectric layer;
forming first, third, and fifth channel openings in the second dielectric layer respectively open to the first via opening, the second and third via openings, and the fourth via opening;
depositing a second barrier layer to line the first, second, third, and fourth via openings and the first, third, and fifth channel openings;
depositing a second conductor core to fill the first, second, third, and fourth via openings and the first, third, and fifth channel openings; and
planarizing the second barrier layer and the second conductor core to form first, third, and fifth channels having the first channel, the first via, the second channel, the second via, the third channel, the third via, the fourth channel, the fourth via, and the fifth channel connected in series whereby the first and fifth channels are probed to determine the presence or absence of voids in the vias.
2. The method of manufacturing a semiconductor wafer as claimed in claim 1 including:
depositing a capping layer over the second dielectric layer and the first, third, and fifth channels; and
processing the capping layer to expose the first and fifth channels whereby the first and fifth channels are exposed to be probed.
3. The method of manufacturing a semiconductor wafer as claimed in claim 1 including probing the via test structure using a process selected from electromigration test, wafer level reliability test, and a combination thereof.
4. The method of manufacturing a semiconductor wafer as claimed in claim 1 wherein depositing the first and second barrier layers uses a material selected from a group consisting of tantalum, titanium, tungsten, alloys thereof, and combinations thereof.
5. The method of manufacturing a semiconductor wafer as claimed in claim 1 wherein depositing the first and second conductor cores uses a material selected from a group consisting of copper, aluminum, silver, gold, alloys thereof, and combinations thereof.
6.
A method of manufacturing a semiconductor wafer having a via test structure comprising:
providing a silicon substrate having a plurality of semiconductor devices;
depositing a device oxide layer on the silicon substrate;
depositing a first oxide layer on the device oxide layer;
forming a plurality of openings in the first oxide layer;
depositing a first barrier layer to line the plurality of openings;
depositing a first seed layer to line the first barrier layer;
depositing a first conductor core to fill the plurality of openings to form second and fourth channels unconnected to the plurality of semiconductor devices;
planarizing the first seed layer, the first barrier layer, and the first conductor core to be coplanar with the first oxide layer;
depositing a via oxide layer over the first oxide layer;
forming first and second via openings and third and fourth via openings in the via oxide layer respectively open to opposite ends of the second channel and the fourth channel;
depositing a second oxide layer over the via oxide layer;
forming first, third, and fifth channel openings in the second oxide layer respectively open to the first via opening, the second and third via openings, and the fourth via opening;
depositing a second barrier layer to line the first, second, third, and fourth via openings and the first, third, and fifth channel openings;
depositing a second seed layer to line the second barrier layer;
depositing a second conductor core to fill the first, second, third, and fourth via openings and the first, third, and fifth channel openings; and
planarizing the second seed layer, the second barrier layer, and the second conductor core to be coplanar with the second oxide layer to form first, third, and fifth channels having the first channel, the first via, the second channel, the second via, the third channel, the third via, the fourth channel, the fourth via, and the fifth channel connected in series whereby the first and fifth channels are probed to determine the presence or absence of voids in the first, second, third, and fourth vias.
7. The method of manufacturing a semiconductor wafer as claimed in claim 6 including:
depositing a capping layer over the second oxide layer and the first, third, and fifth channels; and
depositing photoresist, patterning, exposing, developing, and etching to expose the first and fifth channels whereby the first and fifth channels are exposed to be probed.
8. The method of manufacturing a semiconductor wafer as claimed in claim 6 including probing the via test structure using a process selected from electromigration test, wafer level reliability test, and a combination thereof.
9. The method of manufacturing a semiconductor wafer as claimed in claim 6 wherein depositing the first and second barrier layers uses a material selected from a group consisting of tantalum, titanium, tungsten, alloys thereof, and combinations thereof.
10. The method of manufacturing a semiconductor wafer as claimed in claim 6 wherein depositing the first and second seed layers and the first and second conductor cores uses a material selected from a group consisting of copper, aluminum, silver, gold, alloys thereof, and combinations thereof.
CROSS REFERENCE TO RELATED APPLICATIONS
This is a divisional of application Ser. No. 09/730,984 filed Dec. 5, 2000, now U.S. Pat. No. 6,498,384 B1.
TECHNICAL FIELD
The present invention relates generally to semiconductors and more specifically to a testing method for semiconductor wafers.
BACKGROUND ART
In the manufacture of integrated circuits, after the individual devices such as the transistors have been fabricated in and on the semiconductor substrate, they must be connected together to perform the desired circuit functions. This interconnection process is generally called "metallization" and is performed using a number of different photolithographic, deposition, and removal techniques.
In one interconnection process, which is called a "dual damascene" technique, two channels of conductor materials are separated by interlayer dielectric layers in vertically separated planes perpendicular to each other and interconnected by a vertical connection, or "via", at their closest point. The dual damascene technique is performed over the individual devices, which are in a device dielectric layer with the gate and source/drain contacts, extending up through the device dielectric layer to contact one or more channels in a first channel dielectric layer.
The first channel formation of the dual damascene process starts with the deposition of a thin first channel stop layer. The first channel stop layer is an etch stop layer which is subject to a photolithographic processing step, which involves deposition, patterning, exposure, and development of a photoresist, and an anisotropic etching step through the patterned photoresist to provide openings to the device contacts. The photoresist is then stripped. A first channel dielectric layer is formed on the first channel stop layer. Where the first channel dielectric layer is of an oxide material, such as silicon oxide (SiO2), the first channel stop layer is a nitride, such as silicon nitride (SiN), so the two layers can be selectively etched.
The first channel dielectric layer is then subject to further photolithographic process and etching steps to form first channel openings in the pattern of the first channels. The photoresist is then stripped.
An optional thin adhesion layer is deposited on the first channel dielectric layer and lines the first channel openings to ensure good adhesion of subsequently deposited material to the first channel dielectric layer. Adhesion layers for copper (Cu) conductor materials are composed of compounds such as tantalum nitride (TaN), titanium nitride (TiN), or tungsten nitride (WN). These nitride compounds have good adhesion to the dielectric materials and provide good barrier resistance to the diffusion of copper from the copper conductor materials to the dielectric material. High barrier resistance is necessary with conductor materials such as copper to prevent diffusion of subsequently deposited copper into the dielectric layer, which can cause short circuits in the integrated circuit. However, these nitride compounds also have relatively poor adhesion to copper and relatively high electrical resistance.
Because of these drawbacks, pure refractory metals such as tantalum (Ta), titanium (Ti), or tungsten (W) are deposited on the adhesion layer to line the adhesion layer in the first channel openings.
The refractory metals are good barrier materials, have lower electrical resistance than their nitrides, and have good adhesion to copper.
In some cases, the barrier material has sufficient adhesion to the dielectric material that the adhesion layer is not required, and in other cases, the adhesion and barrier material become integral. The adhesion and barrier layers are often collectively referred to as a "barrier" layer herein.
For conductor materials such as copper, which are deposited by electroplating, a seed layer is deposited on the barrier layer and lines the barrier layer in the first channel openings. The seed layer, generally of copper, is deposited to act as an electrode for the electroplating process.
A first conductor material is deposited on the seed layer and fills the first channel opening. The first conductor material and the seed layer generally become integral, and are often collectively referred to as the conductor core when discussing the main current-carrying portion of the channels.
A chemical-mechanical polishing (CMP) process is then used to remove the first conductor material, the seed layer, and the barrier layer above the first channel dielectric layer to form the first channels. When a layer is placed over the first channels as a final layer, it is called a "cap" layer and a "single" damascene process is completed. When the layer is processed further for placement of additional channels over it, the layer is a via stop layer.
The via formation step of the dual damascene process begins with the deposition of a thin via stop layer over the first channels and the first channel dielectric layer. The via stop layer is an etch stop layer which is subject to photolithographic processing and anisotropic etching steps to provide openings to the first channels. The photoresist is then stripped.
A via dielectric layer is formed on the via stop layer. Again, where the via dielectric layer is of an oxide material, such as silicon oxide (SiO2), the via stop layer is a nitride, such as silicon nitride (SiN), so the two layers can be selectively etched. The via dielectric layer is then subject to further photolithographic process and etching steps to form the pattern of the vias. The photoresist is then stripped.
A second channel dielectric layer is formed on the via dielectric layer. Again, where the second channel dielectric layer is of an oxide material, such as silicon oxide, the via stop layer is a nitride, such as silicon nitride, so the two layers can be selectively etched. The second channel dielectric layer is then subject to further photolithographic process and etching steps to simultaneously form second channel and via openings in the pattern of the second channels and the vias.
The photoresist is then stripped.
An optional thin adhesion layer is deposited on the second channel dielectric layer and lines the second channel and the via openings.
A barrier layer is then deposited on the adhesion layer and lines the adhesion layer in the second channel openings and the vias.
Again, for conductor materials such as copper and copper alloys, which are deposited by electroplating, a seed layer is deposited by an electroless deposition process, physical vapor deposition (PVD), or ionized metal plasma (IMP) deposition on the barrier layer and lines the barrier layer in the second channel openings and the vias.
A second conductor material is deposited on the seed layer and fills the second channel openings and the vias.
A CMP process is then used to remove the second conductor material, the seed layer, and the barrier layer above the second channel dielectric layer to form the second channels. When a layer is placed over the second channels as a final layer, it is called a "cap" layer and the "dual" damascene process is completed. The layer may be processed further for placement of additional levels of channels and vias over it.
The use of the single and dual damascene techniques eliminates metal etch and dielectric gap fill steps typically used in the metallization process. The elimination of metal etch steps is important as the semiconductor industry moves from aluminum (Al) to other metallization materials, such as copper, which are very difficult to etch.
As the size of semiconductor devices is decreased in order to increase speed and reduce cost, the vias shrink in size and increase in aspect ratio (the depth-to-width ratio). As the vias become smaller and narrower, it becomes more difficult to assure proper formation of the vias without voids. Since a single via with a void can result in the failure of an entire integrated circuit, it is highly desirable to be able to test the vias for the absence of voids.
Currently, step coverage and physical continuity are measured by taking transmission electron microscope (TEM) measurements of the cross-sections of the channels in a semiconductor device. This requires the destruction of the semiconductor device in order to make the measurement. Another method involves the use of test structures having two vias which are connected at the bottoms and tested across the tops of the vias. These test structures are neither sensitive nor reliable for physical continuity testing. Solutions for nondestructive testing of physical continuity have long been sought by, but have eluded, those skilled in the art. Further, it would be desirable to measure electrical continuity, but there are currently no existing solutions to meet this desire.
DISCLOSURE OF THE INVENTION
The present invention provides a semiconductor wafer having a via test structure which includes a semiconductor substrate having a plurality of semiconductor devices. A dielectric layer deposited over the semiconductor substrate has second and fourth channels unconnected to the plurality of semiconductor devices. A via dielectric layer deposited over the dielectric layer has first and second vias and third and fourth vias respectively open to opposite ends of the second channel and the fourth channel. A second dielectric layer over the via dielectric layer has first, third, and fifth channels respectively connected to the first via, the second and third vias, and the fourth via.
The first channel, the first via, the second channel, the second via, the third channel, the third via, the fourth channel, the fourth via, and the fifth channel are connected in series, and the first and fifth channels are probed to determine the presence or absence of voids in the vias.
The above and additional advantages of the present invention will become apparent to those skilled in the art from a reading of the following detailed description when taken in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 (PRIOR ART) is a plan view of aligned channels with a connecting via;
FIG. 2 (PRIOR ART) is a cross-section of FIG. 1 along line 2-2;
FIG. 3 is a plan view of the test structure of the present invention; and
FIG. 4 is a cross-section of FIG. 3 along line 4-4.
BEST MODE FOR CARRYING OUT THE INVENTION
Referring now to FIG. 1 (PRIOR ART), therein is shown a plan view of a semiconductor wafer 100 with a silicon semiconductor substrate (not shown) having as interconnects first and second channels 102 and 104 connected by a via 106. The first and second channels 102 and 104 are respectively disposed in first and second channel dielectric layers 108 and 110. The via 106 is an integral part of the second channel 104 and is disposed in a via dielectric layer 112.
The term "horizontal" as used herein is defined as a plane parallel to the conventional plane or surface of a wafer, such as the semiconductor wafer 100, regardless of the orientation of the wafer. The term "vertical" refers to a direction perpendicular to the horizontal as just defined. Terms such as "on", "above", "below", "side" (as in "sidewall"), "higher", "lower", "over", and "under" are defined with respect to the horizontal plane.
Referring now to FIG. 2 (PRIOR ART), therein is shown a cross-section of FIG. 1 (PRIOR ART) along line 2-2. A portion of the first channel 102 is disposed in a first channel stop layer 114 and is on a device dielectric layer 116. Generally, metal contacts are formed in the device dielectric layer 116 to connect to an operative semiconductor device (not shown). This is represented by the contact of the first channel 102 with a semiconductor contact 118 embedded in the device dielectric layer 116. The various layers above the device dielectric layer 116 are sequentially: the first channel stop layer 114, the first channel dielectric layer 108, a via stop layer 120, the via dielectric layer 112, a second channel stop layer 122, the second channel dielectric layer 110, and a next channel stop layer 124 (not shown in FIG. 1).
The first channel 102 includes a barrier layer 126, which could optionally be a combined adhesion and barrier layer, and a seed layer 128 around a conductor core 130. The second channel 104 and the via 106 include a barrier layer 132, which could also optionally be a combined adhesion and barrier layer, and a seed layer 134 around a conductor core 136. The barrier layers 126 and 132 are used to prevent diffusion of the conductor materials into the adjacent areas of the semiconductor device. The seed layers 128 and 134 form electrodes on which the conductor material of the conductor cores 130 and 136 is deposited.
The seed layers 128 and 134 are of substantially the same conductor material as the conductor cores 130 and 136 and become part of the respective conductor cores 130 and 136 after the deposition.
The deposition of the barrier layer 132 is such that it fills the bottom of the via 106 at barrier layer portion 138 so as to effectively separate the conductor cores 130 and 136.
In the past, for copper conductor material and seed layers, highly resistive diffusion barrier materials such as tantalum nitride (TaN), titanium nitride (TiN), or tungsten nitride (WN) were used as barrier materials to prevent diffusion.
Referring now to FIG. 3, therein is shown a plan view of a semiconductor wafer 200 with a silicon semiconductor substrate (not shown) having a via test structure 201 of in-line channels interconnected by vias. A capping layer is shown removed to simplify the figure. A first channel 204A is connected at one end by a via 206A to one end of a second channel 202A. The opposite end of the second channel 202A is connected by a via 206B to one end of a third channel 204B. The far end of the third channel 204B is connected by a via 206C to one end of a fourth channel 202B. The far end of the fourth channel 202B is connected by a via 206D to a fifth channel 204C. The second and fourth channels 202A and 202B are disposed in a first channel dielectric layer 208. The first, third, and fifth channels 204A, 204B, and 204C, respectively, are disposed in a second channel dielectric layer 210. Vias 206A, 206B, 206C, and 206D are also in-line and disposed in a via dielectric layer 212.
Referring now to FIG. 4, therein is shown a cross-section of FIG. 3 along line 4-4. The via test structure 201 on the semiconductor wafer 200 is shown as deposited on a device dielectric layer 216 during the deposition of a conventional interconnect structure for an integrated circuit chip.
Essentially, the device dielectric layer 216 has a stop layer 214 and the first channel dielectric layer 208 deposited thereon. The first channel dielectric layer 208 and the stop layer 214 are photolithographically processed and etched to form openings for the second and fourth channels 202A and 202B. Subsequently, a barrier layer 226, a seed layer 228, and a conductor core 230 are deposited. A chemical-mechanical polishing (CMP) process is used to planarize the conductor core 230, the seed layer 228, and the barrier layer 226 to be coplanar with the first channel dielectric layer 208 and to form the second and fourth channels 202A and 202B.
Subsequently, a via stop layer 220 and a via dielectric layer 212 are deposited over the first channel dielectric layer 208 and the second and fourth channels 202A and 202B. The via stop layer 220 is photolithographically processed and etched for via openings, but the via dielectric layer 212 is not.
A second channel stop layer 222 is then deposited over the via dielectric layer 212, followed by the deposition of a second channel dielectric layer 210.
The second channel dielectric layer 210 and the second channel stop layer 222 are photolithographically processed for the simultaneous formation of the second level of channels and the vias.
The barrier layer 232, the seed layer 234, and the conductor core 236 are sequentially deposited and subject to CMP to become coplanar with the top surface of the second channel dielectric layer 210 and to simultaneously form the first, third, and fifth channels 204A, 204B, and 204C, respectively, and the first, second, third, and fourth vias 206A, 206B, 206C, and 206D, respectively.
Subsequently, a capping layer 224 is deposited over the second channel dielectric layer 210 and the first, third, and fifth channels 204A, 204B, and 204C, respectively. The capping layer 224 is then covered with photoresist, patterned, exposed, developed, and etched to expose the first and fifth channels 204A and 204C.
It will be noted that the same vias and channels as are used for the semiconductor-device-connected portions of the integrated circuit are formed in the above processes.
By placing probes 240 and 242 of a testing meter into contact with the exposed portions of the first and fifth channels 204A and 204C, various electrical measurements may be made. A calibration measurement is first made of the resistance of the via test structure 201 of a semiconductor wafer 200 having completely filled vias with no voids, as determined by TEM measurements; i.e., a reference measurement of "standard" vias. Subsequently, production semiconductor wafers need only be probed and resistance measurements taken, nondestructively, and compared to this reference resistance measurement to determine the existence of voids.
As would be evident to those skilled in the art, the via test structure 201 becomes more sensitive for detecting voids caused by the process as the chain becomes longer with more channels and vias. The long series of channels and vias can capture the voiding statistically, since the probability of one or more of the vias having a void is high in a process which may be subject to voiding. A void in any one of the plurality of vias may be sensed.
A package level electromigration (EM) test or a fast wafer level reliability (WLR) isothermal test can be used to detect the voids. Further, a good process can serve as the baseline for comparison of test results on "non-optimized" processes.
The via test structure 201 can also be used as a WLR test monitor in the fabrication facility during the manufacturing phase to periodically monitor the fabrication process.
While only four vias are shown in the Best Mode, it would be evident that the sensitivity increases with the number of vias being tested. Further, while the channels are shown in-line, it would be evident that the primary criterion is that the vias be connected in series.
It would be understood by those skilled in the art that the present invention is particularly useful with copper conductor vias, since tungsten and aluminum conductor vias have high resistances which make test measurements difficult.
In the Best Mode, the barrier layers are of materials such as tantalum (Ta), titanium (Ti), tungsten (W), nitrides thereof, and a combination thereof. The seed layers and/or conductor cores are of materials such as copper (Cu), copper-base alloys, aluminum (Al), aluminum-base alloys, gold (Au), gold-base alloys, silver (Ag), silver-base alloys, and a combination thereof. The dielectric layers are of silicon dioxide or a low dielectric constant material such as HSQ, Flare, etc.
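Before completing the materials discussion, the reference-comparison probing scheme described above can be made concrete with a short Python sketch. The resistance values, tolerance, and per-via voiding probability below are illustrative assumptions, not values from this disclosure; the last function shows why sensitivity increases with the number of vias in the series chain.

    # Sketch of the nondestructive void test: the measured series resistance
    # of the probed chain is compared against a reference measurement from a
    # known-good ("standard") via chain. All numeric values are illustrative.
    def chain_resistance(channel_r: float, via_r: float,
                         n_channels: int, n_vias: int) -> float:
        """Series resistance (ohms) of a chain of channels and vias."""
        return n_channels * channel_r + n_vias * via_r

    def has_voids(measured_ohms: float, reference_ohms: float,
                  tolerance: float = 0.10) -> bool:
        """Flag voiding when resistance rises more than tolerance above reference."""
        return measured_ohms > reference_ohms * (1.0 + tolerance)

    def p_detect(n_vias: int, p_void: float) -> float:
        """Probability that a chain of n vias contains at least one void,
        given an assumed per-via voiding probability p_void; longer chains
        capture voiding statistically."""
        return 1.0 - (1.0 - p_void) ** n_vias

    if __name__ == "__main__":
        ref = chain_resistance(channel_r=2.0, via_r=1.0, n_channels=5, n_vias=4)
        print(has_voids(measured_ohms=ref * 1.5, reference_ohms=ref))  # True
        print(f"{p_detect(4, 0.01):.3f} vs {p_detect(400, 0.01):.3f}") # 0.039 vs 0.982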
The stop layers are of materials such as silicon nitride or silicon oxynitride.
While the invention has been described in conjunction with a specific best mode, it is to be understood that many alternatives, modifications, and variations will be apparent to those skilled in the art in light of the foregoing description. Accordingly, it is intended to embrace all such alternatives, modifications, and variations that fall within the spirit and scope of the included claims. All matters set forth herein or shown in the accompanying drawings are to be interpreted in an illustrative and non-limiting sense.
Low-complexity (16-bit arithmetic) video compression has 8×8 blocks with transforms using 8×8 integer matrices, and quantization implemented with a look-up-table scalar multiplication plus a constant right shift for all quantization steps. Inverse quantization is also a look-up-table scalar multiplication plus a right shift dependent upon the quantization step, and the inverse transform uses the 8×8 integer matrices.
1. An 8×8 block transform method, including:
(a) using an 8×8 transformation matrix to transform an 8×8 sample data matrix into an 8×8 intermediate matrix;
(b) scaling the intermediate matrix using a scaling matrix; and
(c) shifting the elements of the scaled intermediate matrix by N bits to generate a transformed matrix, where N is an integer in the range of 13 to 16.
2. The method of claim 1, wherein:
(a) the elements of the scaling matrix are the elements of a second scaling matrix shifted by 19−N bits.
3. A block transform method, including:
(a) multiplying an n×n sample data matrix by an n×n transform matrix and the n×n transpose of the transform matrix to generate a coefficient matrix;
(b) choosing an integer N less than Nmax;
(c) shifting the coefficient matrix, with rounding, by N bits to obtain a shifted coefficient matrix;
(d) providing a scaling matrix corresponding to Nmax;
(e) shifting the scaling matrix, with rounding, by Nmax−N bits to obtain a shifted scaling matrix; and
(f) scaling the shifted coefficient matrix with the shifted scaling matrix.
4. The method according to claim 3, wherein:
(a) n is equal to 8; and
(b) Nmax is equal to 19.
5. The method according to claim 3, wherein:
(a) the transformation matrix is:
6. The method according to claim 3, wherein:
(a) Nmax is 19 and the scaling matrix is:
8×8 transform and quantization
TECHNICAL FIELD
The present invention relates to digital image and video signal processing, and more particularly to block transforms and/or quantization plus inverse quantization and/or inverse transforms.
BACKGROUND
There are a variety of applications for digital video communication and storage, and corresponding international standards have been and continue to be developed. Low-bit-rate communications, such as video telephony and conferencing, together with the compression of large video files, such as movies, have resulted in multiple video compression standards such as H.261, H.263, MPEG-1, MPEG-2, and AVS. These compression methods rely on the discrete cosine transform (DCT) or an analogous transform plus quantization of the transform coefficients to reduce the number of bits needing to be encoded.
DCT-based compression methods decompose a picture into macroblocks, where each macroblock contains four 8×8 luminance blocks plus two 8×8 chrominance blocks, although other block sizes and transform variants can also be used. Figure 2 depicts the functional blocks of DCT-based video coding. To reduce the bit rate, an 8×8 DCT is used to convert 8×8 blocks (luminance and chrominance) into the frequency domain. Next, the 8×8 blocks of DCT coefficients are quantized, scanned into a 1-D sequence, and encoded using variable-length coding (VLC). For predictive coding using motion compensation (MC), inverse quantization and IDCT are needed for the feedback loop. With the exception of MC, all functional blocks in Figure 2 operate on an 8×8 block basis. The rate control unit in Figure 2 is responsible for generating a quantization step (qp), within the allowed range and according to the target bit rate and the degree of buffer fullness, to control the quantization of the DCT coefficients. Of course, a larger quantization step means more zeroed and/or smaller quantized coefficients, which means fewer and/or shorter codewords and therefore smaller bit rates and files.
There are two types of encoded macroblocks. INTRA-encoded macroblocks are encoded independently of previous reference frames. In an INTER-encoded macroblock, a motion-compensated prediction block from a previous reference frame is first generated for each block (of the current macroblock), and then the prediction error block (that is, the difference block between the current block and the prediction block) is encoded.
For INTRA-encoded macroblocks, the first (0,0) coefficient of an INTRA-encoded 8×8 DCT block is called the DC coefficient, and the remaining 63 DCT coefficients of the block are called AC coefficients; whereas for INTER-encoded macroblocks, all 64 DCT coefficients of an INTER-encoded 8×8 DCT block are considered AC coefficients. The DC coefficient may be quantized with a fixed value of the quantization step, while the AC coefficients have a quantization step adjusted according to bit-rate control, which compares the number of bits already used to encode the picture with the number of bits allocated. In addition, a quantization matrix (e.g., in MPEG-4) allows the quantization step to vary across the DCT coefficients.
Specifically, the 8×8 two-dimensional DCT is defined as:
F(u,v) = (1/4) C(u) C(v) Σx=0..7 Σy=0..7 f(x,y) cos((2x+1)uπ/16) cos((2y+1)vπ/16)
where f(x,y) is the input 8×8 sample block and F(u,v) is the output 8×8 transform block, with u, v, x, y = 0, 1, ..., 7; and
C(u), C(v) = 1/√2 for u, v = 0 and = 1 otherwise.
Note that this transform has the form of 8×8 matrix multiplications, F = D′ × f × D, where D is an 8×8 matrix whose (x,u) element is (C(u)/2) cos((2x+1)uπ/16). The transform is performed in double precision, and the final transform coefficients are rounded to integer values.
Next, the quantization of the transform coefficients is defined as:
QF(u,v) = F(u,v) / QP
where QP is a quantization factor calculated in double precision from the quantization step qp used as an index, for example QP = 2^(qp/8). The quantized coefficients are rounded to integer values and encoded.
The corresponding inverse quantization is:
F′(u,v) = QF(u,v) * QP
where the double-precision values are rounded to integer values.
Finally, the inverse transform (re-establishing the sample block) is:
f′(x,y) = (1/4) Σu=0..7 Σv=0..7 C(u) C(v) F′(u,v) cos((2x+1)uπ/16) cos((2y+1)vπ/16)
Similarly, the double-precision values are rounded to integer values.
Various alternative methods, such as those of the H.264 and AVS standards, simplify the double-precision method by using integer transforms and/or blocks of different sizes. Specifically, an 8×8 integer transform matrix T8×8 is defined which has elements similar to those of the 8×8 DCT coefficient matrix D. Then, with f8×8 and F8×8 denoting the input 8×8 sample data matrix (pixel or residual block) and the output 8×8 block of transform coefficients, the forward 8×8 integer transform is defined as:
F8×8 = T′8×8 × f8×8 × T8×8
where "×" denotes 8×8 matrix multiplication and the 8×8 matrix T′8×8 is the transpose of the 8×8 matrix T8×8.
The quantization of the transform coefficients may use the quantization step as an index into a look-up table with integer entries. The inverse quantization mirrors the quantization. And the inverse transform also uses T8×8 and its transpose, just as the DCT uses D and its transpose for the forward and inverse transforms.
However, the computational complexity of these alternatives still needs to be reduced.
SUMMARY OF THE INVENTION
The present invention provides a low-complexity 8×8 transform for image/video processing by apportioning bit shifts and rounding.
The preferred embodiment methods provide 16-bit operations suitable for use in video coding with motion compensation.
BRIEF DESCRIPTION OF THE DRAWINGS
Figures 1a-1b are flowcharts.
Figure 2 illustrates motion-compensated video compression with DCT transform and quantization.
Figure 3 shows a method comparison.
DETAILED DESCRIPTION
1. Overview
The preferred embodiment low-complexity methods provide a simplified 8×8 forward transform applicable to the 16-bit AVS method. The methods apply to video compression operating on 8×8 blocks of (motion-compensated) pixels with a DCT-like transform and quantization of the transform coefficients (where the quantization can vary widely). As illustrated in Figure 2, the fullness feedback from the bitstream buffer can determine a quantization factor, which typically varies in the range of 1 to 200-500. Figures 1a-1b show the transform/quantization of the encoding and decoding streams.
The preferred embodiment systems may be implemented with the following: a digital signal processor (DSP), a general-purpose programmable processor, an application-specific circuit, or a system on a chip (SoC) such as, for example, a DSP and a RISC processor on the same chip, with the RISC processor in control. In particular, a digital camera (DSC) with video clip capability or a cellular phone with video capability may include the preferred embodiment methods.
The stored program may be located in on-board ROM or in external flash EEPROM for a DSP or programmable processor to perform the signal processing of the preferred embodiment methods. Analog-to-digital converters and digital-to-analog converters provide coupling to the real world, and modulators and demodulators (plus antennas for the air interface) provide coupling for transmission waveforms.
2. AVS
First, consider the AVS transform, quantization, and their inverses; the preferred embodiment methods will provide a simplification of the AVS forward transform.
(a) AVS forward transform
The AVS forward 8×8 transform uses the following 8×8 transform matrix, T8×8, for matrix multiplications with an 8×8 sample data matrix (image pixels or motion-compensated residual block), plus an 8×8 scaling matrix, SM8×8, used to scale the resulting matrix elements. The transformation matrix is:
And the scaling matrix SM8×8 = {SMi,j : i, j = 0, 1, 2, ..., 7} is:
The transform process is as follows. First, let f8×8 = {fi,j : i, j = 0, 1, 2, ..., 7} denote the input 8×8 sample data matrix and let F8×8 = {Fi,j : i, j = 0, 1, 2, ..., 7} denote the 8×8 output DCT coefficient matrix. The AVS forward transform has two steps and uses an intermediate 8×8 matrix X8×8:
X8×8 = {T′8×8 × f8×8 × T8×8} >>r 5
Fi,j = sign(Xi,j) * ((|Xi,j| * SMi,j + 2^18) >> 19),  i, j = 0, 1, 2, ..., 7
The following notation is used here and below:
· T′8×8 is the transpose of the transformation matrix T8×8.
· X8×8 = {Xi,j : i, j = 0, 1, 2, ..., 7} is the intermediate matrix after the matrix multiplications by the transformation matrix and its transpose plus a rounded bit shift, as shown above.
· × is matrix multiplication.
· * is scalar multiplication.
· |x| is the absolute value of x.
· sign(x) is 1 for x > 0, 0 for x = 0, and −1 for x < 0.
· >>r n is an n-bit rounded right shift of a matrix: more specifically, for a matrix M8×8 = {Mi,j : i, j = 0, 1, 2, ..., 7}, the operation m8×8 = M8×8 >>r n is defined by m8×8 = {mi,j : i, j = 0, 1, 2, ..., 7}, where mi,j = (Mi,j + 2^(n−1)) >> n.
· >> denotes a right shift, applied to a number represented in binary form (e.g., two's complement).
Thus, the transformation matrix T8×8 is similar to the 8×8 DCT matrix, and SM8×8 is a scaling adjustment.
(b) AVS quantization
AVS quantization supports 64 quantization steps, qp = 0, 1, ..., 63, and uses the following quantization table Q_TAB[64]:
Thus, the quantization factor Q_TAB[qp] is essentially 2^(15−qp/8), and the quantization of the transform matrix F8×8 is:
QFi,j = sign(Fi,j) * ((|Fi,j| * Q_TAB[qp] + α * 2^15) >> 15),  i, j = 0, 1, 2, ..., 7
where α is a quantization control parameter, for example, 1/3 for INTRA-encoded macroblocks and 1/6 for INTER-encoded macroblocks.
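A minimal integer sketch (Python/NumPy) of the two-step forward transform and quantization just defined may help fix the operations; since the actual T8×8, SM8×8, and Q_TAB entries are not reproduced in this text, the matrix and table values below are placeholder assumptions, and rounded_shift implements the >>r operator defined above.

    # Sketch of the AVS-style forward transform and quantization; T, SM, and
    # the Q_TAB value are placeholder assumptions, not the standard's values.
    import numpy as np

    def rounded_shift(m: np.ndarray, n: int) -> np.ndarray:
        """Rounded right shift: (m + 2**(n-1)) >> n, applied elementwise."""
        return (m + (1 << (n - 1))) >> n

    def avs_forward(f: np.ndarray, t: np.ndarray, sm: np.ndarray) -> np.ndarray:
        """X = (T' x f x T) >>r 5;  F = sign(X) * ((|X| * SM + 2**18) >> 19)."""
        x = rounded_shift(t.T @ f @ t, 5)
        return np.sign(x) * ((np.abs(x) * sm + (1 << 18)) >> 19)

    def quantize(coeffs: np.ndarray, q_tab_qp: int, alpha: float) -> np.ndarray:
        """QF = sign(F) * ((|F| * Q_TAB[qp] + alpha * 2**15) >> 15)."""
        bias = int(alpha * (1 << 15))
        return np.sign(coeffs) * ((np.abs(coeffs) * q_tab_qp + bias) >> 15)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        f = rng.integers(-255, 256, size=(8, 8))        # sample data block
        t = np.eye(8, dtype=np.int64) * 8               # placeholder transform matrix
        sm = np.full((8, 8), 32768, dtype=np.int64)     # placeholder scaling matrix
        coeffs = avs_forward(f, t, sm)
        print(quantize(coeffs, q_tab_qp=32768, alpha=1/3))  # INTRA alpha = 1/3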
These quantized coefficients are encoded.
(c) AVS inverse quantization
The AVS inverse quantization of an 8×8 block of quantized DCT coefficients QF8×8 = {QFi,j : i, j = 0, 1, 2, ..., 7} is defined as:
F′i,j = (QFi,j * IQ_TAB[qp] + 2^(IQ_SHIFT[qp]−1)) >> IQ_SHIFT[qp],  i, j = 0, 1, 2, ..., 7
where F′8×8 = {F′i,j : i, j = 0, 1, 2, ..., 7} is the inverse-quantized DCT coefficient block and the IQ_TAB and IQ_SHIFT tables are defined as:
Note that IQ_TAB[qp] is a 16-bit positive (unsigned) integer that has its most significant bit (MSB) equal to 1 for all qp, and IQ_SHIFT[qp] is in the range of 7-14.
(d) AVS inverse transform
The AVS inverse 8×8 transform uses the same 8×8 transform matrix T8×8 and its transpose for the matrix multiplications:
where f′8×8 = {f′i,j : i, j = 0, 1, 2, ..., 7} is the re-established 8×8 sample data matrix.
3. First preferred embodiment
To reduce the complexity of the AVS transform and quantization of Part 2, the preferred embodiment provides an improved forward transform for use together with the quantization, inverse quantization, and inverse transform of Part 2. The preferred embodiment method simplifies the computation by eliminating a sign() operation and limiting the bit shifts, so that a 16-bit processor can operate more efficiently. That is, only the forward transform is changed, and Part 4 has a comparison of the AVS method of Part 2 with the preferred embodiment method.
(a) Preferred embodiment forward transform
Recall that the AVS forward transform described in Part 2 is:
X8×8 = {T′8×8 × f8×8 × T8×8} >>r 5
Fi,j = sign(Xi,j) * ((|Xi,j| * SMi,j + 2^18) >> 19),  i, j = 0, 1, 2, ..., 7
The second step is computationally expensive, especially for 16-bit devices. To reduce complexity, the preferred embodiment method modifies the second step of the forward transform, essentially dividing the 19-bit shift into an N-bit shift plus a (19−N)-bit shift folded into the scaling matrix:
Fi,j = (Xi,j * SM(N)i,j + 2^(N−1)) >> N,  i, j = 0, 1, 2, ..., 7
where SM(N)i,j is defined as:
SM8×8(N) = {SM(N)i,j} = SM8×8 >>r (19−N)
where SM8×8 = {SMi,j : i, j = 0, 1, 2, ..., 7} is the scaling matrix defined in Part 2 and SM8×8(N) is the new scaling matrix.
In this transform, N is the number of shift bits, and performance improves as N increases (see the next section); but to limit the complexity on a 16-bit processor, N should be less than or equal to 16. For example, for N = 16:
Note that when N = 19, SM8×8(N) is essentially equal to the SM8×8 of Part 2, and each time N decreases by 1, the matrix elements are divided by 2 and rounded.
Compared to the AVS forward transform described in Part 2, the preferred embodiment has much lower complexity because of the elimination of the sign(x) operation with its memory accesses, and because all right shifts are within 16 bits. The preferred embodiment method therefore makes the AVS forward transform of Part 2 more computationally economical.
(b) Preferred embodiment quantization
The preferred embodiment method uses the same quantization as described in Part 2.
(c) Preferred embodiment inverse quantization
The preferred embodiment method uses the same inverse quantization as described in Part 2.
(d) Preferred embodiment inverse transform
The preferred embodiment method uses the same inverse transform as described in Part 2.
4. Experimental results
Simulations were performed to test the efficiency of the simplified forward transform of the preferred embodiment.
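A sketch of such a simulation loop is given below; the forward, quantize, dequantize, and inverse arguments are assumed to be implementations of the operations defined in Parts 2 and 3, and the block generation follows the experimental setup described next.

    # Sketch of the SNR experiment: for each quantization step, random 8x8
    # blocks pass through forward transform + quantization, then inverse
    # quantization + inverse transform, and the SNR between each input block
    # and its reconstruction is accumulated.
    import numpy as np

    def snr_db(original: np.ndarray, reconstructed: np.ndarray) -> float:
        """Signal-to-noise ratio in dB between a block and its reconstruction."""
        noise = np.sum((original - reconstructed) ** 2)
        if noise == 0:
            return float("inf")
        return 10.0 * np.log10(np.sum(original.astype(float) ** 2) / noise)

    def run_experiment(forward, quantize, dequantize, inverse,
                       qps=range(64), blocks_per_qp=6000, seed=0):
        rng = np.random.default_rng(seed)
        results = {}
        for qp in qps:
            total = 0.0
            for _ in range(blocks_per_qp):
                f = rng.integers(-255, 256, size=(8, 8))  # pixels in [-255, 255]
                rec = inverse(dequantize(quantize(forward(f), qp), qp))
                total += snr_db(f, rec)
            results[qp] = total / blocks_per_qp
        return results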
In the table below, the column "reference T&Q" shows the signal-to-noise ratio (SNR0) for application of the AVS transform plus quantization followed by the inverse quantization and inverse transform described in Part 2. The "simplified T&Q" columns show the difference between the signal-to-noise ratio (SNR1) for application of the preferred embodiment forward transform with various values of N (together with AVS quantization followed by AVS inverse quantization and AVS inverse transform) and the SNR0 of the same blocks; that is, in these cases only the forward transform changes, and everything else remains the same. All quantization steps (qp = 0, 1, 2, ..., 63) were tested. Each qp was tested with 6000 random 8×8 blocks with pixel values in the range [−255, 255]. The SNR between the input sample blocks and their reconstructed blocks was calculated for each qp over all test sample blocks (see Figure 3). The results for N = 16, 15, 14, 13, 12, and 11 are listed in the table.
As shown in the table, as long as N ≥ 13, the preferred embodiment simplified forward transform performs almost identically to the AVS forward transform. However, when N ≤ 12, significant loss starts to appear in the high-SNR region (> 50 dB).
Since the complexity for a 16-bit device is almost the same for any N ≤ 16, the preferred embodiment simplified transform method (16 ≥ N ≥ 13) provides the same compression efficiency as the current AVS transform design but with lower computational complexity.
5. Modifications
The preferred embodiment methods can be modified in various ways while maintaining the characteristic of a simplified forward transform. For example, the rounding can be changed.
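One such modification can be illustrated concretely. The sketch below shows a few alternative roundings for the final N-bit shift of the preferred embodiment forward transform; these variants are illustrative assumptions about how "the rounding can be changed", not alternatives defined by the text, and any such change would need to be applied consistently at encoder and decoder.

    # Rounding variants for the final N-bit shift (illustrative assumptions).
    def shift_round_half_up(x: int, n: int) -> int:
        """Rounded right shift used throughout this document: (x + 2**(n-1)) >> n."""
        return (x + (1 << (n - 1))) >> n

    def shift_truncate(x: int, n: int) -> int:
        """Plain truncating right shift (rounds toward negative infinity)."""
        return x >> n

    def shift_biased(x: int, n: int, bias: int) -> int:
        """Right shift with an arbitrary bias, e.g. 2**(n-1) - 1 to break ties down."""
        return (x + bias) >> n

    if __name__ == "__main__":
        for x in (100, 115, -100):
            print(x, shift_round_half_up(x, 4), shift_truncate(x, 4),
                  shift_biased(x, 4, (1 << 3) - 1))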
An indication that an allocation unit of a memory sub-system has become unmapped can be received. In response to receiving the indication that the allocation unit of the memory sub-system has become unmapped, the allocation unit can be programmed with a data pattern. Data to be written to the unmapped allocation unit can be received. A write operation can be performed to program the received data at the unmapped allocation unit by using a read voltage that is based on the data pattern. |
1. A method comprising:
receiving an indication that an allocation unit of a memory subsystem has become unmapped;
in response to receiving the indication that the allocation unit of the memory subsystem has become unmapped, programming, by a processing device, the allocation unit with a data pattern;
receiving data to be written to the unmapped allocation unit; and
programming the received data at the unmapped allocation unit by performing a write operation using a read voltage that is based on the data pattern.
2. The method of claim 1, wherein the performing of the write operation comprises:
performing a pre-read operation of the unmapped allocation unit by applying the read voltage to the unmapped allocation unit to retrieve a stored value;
comparing the stored value with a value of the received data; and
determining whether to write the value of the received data at the allocation unit based on the comparison of the stored value with the value of the received data.
3. The method of claim 1, wherein the allocation unit comprises a plurality of memory cells, and wherein the programming of the allocation unit with the data pattern corresponds to programming the plurality of memory cells to be at a high voltage state, and wherein the read voltage is lower than the voltage level of the high voltage state.
4. The method of claim 1, wherein the allocation unit comprises a plurality of memory cells, and wherein the programming of the allocation unit with the data pattern corresponds to programming the plurality of memory cells to be at a low voltage state, and wherein the read voltage is higher than the voltage level of the low voltage state.
5. The method of claim 1, wherein the allocation unit is unmapped based on removing the allocation unit from a logical address space of a host system.
6. The method of claim 5, further comprising:
receiving an indication that the allocation unit is to change from unmapped to mapped based on the allocation unit being added to the logical address space of the host system, and wherein the data to be written to the allocation unit is received in view of the allocation unit being added to the logical address space.
7. The method of claim 1, wherein the data pattern is based on a direction in which threshold voltage distributions of the allocation unit change.
8. A non-transitory computer-readable medium comprising instructions that, when executed by a processing device, cause the processing device to perform operations comprising:
receiving an indication that an allocation unit of a memory subsystem has become unmapped;
in response to receiving the indication that the allocation unit of the memory subsystem has become unmapped, programming the allocation unit with a data pattern;
receiving data to be written to the unmapped allocation unit; and
programming the received data at the unmapped allocation unit by performing a write operation using a read voltage that is based on the data pattern.
9. The non-transitory computer-readable medium of claim 8, wherein to perform the write operation, the operations further comprise:
performing a pre-read operation of the unmapped allocation unit by applying the read voltage to the unmapped allocation unit to retrieve a stored value;
comparing the stored value with a value of the received data; and
determining whether to write the value of the received data at the allocation unit based on the comparison of the stored value with the value of the received data.
10.
The non-transitory computer-readable medium of claim 8, wherein the allocation unit comprises a plurality of memory cells, and wherein the programming of the allocation unit with the data pattern corresponds to programming the plurality of memory cells to be at a high voltage state, and wherein the read voltage is lower than the voltage level of the high voltage state.
11. The non-transitory computer-readable medium of claim 8, wherein the allocation unit comprises a plurality of memory cells, and wherein the programming of the allocation unit with the data pattern corresponds to programming the plurality of memory cells to be at a low voltage state, and wherein the read voltage is higher than the voltage level of the low voltage state.
12. The non-transitory computer-readable medium of claim 8, wherein the allocation unit is unmapped based on removing the allocation unit from a logical address space of a host system.
13. The non-transitory computer-readable medium of claim 12, wherein the operations further comprise:
receiving an indication that the allocation unit is to change from unmapped to mapped based on the allocation unit being added to the logical address space of the host system, and wherein the data to be written to the allocation unit is received in view of the allocation unit being added to the logical address space.
14. The non-transitory computer-readable medium of claim 8, wherein the data pattern is based on a direction in which threshold voltage distributions of the allocation unit change.
15. A system comprising:
a memory component; and
a processing device, operatively coupled with the memory component, to:
receive an indication to remove a group of memory cells of a memory subsystem from a logical address space used to access the memory subsystem;
in response to receiving the indication, remove the group of memory cells of the memory subsystem from the logical address space; and
program the group of memory cells that has been removed from the logical address space with a voltage state.
16. The system of claim 15, wherein the processing device is further to:
receive an indication to return the group of memory cells to the logical address space;
receive data to be written to the group of memory cells that has been returned to the logical address space; and
program the received data at the group of memory cells by performing a write operation using a read voltage that is based on the voltage state.
17. The system of claim 16, wherein to perform the write operation, the processing device is to:
perform a pre-read operation for the group of memory cells by applying the read voltage to the group of memory cells to retrieve a stored value;
compare the stored value with a value of the received data; and
determine whether to write the value of the received data at the group of memory cells based on the comparison of the stored value with the value of the received data.
18. The system of claim 15, wherein the voltage state corresponds to a high voltage state, and wherein the read voltage is lower than a voltage level of the high voltage state.
19. The system of claim 15, wherein the voltage state corresponds to a low voltage state, and wherein the read voltage is higher than a voltage level of the low voltage state.
20. The system of claim 15, wherein the voltage state is based on a direction in which a threshold voltage distribution of the group of memory cells changes.
Management of Unmapped Allocation Units for the Memory Subsystem
TECHNICAL FIELD
The present disclosure relates generally to a memory subsystem, and more particularly, to the management of unmapped allocation units of the memory subsystem.
BACKGROUND
A memory subsystem may be a storage system, a memory module, or a mix of storage devices and memory modules. The memory subsystem may include one or more memory components that store data. The memory components may be, for example, non-volatile memory components and volatile memory components. In general, a host system may utilize a memory subsystem to store data at the memory components and to retrieve data from the memory components.
BRIEF DESCRIPTION OF THE DRAWINGS
The present disclosure will be more fully understood from the detailed description given below and the accompanying drawings of various embodiments of the disclosure.
FIG. 1 illustrates an example computing environment including a memory subsystem, according to some embodiments of the present disclosure.
FIG. 2 is a flowchart of an example method to program a high voltage state at an unmapped allocation unit and perform a write operation with a minimum pre-read voltage, in accordance with some embodiments.
FIG. 3 illustrates voltage states associated with data patterns and pre-read voltages in accordance with some embodiments of the present disclosure.
FIG. 4 is a flowchart of an example method to manage unmapped allocation units based on data patterns and pre-read voltages, in accordance with some embodiments.
FIG. 5A illustrates a transition between unmapped allocation units and mapped allocation units in accordance with some embodiments of the present disclosure.
FIG. 5B illustrates a transition between unmapped allocation units and mapped allocation units based on host system commands in accordance with some embodiments of the present disclosure.
FIG. 5C illustrates a transition between unmapped allocation units and mapped allocation units based on wear leveling operations, in accordance with some embodiments of the present disclosure.
FIG. 6 is a block diagram of an example computer system in which embodiments of the present disclosure may operate.
DETAILED DESCRIPTION
Aspects of the present disclosure relate to the management of unmapped allocation units of a memory subsystem. The memory subsystem may be a storage device, a memory module, or a mix of storage devices and memory modules. Examples of memory devices and memory modules are described below in conjunction with FIG. 1. In general, a host system may utilize a memory subsystem that includes one or more memory components. The host system can provide data for storage at the memory subsystem, and can request data to be retrieved from the memory subsystem.
Conventional memory subsystems may store data at allocation units. Allocation units may be individual sectors or portions of a memory subsystem that are individually accessible by a host system. For example, an allocation unit may be used to store the smallest amount of data that can be individually retrieved from or written to the memory subsystem. In some embodiments, an allocation unit may be one or more memory cells of a memory component included in a memory subsystem. Allocation units can be mapped or unmapped. A mapped allocation unit may refer to an allocation unit that has been assigned to the logical address space used by the host system. For example, the mapped allocation unit may currently be used to store and retrieve data from the host system.
An unmapped allocation unit may refer to an allocation unit that is not currently assigned to the logical address space used by the host system. For example, an unmapped allocation unit may be a group of over-provisioned data blocks in the memory subsystem that are currently inaccessible to the host system. In some embodiments, an allocation unit may become unmapped when it is placed in an erased state.
Allocation units can be switched between being mapped and unmapped. For example, an allocation unit may be unmapped during the initial operational life of the memory subsystem, and may then be designated as accessible by the host system and become mapped. In some embodiments, allocation units may be mapped and may become unmapped in response to host commands (e.g., trim commands) or wear leveling operations performed by the memory subsystem. Later, the unmapped allocation unit may be returned to being mapped in response to another host command or wear leveling operation.
Write operations performed by the memory subsystem may utilize pre-read sub-operations. For example, the memory subsystem may utilize cross-point array memory, where a write operation may be an in-place write operation that can program the memory cells of the cross-point array memory without erasing the memory cells. In such an in-place write operation, the pre-read sub-operation may retrieve the current state (e.g., value) of a memory cell, and the value of the memory cell may be changed if the value to be written is different from the current value of the memory cell. For example, if a memory cell currently stores a value of '1', and a value of '1' is to be written, the value of '1' will not be reprogrammed to the memory cell because the stored value matches the requested value.
Memory cells may use a low voltage (LV) state and a high voltage (HV) state to represent different bit values. For example, the LV state may represent a bit value of '0' and the HV state may represent a bit value of '1' (or vice versa). The presence of the LV state or the HV state (i.e., the threshold voltage distribution) can be detected at a memory cell by applying a voltage (i.e., a read threshold voltage) to the memory cell. However, due to the physical characteristics of the memory cells, the threshold voltage distribution of a memory cell can change or migrate over time. Therefore, applying a read threshold voltage to the memory cell as part of a pre-read operation can result in an incorrect value being retrieved. For example, for one type of media component where the threshold voltage distribution shifts over time toward higher voltages, it may be incorrectly determined that a memory cell is in the HV state rather than the originally programmed LV state. Then, since the pre-read operation may return the incorrect value, the memory subsystem may determine not to program the memory cell with the new value if the new value to be programmed matches the value represented by the incorrect HV state. Consequently, the number of errors stored at the memory subsystem may increase.
Aspects of the present disclosure address the above and other deficiencies by managing unmapped allocation units of a memory subsystem. For example, when an allocation unit becomes unmapped, the memory subsystem may perform a write operation at the allocation unit. The write operation may place the memory cells of the allocation unit in a certain state.
Memory cells may use a low voltage (LV) state and a high voltage (HV) state to represent different bit values. For example, the LV state may represent a bit value of '0' and the HV state may represent a bit value of '1' (or vice versa). The presence of the LV state or the HV state (i.e., the threshold voltage distribution) can be detected at a memory cell by applying a voltage (i.e., a read threshold voltage) to the memory cell. However, due to physical characteristics of the memory cells, the threshold voltage distributions of the memory cells can change or migrate over time. Therefore, applying a read threshold voltage to a memory cell as part of a pre-read operation can result in an incorrect value being retrieved. For example, for a type of media component whose threshold voltage distributions shift over time toward higher voltages, a memory cell may be incorrectly determined to be in the HV state rather than the originally programmed LV state. Since the pre-read operation may return an incorrect value, the memory subsystem may determine not to program the memory cell with a new value if the new value to be programmed matches the value represented by the incorrect HV state. Consequently, the number of errors stored at the memory subsystem may increase.

Aspects of the present disclosure address the above and other deficiencies by managing unmapped allocation units of a memory subsystem. For example, when an allocation unit becomes unmapped, the memory subsystem may perform a write operation at the allocation unit. The write operation may place the memory cells of the allocation unit in a certain state. For example, based on the characteristics of the media components (i.e., the direction of threshold voltage shift), a data pattern (e.g., the high voltage state) can be programmed to each memory cell of the allocation unit. Subsequently, when the allocation unit is to become mapped, data from the host system can be programmed to the allocation unit. For example, the data may be written to the allocation unit based on a pre-read voltage that is lower than the voltage of the data pattern or high voltage state. For example, the read threshold voltage used during pre-read operations may be the lowest, or a lower, read threshold voltage available to the memory subsystem.

Advantages of the present disclosure include, but are not limited to, improved performance of the memory subsystem, as data retrieved from the memory subsystem may contain fewer errors. For example, for media component types whose threshold voltage distributions shift toward higher voltages, a data pattern corresponding to the high voltage state may be programmed to the memory cells of an allocation unit when the allocation unit is unmapped. Fewer errors may then result from the pre-read sub-operation performed with a lower read threshold voltage, because the lower read threshold voltage can more accurately detect the presence of the high voltage state applied as the data pattern of the unmapped allocation unit. Therefore, a write operation that uses the results of the pre-read sub-operation can program data accurately, because the results of the pre-read sub-operation are used to determine whether to program the memory cells of the allocation unit. Thus, the performance of the memory subsystem can be improved, because fewer error correction operations may be performed. For example, fewer read retry operations will be performed to retrieve data from allocation units. Therefore, more read and write operations can be performed by the memory subsystem.

FIG. 1 illustrates an example computing environment 100 that includes a memory subsystem 110, in accordance with some embodiments of the present disclosure. Memory subsystem 110 may include media, such as memory components 112A-112N. Memory components 112A-112N may be volatile memory components, non-volatile memory components, or a combination of such components. Memory subsystem 110 may be a storage device, a memory module, or a mix of storage devices and memory modules. Examples of storage devices include solid state drives (SSDs), flash drives, universal serial bus (USB) flash drives, embedded multimedia controller (eMMC) drives, universal flash storage (UFS) drives, and hard disk drives (HDDs). Examples of memory modules include dual in-line memory modules (DIMMs), small outline DIMMs (SO-DIMMs), and non-volatile dual in-line memory modules (NVDIMMs).

Computing environment 100 may include a host system 120 coupled to one or more memory subsystems 110. In some embodiments, host system 120 is coupled to different types of memory subsystems 110. FIG. 1 illustrates one example of a host system 120 coupled to one memory subsystem 110. Host system 120 uses memory subsystem 110, for example, to write data to memory subsystem 110 and to read data from memory subsystem 110.
As used herein, "coupled to" generally refers to a connection between components, which may be an indirect communication connection or a direct communication connection (eg, without intervening components), whether wired or wireless, including, for example, electrical connections, Optical connections, magnetic connections, etc.Host system 120 may be a computing device, such as a desktop computer, laptop computer, web server, mobile device, embedded computer (eg, a computer contained in a vehicle, industrial equipment, or networked business device), or may contain memory and Such computing devices of processing devices. Host system 120 may include or be coupled to memory subsystem 110 such that host system 120 may read data from or write data to memory subsystem 110 . Host system 120 may be coupled to memory subsystem 110 through a physical host interface. As used herein, "coupled to" generally refers to a connection between components, which may be an indirect communication connection or a direct communication connection (eg, without intervening components), whether wired or wireless, including, for example, electrical connections, Optical connections, magnetic connections, etc. Examples of physical host interfaces include, but are not limited to, Serial Advanced Technology Attachment (SATA) interfaces, Peripheral Component Interconnect Express (PCIe) interfaces, Universal Serial Bus (USB) interfaces, Fibre Channel, Serial Attached SCSI (SAS), etc. . A physical host interface may be used to transfer data between host system 120 and memory subsystem 110 . When memory subsystem 110 is coupled with host system 120 through a PCIe interface, host system 120 may also utilize an NVM Express (NVMe) interface to access memory components 112A-112N. The physical host interface may provide an interface for transferring control, address, data, and other signals between memory subsystem 110 and host system 120 .Memory components 112A-112N may include any combination of different types of non-volatile memory components and/or volatile memory components. Examples of non-volatile memory components include NAND-type flash memory. Each of memory components 112A-112N may include one or more arrays of memory cells, such as single-level cells (SLCs) or multi-level cells (MLCs) (eg, triple-level cells (TLCs) or Four Level Cell (QLC)). In some embodiments, a particular memory component may include both the SLC portion and the MLC portion of the memory cell. Each of the memory cells may store one or more bits of data (eg, blocks of data) used by host system 120 . Although non-volatile memory components such as NAND-type flash memory are described, memory components 112A-112N may be based on any other type of memory, such as volatile memory. In some embodiments, memory components 112A-112N may be, but are not limited to, random access memory (RAM), read only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), Phase Change Memory (PCM), Magnetic Random Access Memory (MRAM), or Not (NOR) Flash Memory, Electrically Erasable Programmable Read Only Memory (EEPROM), and Crosspoint Arrays of Nonvolatile Memory Cells. Cross-point arrays of non-volatile memory can perform bit storage based on changes in bulk resistance in conjunction with stackable cross-grid data access arrays. 
In addition, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write-in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. Furthermore, the memory cells of memory components 112A-112N can be grouped as memory pages or data blocks, which may refer to units of the memory component used to store data.

A memory system controller 115 (hereinafter the "controller") may communicate with memory components 112A-112N to perform operations such as reading data, writing data, or erasing data at memory components 112A-112N, and other such operations. Controller 115 may include hardware, such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. Controller 115 may be a microcontroller, special-purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor. Controller 115 may include a processor (processing device) 117 configured to execute instructions stored in a local memory 119. In the illustrated example, local memory 119 of controller 115 includes embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of memory subsystem 110, including handling communications between memory subsystem 110 and host system 120. In some embodiments, local memory 119 may include memory registers storing memory pointers, fetched data, and the like. Local memory 119 may also include read-only memory (ROM) for storing microcode. While the example memory subsystem 110 in FIG. 1 is illustrated as including controller 115, in another embodiment of the present disclosure, memory subsystem 110 may not include controller 115 and may instead rely on external control (e.g., provided by an external host, or by a processor or controller separate from the memory subsystem).

In general, controller 115 may receive commands or operations from host system 120 and may convert the commands or operations into instructions or appropriate commands to achieve the desired access to memory components 112A-112N. Controller 115 may be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error correction code (ECC) operations, encryption operations, caching operations, and address translation between a logical block address and a physical block address that are associated with memory components 112A-112N. Controller 115 may further include host interface circuitry to communicate with host system 120 via the physical host interface. The host interface circuitry may convert commands received from the host system into command instructions to access memory components 112A-112N, as well as convert responses associated with memory components 112A-112N into information for host system 120.

Memory subsystem 110 may also include additional circuitry or components that are not illustrated. In some embodiments, memory subsystem 110 may include a cache or buffer (e.g., DRAM) and address circuitry (e.g., row and column decoders) that may receive an address from controller 115 and decode the address to access memory components 112A-112N.

Memory subsystem 110 includes an allocation unit component 113 that may be used to manage allocation units of memory subsystem 110.
In some embodiments, controller 115 includes at least a portion of allocation unit component 113. For example, controller 115 may include a processor 117 (processing device) configured to execute instructions stored in local memory 119 for performing the operations described herein. In some embodiments, allocation unit component 113 is part of host system 120, an application, or an operating system. In the same or alternative embodiments, portions of allocation unit component 113 are part of host system 120, while other portions of allocation unit component 113 execute at controller 115.

Allocation unit component 113 may be used to manage the allocation units of the memory subsystem. An allocation unit may include one or more memory cells (i.e., a group of memory cells). Allocation units may become unmapped through operations from the host system or through operations performed by the memory subsystem (e.g., wear leveling, garbage collection). In response to an allocation unit changing from being mapped to being unmapped, a write operation may program a data pattern at the allocation unit. For example, each memory cell of the allocation unit may be programmed with the data pattern (e.g., a high voltage state). Subsequently, when host data is to be programmed to the allocation unit, a pre-read sub-operation may be performed with a lower, or the lowest, available read threshold voltage as part of the write operation. Further details with regard to the operations of allocation unit component 113 are described below.

FIG. 2 is a flowchart of an example method 200 to program a high voltage state at an unmapped allocation unit and to perform a write operation with a minimum pre-read voltage, in accordance with some embodiments. Method 200 may be performed by processing logic that may include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, method 200 is performed by allocation unit component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes may be modified. Thus, it should be understood that the illustrated embodiments are examples only, that the illustrated processes may be performed in a different order, and that some processes may be performed in parallel. Additionally, one or more processes may be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.

As shown in FIG. 2, at operation 210, the processing logic receives an indication that an allocation unit is unmapped. For example, an allocation unit may be unmapped during the initial operational lifetime of the memory subsystem that includes the allocation unit. In some embodiments, an allocation unit may transition from being mapped to becoming unmapped. For example, an allocation unit may be assigned a logical address of the logical address space used by the host system. The allocation unit may be unmapped by a wear leveling operation or a host command (i.e., a trim command). A wear leveling operation may remove the allocation unit from the logical address space and assign another, currently unmapped, allocation unit to the logical address of the removed allocation unit.
For example, an allocation unit may be removed from the logical address space when a threshold number of write operations have been performed at the allocation unit. A trim command may be an instruction from the host system to remove an allocation unit from the logical address space. For example, the allocation unit may be removed from being accessed by a logical address used by the host system. At operation 220, the processing logic performs a write operation at the allocation unit to program the allocation unit at a high voltage state. For example, a particular data pattern may be written to the one or more memory cells of the allocation unit. The data pattern may be the high voltage state being stored at each memory cell of the allocation unit. At operation 230, the processing logic receives data to be written to the allocation unit that has been programmed at the high voltage state. For example, the data may be received from a host system. In some embodiments, the allocation unit may become available to the host system when the allocation unit becomes mapped to the logical address space of the host system. For example, a wear leveling operation may remove another allocation unit from the logical address space and may add the allocation unit to the logical address space.

Furthermore, at operation 240, the processing logic writes the received data at the allocation unit based on a pre-read voltage that is lower than a voltage of the high voltage state. For example, a write operation to program the received data at the allocation unit may utilize a pre-read sub-operation that reads or retrieves the values of the memory cells of the allocation unit and compares the retrieved values with the values that are intended to be written to the memory cells. If the values match, then the write operation will not program the memory cell, because the intended value is already stored at the memory cell; thus, no voltage signal is applied to the memory cell to change the stored value. Otherwise, if the values do not match, then the write operation will program the memory cell to update or change the value at the memory cell to match the intended value of the received data; thus, a voltage signal is applied to the memory cell to change the stored value. As previously discussed, the pre-read voltage may be lower than the voltage of the high voltage state. In some embodiments, the memory subsystem may perform the pre-read sub-operation with multiple different pre-read voltages. The lowest available pre-read voltage may be selected for the pre-read sub-operation. In some embodiments, a lower, but not the lowest, available pre-read voltage may be selected for the pre-read sub-operation. Utilizing a lower pre-read voltage may generate fewer errors when performing the pre-read sub-operation, so that the determination of whether to change the value stored at a memory cell can be more accurate.
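The sequence of operations 210 through 240 can likewise be sketched; this illustration builds on the hypothetical helpers of the earlier sketch, and the table of available pre-read voltages and the HV_STATE encoding are assumed example values rather than values taken from the disclosure.

    #include <stddef.h>
    #include <stdint.h>

    /* Helpers declared in the earlier sketch. */
    extern void program_cell(size_t cell, uint8_t value);
    extern void write_in_place(size_t first_cell, const uint8_t *data,
                               size_t n_cells, uint16_t pre_read_mv);

    #define HV_STATE 1u  /* assumed encoding of the high voltage state */

    /* Assumed table of available pre-read voltages, lowest first. */
    static const uint16_t pre_read_mv[] = { 150u, 250u, 350u, 450u };

    /* Operation 220: on unmapping, program the data pattern (here the
     * HV state) to every memory cell of the allocation unit. */
    void on_unit_unmapped(size_t first_cell, size_t n_cells)
    {
        for (size_t i = 0; i < n_cells; i++)
            program_cell(first_cell + i, (uint8_t)HV_STATE);
    }

    /* Operations 230-240: on mapping, write host data using the lowest
     * available pre-read voltage, since the cells were left at HV. */
    void on_unit_mapped(size_t first_cell, const uint8_t *host_data,
                        size_t n_cells)
    {
        write_in_place(first_cell, host_data, n_cells, pre_read_mv[0]);
    }

In this sketch, selecting pre_read_mv[0] reflects the choice of the lowest available pre-read voltage discussed above; a design for the opposite drift direction would select the highest entry instead.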
As described above, the data pattern may be the high voltage state, with a lower pre-read voltage utilized during the pre-read sub-operation. In some embodiments, the data pattern may instead be a low voltage state, and a higher pre-read voltage may be utilized during the pre-read sub-operation. The higher pre-read voltage may be higher than the voltage level of the low voltage state. The data pattern that is used may be based on, or depend on, the type of the media components. For example, threshold voltage distributions of different types of media components may change or shift in different directions with respect to voltage. In some embodiments, the threshold voltage distribution of one type of media component may change or migrate over time toward higher voltages, while the threshold voltage distribution of another type of media component may change or migrate over time toward lower voltages. The data pattern may be the high voltage state or the low voltage state based on the direction in which the threshold voltage distributions of the type of media component change or migrate. For example, the data pattern may correspond to the high voltage state if the threshold voltage distributions (e.g., the LV state and the HV state) of a particular type of media component change or migrate toward higher voltages over time. Otherwise, the data pattern may correspond to the low voltage state if the threshold voltage distributions (e.g., the LV state and the HV state) of a particular type of media component change or migrate toward lower voltages over time. Thus, the data pattern may be based on a characteristic of the media component (i.e., the direction of threshold voltage shift).

FIG. 3 illustrates voltage states associated with data patterns and pre-read voltages, in accordance with some embodiments of the present disclosure. In some embodiments, the memory cells can be placed at a particular voltage, and the particular pre-read voltage can be selected by allocation unit component 113 of FIG. 1.

As shown in FIG. 3, memory cells can be programmed at a low voltage (LV) state or a high voltage (HV) state to represent different bit values (e.g., '0' or '1', or vice versa). Additionally, the memory subsystem may perform pre-read sub-operations with multiple pre-read voltages. For example, as shown, the memory subsystem may utilize a lower pre-read voltage 310, two intermediate pre-read voltages, and a higher pre-read voltage 320. If the memory cells are programmed at the high voltage state, then using the higher pre-read voltage 320 may generate more errors than using the lower pre-read voltage 310. Alternatively, if the memory cells are programmed at the low voltage state, then using the lower pre-read voltage 310 may generate more errors than using the higher pre-read voltage 320.

FIG. 4 is a flowchart of an example method 400 to manage unmapped allocation units based on data patterns and pre-read voltages, in accordance with some embodiments. Method 400 may be performed by processing logic that may include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, method 400 is performed by allocation unit component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes may be modified. Thus, it should be understood that the illustrated embodiments are examples only, that the illustrated processes may be performed in a different order, and that some processes may be performed in parallel. Additionally, one or more processes may be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.

As shown in FIG. 4, at operation 410, the processing logic receives an indication that an allocation unit has become unmapped from a logical address space of a host system. In some embodiments, the allocation unit may become unmapped in response to a wear leveling operation or a trim command from the host system.
For example, the allocation unit may be removed from the logical address space of the host system and replaced with another allocation unit as part of a wear leveling operation, or the allocation unit may be removed from the logical address space in response to a trim command from the host system. At operation 420, in response to receiving the indication that the allocation unit has become unmapped, the processing logic programs the memory cells of the allocation unit with a data pattern. As previously described, the data pattern can be a high voltage state or a low voltage state based on the type of the media component. For example, each memory cell of the allocation unit may be programmed at the high voltage state if the threshold voltage distributions of the memory cells migrate or change toward higher voltages, or each memory cell of the allocation unit may be programmed at the low voltage state if the threshold voltage distributions migrate or change toward lower voltages. At operation 430, the processing logic receives a subsequent indication that the allocation unit is to become mapped with data. For example, the allocation unit may be added to the logical address space to replace another allocation unit that is to become unmapped and removed from the logical address space. For example, the allocation unit may be added to the logical address space when a wear leveling operation has unmapped another allocation unit and the data of the unmapped allocation unit is to be stored at the new allocation unit. In some embodiments, the allocation unit may be added to the logical address space to replace a prior allocation unit that was subjected to a trim command from the host system.

As shown in FIG. 4, at operation 440, the processing logic performs a write operation to write the data at the allocation unit by using a pre-read voltage that is based on the data pattern. For example, the write operation may include a pre-read sub-operation as previously described. The pre-read voltage of the pre-read sub-operation may be based on the voltage state of the data pattern. For example, if the data pattern programmed to the memory cells of the allocation unit is the low voltage state (e.g., based on one media component type), then a higher, or the highest, pre-read voltage may be selected for the pre-read sub-operation. Otherwise, if the data pattern programmed to the memory cells of the allocation unit is the high voltage state (e.g., based on another media component type), then a lower, or the lowest, pre-read voltage may be selected for the pre-read sub-operation.
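The dependence on media component type can be expressed as a small selection rule; the enum, the structure, and the voltage values below are illustrative assumptions rather than limitations of the embodiments.

    #include <stdint.h>

    /* Direction in which a media type's threshold voltage
     * distributions migrate over time. */
    enum drift_direction { DRIFT_TOWARD_HIGHER_V, DRIFT_TOWARD_LOWER_V };

    struct unmap_policy {
        uint8_t  data_pattern;  /* state programmed while unmapped */
        uint16_t pre_read_mv;   /* pre-read voltage used when remapping */
    };

    /* Select the data pattern and pre-read voltage from the drift
     * direction; the voltage values are placeholders. */
    struct unmap_policy policy_for(enum drift_direction d)
    {
        struct unmap_policy p;
        if (d == DRIFT_TOWARD_HIGHER_V) {
            p.data_pattern = 1u;   /* high voltage state */
            p.pre_read_mv = 150u;  /* lowest available pre-read voltage */
        } else {
            p.data_pattern = 0u;   /* low voltage state */
            p.pre_read_mv = 450u;  /* highest available pre-read voltage */
        }
        return p;
    }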
FIG. 5A illustrates transitions between unmapped allocation units and mapped allocation units, in accordance with some embodiments of the present disclosure. In some embodiments, the management of allocation units may be performed by allocation unit component 113 of FIG. 1.

As shown in FIG. 5A, an allocation unit may be unmapped upon initialization of the memory subsystem. For example, at state 510, during the initial operational lifetime of the memory subsystem, the allocation unit may be unmapped because the memory subsystem has not yet been used by any host system. In some embodiments, the allocation units of the memory subsystem can be programmed at the high voltage state or the low voltage state at the time of manufacture, using the data pattern that is based on the type of media components used in the memory subsystem. When the allocation unit transitions to state 511 to become a mapped allocation unit, the lowest pre-read voltage may be used (or, depending on the type of media component, the highest pre-read voltage may be used). For example, when the allocation unit is first added to the logical address space, the lowest pre-read voltage may be selected for the first write operation that is to write data at the allocation unit.

FIG. 5B illustrates transitions between unmapped allocation units and mapped allocation units based on host system commands, in accordance with some embodiments of the present disclosure. In some embodiments, the management of allocation units may be performed by allocation unit component 113 of FIG. 1.

As shown in FIG. 5B, an allocation unit may be mapped at state 520. For example, the allocation unit has previously been added to the logical address space of the host system. Subsequently, the host system may issue a trim command that removes the allocation unit from the logical address space. For example, the trim command specifies that the allocation unit is to be unmapped. In response to the allocation unit being removed from the logical address space, a write operation may be performed at the allocation unit to place the allocation unit at the high voltage state (or, depending on the type of media component, at the low voltage state). Thus, at state 521, the allocation unit may be unmapped and placed at the high voltage state. Then, when the allocation unit is to be returned to the logical address space, the data to be stored at the allocation unit can be written to the allocation unit by using the lowest pre-read voltage (or, depending on the type of media component, the highest pre-read voltage). Accordingly, at state 522, the allocation unit may become mapped in the logical address space and may store data from the host system.

FIG. 5C illustrates transitions between unmapped allocation units and mapped allocation units based on wear leveling operations, in accordance with some embodiments of the present disclosure. In some embodiments, the management of allocation units may be performed by allocation unit component 113 of FIG. 1.

As shown in FIG. 5C, at state 530, an allocation unit may be mapped as it is added to the logical address space of the host system. Subsequently, the memory subsystem may perform a wear leveling operation that removes the allocation unit from the logical address space. For example, the wear leveling operation may remove the allocation unit from the logical address space, add a new allocation unit to the logical address space, and store the data from the removed allocation unit at the new allocation unit. At state 531, when the allocation unit is unmapped, the allocation unit may be programmed at the high voltage state (or, depending on the type of media component, at the low voltage state). Then, at state 532, when the allocation unit is returned to the logical address space (e.g., by being added back in response to a subsequent wear leveling operation) and becomes mapped, data can be written to the allocation unit by using the lowest pre-read voltage (or, depending on the type of media component, the highest pre-read voltage).
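The transitions of FIGS. 5A, 5B, and 5C can be summarized in a short event-dispatch sketch; the event names and the helper functions are assumed for illustration and do not limit the embodiments.

    #include <stddef.h>

    enum au_event { EV_TRIM, EV_WEAR_LEVEL_OUT, EV_MAP_IN };

    /* Hypothetical helpers corresponding to the operations above. */
    extern void program_data_pattern(size_t unit);        /* e.g., HV state */
    extern void write_with_lowest_pre_read(size_t unit);  /* first host write */

    void on_allocation_unit_event(size_t unit, enum au_event ev)
    {
        switch (ev) {
        case EV_TRIM:            /* FIG. 5B: host trim command */
        case EV_WEAR_LEVEL_OUT:  /* FIG. 5C: wear leveling removal */
            /* The unit leaves the logical address space: program the
             * data pattern while it is unmapped. */
            program_data_pattern(unit);
            break;
        case EV_MAP_IN:          /* FIGS. 5A-5C: the unit is (re)mapped */
            /* The first write after mapping uses the lowest (or, for
             * the opposite drift direction, highest) pre-read voltage. */
            write_with_lowest_pre_read(unit);
            break;
        }
    }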
FIG. 6 illustrates an example machine of a computer system 600 within which a set of instructions can be executed for causing the machine to perform any one or more of the methodologies discussed herein. In some embodiments, computer system 600 may correspond to a host system (e.g., host system 120 of FIG. 1) that includes, is coupled to, or utilizes a memory subsystem (e.g., memory subsystem 110 of FIG. 1), or it may be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to allocation unit component 113 of FIG. 1). In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine may operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.

The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a network appliance, a server, a network router, a switch or bridge, digital or non-digital circuitry, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

The example computer system 600 includes a processing device 602, a main memory 604 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 606 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system 618, which communicate with each other via a bus 630.

Processing device 602 represents one or more general-purpose processing devices, such as a microprocessor, a central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. Processing device 602 may also be one or more special-purpose processing devices, such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. Processing device 602 is configured to execute instructions 626 for performing the operations and steps discussed herein. Computer system 600 may further include a network interface device 608 to communicate over a network 620.

Data storage system 618 may include a machine-readable storage medium 624 (also known as a computer-readable medium) on which is stored one or more sets of instructions 626 or software embodying any one or more of the methodologies or functions described herein. Instructions 626 may also reside, completely or at least partially, within main memory 604 and/or within processing device 602 during execution thereof by computer system 600, with main memory 604 and processing device 602 also constituting machine-readable storage media. Machine-readable storage medium 624, data storage system 618, and/or main memory 604 may correspond to memory subsystem 110 of FIG. 1.

In one embodiment, instructions 626 include instructions to implement functionality corresponding to an allocation unit component (e.g., allocation unit component 113 of FIG. 1).
While machine-readable storage medium 624 is shown in an example embodiment to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term "machine-readable storage medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present disclosure. The term "machine-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.

Some portions of the preceding detailed description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the actions and processes of a computer system, or a similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system's memories or registers or other such information storage systems.

The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may include a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk (including floppy disks, optical disks, CD-ROMs, and magneto-optical disks), a read-only memory (ROM), a random access memory (RAM), an EPROM, an EEPROM, a magnetic or optical card, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.

The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the methods described. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language.
It should be appreciated that a variety of programming languages may be used to implement the teachings of the present disclosure as described herein.

The present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions that may be used to program a computer system (or other electronic device) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine-readable (e.g., computer-readable) storage medium such as a read-only memory ("ROM"), a random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory components, etc.

In the foregoing specification, embodiments of the present disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications may be made to the disclosure without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. |
The invention relates to a real-time trigger to dump an error log. In various embodiments, a technique can be provided to address debug efficiency for failures found on an operational system. The approach can make use of an existing pin on a memory device, with added logic to respond to a trigger signal structured differently from a signal that is normally sent to the existing pin on the memory device, where the memory device performs a normal or routine function of the memory device in response to the normal signal. In response to detecting one or more error conditions associated with the memory device, a system that interfaces with the memory device can generate the trigger signal to the memory device. In response to receiving the trigger signal, the memory device can dump an error log of the memory device to a memory component in the memory device. The error log can later be retrieved from the memory component for failure analysis. |
1. A memory device comprising:
a timing circuit that determines an occurrence of a trigger signal received on a pin of the memory device; and
a processor configured to execute instructions stored on one or more components in the memory device, the instructions, when executed by the processor, causing the memory device to perform operations, the operations including, in response to the determination of the occurrence of the trigger signal, dumping an error log associated with one or more error conditions to a memory of the memory device.
2. The memory device of claim 1, wherein the pin is a pin that receives a signal for performing a function of the memory device, wherein the signal is different from the trigger signal.
3. The memory device of claim 2, wherein the trigger signal undergoes a plurality of toggles within a time corresponding to a specified length of time in which the signal is specified to be pulled low or specified to be pulled high.
4. The memory device of claim 2, wherein the pin is a reset pin that receives a reset signal to identify a reset event of the memory device, wherein the trigger signal is different from the reset signal.
5. The memory device of claim 4, wherein the trigger signal undergoes a plurality of toggles with a toggle period of approximately two hundred nanoseconds.
6. The memory device of claim 1, wherein the operations include the memory device completing in-progress tasks and saving cached host data in response to the determination of the occurrence of the trigger signal.
7. The memory device of claim 1, wherein the instructions are stored in a dedicated portion of the memory device, the dedicated portion being separate from firmware that controls data management of the memory device for data storage.
8. The memory device of claim 7, wherein the dedicated portion of the memory device is a portion of a static random access memory or a read-only memory.
9. The memory device of claim 1, wherein the error log contains hardware information and firmware information.
10. The memory device of claim 1, wherein the error log includes one or more of a data timeout, a data mismatch, a fatal error, an initialization timeout, and a stuck system firmware identification.
11. The memory device of claim 1, wherein the operations include transmitting the error log, dumped to the memory of the memory device, from the memory to a host.
12. A method of saving an error log of a memory device, the method comprising:
receiving a signal at a pin of the memory device;
determining, based on timing parameters of the signal, whether the signal is a trigger signal received on the pin of the memory device; and
in response to determining that the signal is the trigger signal, dumping the error log associated with one or more error conditions to a memory of the memory device.
13. The method of claim 12, wherein determining whether the received signal is the trigger signal includes determining whether the received signal undergoes multiple toggles within a specified period of time corresponding to a time in which a non-error signal assigned to the pin is pulled low or pulled high.
14. The method of claim 12, wherein the pin is a reset pin that receives a reset signal to identify a reset event of the system, and wherein the trigger signal is different from the reset signal.
15. The method of claim 12, wherein determining whether the signal is the trigger signal and dumping the error log are performed by a processor of the memory
device executing instructions, wherein the instructions are stored in a dedicated portion of the memory device, the dedicated portion being separate from firmware that controls data management for storing data in the memory device.
16. A system interfacing with a memory device, the system comprising:
a processor configured to execute instructions stored on one or more components in the system, the instructions, when executed by the processor, causing the system to perform operations, the operations including:
detecting one or more error conditions associated with the memory device;
generating a trigger signal having specified timing parameters; and
in response to the detection of the one or more error conditions, transmitting the trigger signal to a pin of the memory device to trigger a dump of an error log in the memory device, the pin being allocated to a function of the memory device other than triggering the dump.
17. The system of claim 16, wherein the pin is a reset pin of the memory device, the reset pin configured to receive a reset signal from the system to identify a reset event of the memory device, wherein the reset signal is different from the trigger signal.
18. The system of claim 17, wherein the trigger signal is structured to undergo multiple toggles within a time corresponding to a specified length of time in which the reset signal is pulled low or pulled high.
19. The system of claim 16, wherein the error log includes one or more of a data timeout, a data mismatch, a fatal error, an initialization timeout, and a stuck firmware identification.
20. The system of claim 16, wherein the operations include receiving, from the memory of the memory device, the error log dumped to the memory of the memory device. |
Real-Time Trigger to Dump an Error Log

Priority Application
This application claims the benefit of priority to U.S. Provisional Application No. 62/955,204, filed December 30, 2019, which is incorporated herein by reference in its entirety.

Technical Field
Embodiments of the present disclosure relate generally to memory systems and to systems that interact with memory systems and, more specifically, to the management of error logs associated with memory systems.

Background
Memory devices are typically provided as internal semiconductor integrated circuits in computers or other electronic devices. There are many different types of memory, including volatile and non-volatile memory. Volatile memory requires power to maintain its data and includes random access memory (RAM), dynamic random access memory (DRAM), and synchronous dynamic random access memory (SDRAM), among others. Non-volatile memory can retain stored data when not powered and includes flash memory, read-only memory (ROM), electrically erasable programmable ROM (EEPROM), erasable programmable ROM (EPROM), and resistance variable memory such as phase change random access memory (PCRAM), resistive random access memory (RRAM), magnetoresistive random access memory (MRAM), and 3D XPoint™ memory, among others.

Flash memory is utilized as non-volatile memory for a wide range of electronic applications. Flash memory devices typically include one or more groups of single-transistor, floating gate, or charge trap memory cells that allow for high memory density, high reliability, and low power consumption. Two common types of flash memory array architectures include NAND and NOR architectures, named after the logical form in which the basic memory cell configuration of each is arranged. The memory cells of a memory array are typically arranged in a matrix. In an example, the gate of each floating gate memory cell in a row of the array is coupled to an access line (e.g., a word line). In the NOR architecture, the drain of each memory cell in a column of the array is coupled to a data line (e.g., a bit line).
In the NAND architecture, the drains of the memory cells in a string of the array are coupled together in series, source to drain, between a source line and a bit line.

Summary
An aspect of the present disclosure provides a memory device that includes: a timing circuit that determines an occurrence of a trigger signal received on a pin of the memory device; and a processor configured to execute instructions stored on one or more components in the memory device, the instructions, when executed by the processor, causing the memory device to perform operations, the operations including, in response to the determination of the occurrence of the trigger signal, dumping an error log associated with one or more error conditions to a memory of the memory device.

Another aspect of the present disclosure provides a method of saving an error log of a memory device, the method including: receiving a signal at a pin of the memory device; determining, based on timing parameters of the signal, whether the signal is a trigger signal received on the pin of the memory device; and, in response to determining that the signal is the trigger signal, dumping the error log associated with one or more error conditions to a memory of the memory device.

Another aspect of the present disclosure provides a system interfacing with a memory device, the system including a processor configured to execute instructions stored on one or more components in the system, the instructions, when executed by the processor, causing the system to perform operations including: detecting one or more error conditions associated with the memory device; generating a trigger signal having specified timing parameters; and, in response to the detection of the one or more error conditions, transmitting the trigger signal to a pin of the memory device to trigger a dump of an error log in the memory device, the pin being allocated to a function of the memory device other than triggering the dump.

Brief Description of the Drawings
The drawings, which are not necessarily drawn to scale, illustrate generally, by way of example but not by way of limitation, various embodiments discussed in the present document.
FIG. 1 illustrates an example of an environment including a memory device, according to various embodiments.
FIGS. 2 and 3 illustrate schematic diagrams of examples of three-dimensional NAND architecture semiconductor memory arrays, according to various embodiments.
FIG. 4 illustrates an example block diagram of a memory module, according to various embodiments.
FIG. 5 is a block diagram illustrating an example of a machine on which one or more embodiments may be implemented, according to various embodiments.
FIG. 6 is a block diagram of an example system that includes a host operating with a memory device in a manner to trigger an error log dump in the memory device, according to various embodiments.
FIG. 7 illustrates an arrangement of several signals between a host and a memory device used in the operation of these devices, according to various embodiments.
FIG. 8 illustrates timing for a reset signal for the arrangement of FIG. 7, according to various embodiments.
FIG. 9 shows a table of reset timing parameters for the reset signal of FIG. 8, according to various embodiments.
FIG. 10 illustrates an example of toggling of a hardware reset signal, according to various embodiments.
FIG. 11 is a flowchart of features of an example method of saving an error log of a memory device, according to various embodiments.
FIG. 12 is a flowchart of features of an example method of saving an error log in a memory device by a system interfacing with the memory device, according to various embodiments.

Detailed Description
The following detailed description refers to the accompanying drawings, which show, by way of illustration, various embodiments that can be implemented. These embodiments are described in sufficient detail to enable those skilled in the art to practice these and other embodiments. Other embodiments may be utilized, and structural, logical, mechanical, and electrical changes may be made to these embodiments. The various embodiments are not necessarily mutually exclusive, as some embodiments can be combined with one or more other embodiments to form new embodiments. The following detailed description is, therefore, not to be taken in a limiting sense.

After a product such as a memory device is provided to consumers in a particular field of use, debugging of the product may continue. During in-field debugging of the product, it is difficult to capture all of the key information at the time a problem occurs. This difficulty can be associated with a real-time synchronization mechanism between the host and the memory device. Due to this type of synchronization difficulty, one cannot rely on replication of the problem, which can be inefficient, time-consuming, and sometimes extremely difficult. For example, when the link between the host and the memory device is lost, the host cannot immediately notify the device via a command that an error is occurring, because the protocol link is broken.

For Universal Flash Storage (UFS™) devices, there may be M-PHY/UniPro errors. UniPro is a relatively high-speed interface technology for interconnecting integrated circuits in mobile electronic devices and in electronic devices having mobile-related components or components influenced by mobile design. M-PHY (MPHY) is a high-speed data communications physical layer standard developed by the MIPI Alliance, which is a global, open membership organization that develops interface specifications for the mobile electronics environment. A UFS electrical interface at a UFS interconnect layer in a layered communication architecture can handle the connection between a UFS host and a UFS device. The M-PHY specification and the UniPro specification form the basis of the interconnect for the UFS interface. UniPro can be used to monitor bit error rates by communicating with the M-PHY physical layer. UniPro is a transport protocol that also tracks program retries in a retry or retransmission approach. Because a UFS device waits on commands from the UFS host, a loss of the link does not immediately correspond, at the device, to a loss of received signal from the UFS host. In the case of link loss, the host cannot immediately notify the device via a command that an error is occurring.

Error conditions can occur along with timeout failures. When the host detects that a command to the device with which it interfaces has timed out, the device may be busy with task processing. The task processing may be execution of instructions stored in firmware of the device, such that a notification of an error condition would not be processed at a similar time.
The notification of the error condition can be queued relative to the task processing. The firmware-based processing may even become stuck, such that it will miss any upcoming host protocol signals. Further, in either case, the device may lose the opportunity to update its error log. Error logs that are being actively updated are typically not maintained in non-volatile memory.

An error condition can occur in which the host drives a reset signal to the device with which it interfaces. The reset signal is a signal that forces the device to recover from an error state. However, this signal can come several seconds after the failure event occurs, and the device may lose the opportunity to update its error log.

Several current failure analysis techniques are based on code injection or on specific vendor commands. These approaches imply a change in the state of the device, and they often perform re-testing using dedicated test firmware.

In various embodiments, a real-time trigger from the host to the memory device can be generated, which can be used to notify the memory device to capture, in time, all required information in an error log of the memory device and to dump the error log, saving the information for later failure analysis. A dump is a save operation. The dump may be implemented by saving to a dedicated non-volatile memory that is allocated as a specific memory for the error log of the memory device. The dedicated non-volatile memory may be part of the memory of the memory device that stores user data, or part of a static random access memory (SRAM) or ROM. The real-time trigger can be implemented as a real-time hardware trigger. These techniques can allow for improved failure analysis capability, through a procedure that is not intrusive to the device, during internal qualification phases and during host client platform issues. Through the real-time trigger, the host can cause the device to dump relevant system information (both hardware and firmware) in the case of error conditions such as, but not limited to, a data timeout, a data mismatch, a fatal error, an initialization timeout, and a device firmware stuck condition.
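As a non-limiting illustration of what such a dump might save, the following C sketch shows one possible record layout and save step; every field, the nvm_write() helper, and the region address are assumptions made for illustration only, not part of the disclosure.

    #include <stdint.h>

    /* Illustrative error log record covering the conditions listed
     * above. The layout is an assumption, not a required format. */
    struct error_log {
        uint32_t error_flags;       /* timeout, mismatch, fatal, ... */
        uint32_t fw_state;          /* firmware task/state snapshot */
        uint32_t hw_status_regs[8]; /* selected hardware registers */
        uint64_t timestamp;         /* device time of the dump */
    };

    #define ERROR_LOG_NVM_ADDR 0x0u /* dedicated region (assumed) */

    /* Hypothetical helper that writes to the dedicated non-volatile
     * memory allocated for the error log. */
    extern int nvm_write(uint32_t addr, const void *buf, uint32_t len);

    int dump_error_log(const struct error_log *log)
    {
        /* Save to the dedicated non-volatile region so the log survives
         * power loss and can be retrieved for later failure analysis. */
        return nvm_write(ERROR_LOG_NVM_ADDR, log, sizeof *log);
    }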
In various embodiments, the memory device may include timing circuitry that determines an occurrence of a trigger signal received on a pin of the memory device. A memory device may include a number of memories for data storage, one or more processors, and other components that manage the memory device and access to data stored in the memory device. The memory device may have a number of pins to interface with devices external to the memory device, such as a host device. One or more processors of the memory device can execute instructions stored on one or more components in the memory device. When the instructions are executed by the one or more processors, the memory device may perform operations associated with the instructions. The operations may include, in response to the determination of the occurrence of the trigger signal at the pin, dumping an error log associated with one or more error conditions to a memory of the memory device.

The pin of the memory device that receives the trigger signal may be a pin that receives a signal for performing a conventional function of the memory device, separate from dumping an error log. The signal used to perform the function can be referred to as a function signal and is different from the trigger signal. The pin allocated to a normal function signal of the memory device can thus also be used to receive a trigger signal to dump the error log of the memory device. This dual use of the pin provides flexibility, because the memory device can be structured without adding pins for the unconventional use.

The memory device may include timing circuitry that can be used to identify a trigger signal as distinct from the function signal. Several mechanisms can be implemented to determine whether a signal received at the pin allocated to the function signal is a trigger signal, based on a comparison of the timing parameters of the received signal with the timing parameters defined for the trigger signal. For example, the trigger signal may undergo multiple toggles within a time corresponding to a specified length of time in which the function signal is specified to be pulled low or specified to be pulled high. In the case in which the pin used to receive the trigger signal is a reset pin of the memory device for receiving a reset signal to identify a reset event of the memory device, the trigger signal can be generated and received at the reset pin such that the trigger signal undergoes multiple toggles with a toggle period of approximately two hundred nanoseconds. Other periods can be used. These timing parameters for the trigger signal are different from those of the functional reset signal normally received at the reset pin.
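One way the timing-based decision could be expressed is sketched below; the observation window, the toggle threshold, and the count_pin_edges() helper are assumptions chosen to match the approximately two hundred nanosecond toggle period discussed above, not a required implementation.

    #include <stdbool.h>
    #include <stdint.h>

    #define WINDOW_NS        2000u /* observation window (assumed) */
    #define TOGGLE_PERIOD_NS  200u /* approximate trigger toggle period */
    #define MIN_TOGGLES (WINDOW_NS / TOGGLE_PERIOD_NS / 2u)

    /* Hypothetical hardware helper: count edges seen on the reset pin
     * over a window given in nanoseconds. */
    extern uint32_t count_pin_edges(uint32_t window_ns);

    /* A plain reset is held steady over the window; a trigger toggles
     * with a period of roughly TOGGLE_PERIOD_NS, producing many edges. */
    bool is_trigger_signal(void)
    {
        return count_pin_edges(WINDOW_NS) >= MIN_TOGGLES;
    }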
The reset signal and the trigger signal, received at different times on the same reset pin, are thus distinct signals: the trigger signal can be structured to undergo multiple toggles within a time corresponding to the specified length of time for which the reset signal would be pulled low or pulled high.

The error log may record a number of error conditions detected by the system interfaced with the memory device, such as, but not limited to, one or more of: a data timeout detected by the system; a data mismatch detected by the system, for example when comparing data sent to the memory device with data received back from the memory device; a fatal error arising while operating with the memory device; an initialization timeout regarding initialization of the memory device by the system; and recognition by the system that the firmware of the memory device is in a stuck state. The system can retrieve the error log from the memory device for failure analysis. The system may perform the failure analysis itself, or it may provide the information of the error log to another device, which may be located remotely from the system and receive the information over a network.

A memory device may include individual memory dies, which may, for example, include a storage region comprising one or more arrays of memory cells implementing one (or more) selected storage technologies. Such a memory die usually contains supporting circuitry for operating the memory array. Other examples, sometimes referred to as "managed memory devices", include assemblies of one or more memory dies associated with controller functionality configured to control the operation of the one or more memory dies. This type of controller functionality can simplify interoperability with external devices such as "hosts". In such managed memory devices, the controller functionality can be implemented on one or more dies that also incorporate a memory array, or on a separate die. In other examples, one or more memory devices may be combined with controller functionality to form a solid state drive (SSD) storage volume.

Embodiments of the present disclosure include examples of managed memory devices implementing NAND flash memory cells, referred to as "managed NAND" devices. These examples, however, do not limit the scope of the present disclosure, which can be implemented in other forms of memory devices and/or other forms of storage technologies.

Both NOR and NAND flash architecture semiconductor memory arrays are accessed through decoders that activate a particular memory cell by selecting the word line coupled to its gate. In a NOR architecture semiconductor memory array, once activated, the selected memory cell places its data value on the bit line, causing different currents to flow according to the programmed state of the particular cell. In a NAND architecture semiconductor memory array, a high bias voltage is applied to the drain-side select gate (SGD) line. The word lines coupled to the gates of the unselected memory cells of each group are driven with a specified pass voltage (e.g., Vpass), so that the unselected memory cells of each group operate as pass transistors (e.g., passing current in a manner unrestricted by their stored data values).
Current then flows from the source line through each serially coupled group to the bit line, limited only by the selected memory cell in each group, thereby placing the currently encoded data value of the selected memory cell on the bit line.

Each flash memory cell in a NOR or NAND architecture semiconductor memory array can be programmed, individually or collectively, to one or several programmed states. For example, a single-level cell (SLC) can represent one of two programmed states (e.g., 1 or 0), thereby representing one data bit. Flash memory cells can also represent more than two programmed states, allowing higher-density memory to be manufactured without increasing the number of memory cells, because each cell can represent more than one binary digit (e.g., more than one bit). Such cells may be referred to as multi-state memory cells, multi-digit cells, or multi-level cells (MLC). In some instances, MLC can refer to a memory cell that can store two data bits per cell (e.g., one of four programmed states), a triple-level cell (TLC) can refer to a memory cell that can store three data bits per cell (e.g., one of eight programmed states), and a quad-level cell (QLC) can store four data bits per cell. MLC is used herein in its broader context to refer to any memory cell that can store more than one bit of data per cell (i.e., that can represent more than two programmed states).

The managed memory device can be configured and operated in accordance with recognized industry standards. For example, a managed NAND device may be (as a non-limiting example) a UFS device or an embedded MMC (eMMCTM) device, or the like. For example, a UFS device can be configured according to a Joint Electron Device Engineering Council (JEDEC) standard (e.g., JEDEC standard JESD223D, titled "JEDEC UFS Flash Storage 3.0", and/or updated or subsequent versions of such a standard). Similarly, an eMMC device can be configured according to JEDEC standard JESD84-A51, titled "JEDEC eMMC standard 5.1", and/or updated or subsequent versions of that standard.

The SSD is particularly useful as a main storage device for a computer, having advantages over conventional hard disk drives with moving parts with respect to, for example, performance, size, weight, ruggedness, operating temperature range, and power consumption. For example, SSDs may have reduced seek time, latency, or other delays associated with disk drives (e.g., electromechanical delays, etc.). SSDs use non-volatile memory cells, such as flash memory cells, to avoid internal battery power requirements, allowing the drive to be more versatile and compact. Managed NAND devices can be used as main or auxiliary memory in various forms of electronic devices, and are commonly used in mobile devices.

Both SSDs and managed memory devices can include a number of memory devices having a number of dies or logical units (e.g., logical unit numbers, or LUNs), and can include one or more processors or other controllers performing the logic functions required to operate the memory devices or interface with external systems. Such SSDs and managed memory devices may include one or more flash memory dies carrying multiple memory arrays and peripheral circuitry. A flash memory array may include multiple blocks of memory cells organized into multiple physical pages.
In some instances, the SSD may also include DRAM or SRAM (or other forms of memory die or other memory structures). Similarly, a managed NAND device can include one or more arrays of volatile and/or non-volatile memory separate from the NAND storage array, either within or separate from the controller. Both SSDs and managed NAND devices can receive commands from the host associated with memory operations, such as read or write operations to transfer data (e.g., user data and associated integrity data, such as error data and address data, etc.) between the memory device and the host, or erase operations to erase data from the memory device.

For example, mobile electronic devices (e.g., smartphones, tablet computers, etc.), electronic devices for automotive applications (e.g., automotive sensors, control units, driver assistance systems, passenger safety or comfort systems, etc.), and Internet-connected appliances or devices (e.g., Internet of Things (IoT) devices, etc.) have varying storage requirements depending on the type of electronic device, the usage environment, performance expectations, and so on.

An electronic device can be broken down into several main components: a processor (e.g., a central processing unit (CPU) or other main processor); memory (e.g., one or more volatile or non-volatile RAM memory devices, such as DRAM, mobile or low-power double data rate synchronous DRAM (DDR SDRAM), etc.); and a storage device (e.g., a non-volatile memory (NVM) device, such as flash memory, ROM, an SSD, an MMC or other memory card structure, or combinations thereof). In some examples, the electronic device may include a user interface (e.g., a display, touch screen, keyboard, one or more buttons, etc.), a graphics processing unit (GPU), power management circuitry, a baseband processor, or one or more transceiver circuits, etc.

Figure 1 illustrates an example of an environment 100 that includes a host device 105 and a memory device 110 configured to communicate over a communication interface. The host device 105 or the memory device 110 may be included in a variety of products 150, such as IoT devices (e.g., a refrigerator or other appliance, sensor, motor or actuator, mobile communication device, automobile, drone, etc.), to support processing, communication, or control of the product 150.

The memory device 110 includes a memory processing device 115 and a memory array 120 including, for example, a number of individual memory dies (e.g., a stack of 3D NAND dies). In 3D architecture semiconductor memory technology, vertical structures are stacked, increasing the number of tiers and physical pages, and thus increasing the density of memory devices (e.g., storage devices). In an example, the memory device 110 may be a discrete memory or storage device component of the host device 105. In other examples, the memory device 110 may be part of an integrated circuit (e.g., a system on a chip (SOC), etc.)
that is stacked with, or otherwise included with, one or more other components of the host device 105.

One or more communication interfaces can be used to transfer data between the memory device 110 and one or more other components of the host device 105, such as a Serial Advanced Technology Attachment (SATA) interface, a Peripheral Component Interconnect Express (PCIe) interface, a Universal Serial Bus (USB) interface, a UFS interface, an eMMCTM interface, or one or more other connectors or interfaces. The host device 105 may include a host system, an electronic device, a processor, a memory card reader, or one or more other electronic devices external to the memory device 110. In some examples, the host device 105 may be a machine having some or all of the components discussed with reference to the machine 500 of FIG. 5.

The memory processing device 115 may receive instructions from the host device 105 and may communicate with the memory array 120 to transfer data to (e.g., write or erase) or from (e.g., read) one or more of the memory cells, planes, sub-blocks, blocks, or pages of the memory array 120. The memory processing device 115 may include circuitry or firmware, including one or more components or integrated circuits. For example, the memory processing device 115 may include one or more memory control units, circuits, or components configured to control access across the memory array 120 and to provide a translation layer between the host device 105 and the memory device 110. The memory processing device 115 may include one or more input/output (I/O) circuits, lines, or interfaces to transfer data to or from the memory array 120. The memory processing device 115 may include a memory manager 125 and a controller 135, such as an array controller.

The memory manager 125 may include, among other things, circuitry or firmware, such as multiple components or integrated circuits associated with various memory management functions. In some embodiments, the functions of the memory manager 125 are implemented by a controller (or processor) executing firmware instructions. For example, in some examples, the memory manager 125 may be implemented at least in part by one or more processors such as may be found in the processing device 615 of FIG. 6, executing instructions held in memory of the processing device or in the data storage device 612. The management tables 130 can similarly be stored on the memory processing device 115, in any of such memory device locations. In other examples, the instructions and/or management tables 130 may be stored in certain blocks of the NAND die stack of the memory array 120 and loaded into the working memory of the memory processing device 115 during operation.

Those skilled in the art will recognize that, in some instances, the components and functions of the memory manager 125 and the array controller 135 may be implemented by any combination of the components (or a subset thereof) described herein, such as the processing device 615 and data storage device 612 of FIG. 6, and may include additional hardware components.

For the purposes of this description, example memory operations and management functions will be described in the context of NAND memory. Those skilled in the art will recognize that other forms of non-volatile memory may have similar memory operation or management functions.
Such NAND management functions include wear leveling (e.g., garbage collection or reclamation), error detection or correction, block retirement, or one or more other memory management functions. The memory manager 125 may parse or format host commands (e.g., commands received from a host) into device commands (e.g., commands associated with operation of the memory array, etc.), or generate device commands for the array controller 135 or one or more other components of the memory device 110 (e.g., to implement various memory management functions).

The memory manager 125 may include a set of management tables 130 configured to maintain various information associated with one or more components of the memory device 110 (e.g., various information associated with the memory array or with one or more memory cells coupled to the memory processing device 115). For example, the management tables 130 may include information about block age, block erase counts, error history, or one or more error counts (e.g., write operation error count, read bit error count, read operation error count, erase error count, etc.) for one or more blocks of memory cells coupled to the memory processing device 115. In some instances, if the number of errors detected for one or more of the error counts is above a threshold, the bit error may be referred to as an uncorrectable bit error. The management tables 130 may maintain, among other things, a count of correctable or uncorrectable bit errors.

The array controller 135 may include, among other things, circuitry or components configured to control memory operations associated with writing data to, reading data from, or erasing one or more memory cells of the memory device 110 coupled to the memory processing device 115. The memory operations may be based on, for example, host commands received from the host device 105, or commands generated internally by the memory manager 125 (e.g., in association with wear leveling, error detection or correction, etc.).

The array controller 135 may include an error correction code (ECC) component 140, which may include, among other things, an ECC engine or other circuitry configured to detect or correct errors associated with writing data to, or reading data from, the one or more memory cells of the memory device 110 coupled to the memory processing device 115. The array controller 135 may include a real-time trigger task component 111, which may include instructions for dumping the error log of the memory device 110 to the memory of the memory device 110 in response to detecting reception of a trigger signal to perform the dump. The memory processing device 115 can be configured to actively detect and recover from error occurrences (e.g., bit errors, operation errors, etc.) associated with various operations or storage of data, while maintaining the integrity of the data transferred between the host device 105 and the memory device 110, or the integrity of stored data (e.g., using redundant RAID storage, etc.), and can remove (e.g., retire) failing memory resources (e.g., memory cells, memory arrays, pages, blocks, etc.) to prevent future errors.
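As a minimal illustration of the error-count bookkeeping kept in the management tables 130 described above, a per-block check against a threshold might be sketched in C as follows; the structure fields and the threshold value are hypothetical, not values taken from any particular device.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical per-block entry modeled on the management tables 130. */
    struct block_stats {
        uint32_t erase_count;
        uint32_t read_bit_errors;
        uint32_t write_op_errors;
    };

    #define UNCORRECTABLE_THRESHOLD 64u  /* assumed, device-specific limit */

    /* True when a block's error count is above the threshold, so its bit
     * errors are treated as uncorrectable (a candidate for retirement). */
    bool block_errors_uncorrectable(const struct block_stats *s)
    {
        return s->read_bit_errors > UNCORRECTABLE_THRESHOLD;
    }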
The memory array 120 may include several memory cells arranged in, for example, several devices, planes, sub-blocks, blocks, or pages. As an example, a 48GB TLC NAND memory device may contain 18,592 bytes (B) of data per page (16,384 + 2208 bytes), 1536 pages per block, 548 blocks per plane, and 4 or more planes per device. As another example, a 32GB MLC memory device (storing two data bits per cell, i.e., 4 programmable states) may contain 18,592 bytes (B) of data per page (16,384 + 2208 bytes), 1024 pages per block, 548 blocks per plane, and 4 planes per device, but with half the required write time and twice the program/erase (P/E) cycles of a corresponding TLC memory device. Other examples may include other numbers or arrangements. In some examples, the memory device, or a portion thereof, may be selectively operated in SLC mode or in a desired MLC mode (e.g., TLC, QLC, etc.).

In operation, data is typically written to or read from the NAND memory device 110 in pages and erased in blocks. However, one or more memory operations (e.g., read, write, erase, etc.) can be performed on larger or smaller groups of memory cells, as desired. The data transfer size of the NAND memory device 110 is typically referred to as a page, and the data transfer size of the host is typically referred to as a sector.

Although a page of data may include several bytes of user data (e.g., a data payload containing a number of sectors of data) and its corresponding metadata, the page size often refers only to the number of bytes used to store the user data. As an example, a page of data with a page size of 4KB may include 4KB of user data (e.g., 8 sectors assuming a sector size of 512B) as well as several bytes (e.g., 32B, 54B, 224B, etc.) of metadata corresponding to the user data, such as integrity data (e.g., error detection or correction code data), address data (e.g., logical address data, etc.), or other metadata associated with the user data.

Different types of memory cells or memory arrays 120 may provide different page sizes, or may require different amounts of metadata associated with them. For example, different memory device types may have different bit error rates, which may lead to different amounts of metadata being needed to ensure the integrity of a page of data (e.g., a memory device with a higher bit error rate may require more bytes of error correction code data than a memory device with a lower bit error rate). As an example, an MLC NAND flash device may have a higher bit error rate than a corresponding SLC NAND flash device. Thus, the MLC device may require more metadata bytes for error data than the corresponding SLC device.
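The page-size arithmetic above can be checked in a few lines of C; the constants mirror the 18,592-byte example (16,384 user bytes plus 2208 metadata bytes), and the names are illustrative only.

    #include <stdio.h>

    #define SECTOR_SIZE 512u
    #define USER_BYTES  16384u  /* user data per page in the example */
    #define META_BYTES  2208u   /* corresponding metadata bytes */

    int main(void)
    {
        unsigned page_total = USER_BYTES + META_BYTES;  /* 18,592 B */
        unsigned sectors    = USER_BYTES / SECTOR_SIZE; /* 32 sectors; a 4KB page would hold 8 */
        printf("page total: %u B, user-data sectors: %u\n", page_total, sectors);
        return 0;
    }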
FIG. 2 illustrates an example schematic diagram of a 3D NAND architecture semiconductor memory array 200 that can be implemented as the memory array 120 of FIG. 1. The 3D NAND architecture semiconductor memory array 200 may include several strings of memory cells (e.g., first to third A0 memory strings 205A0-207A0, first to third An memory strings 205An-207An, first to third B0 memory strings 205B0-207B0, first to third Bn memory strings 205Bn-207Bn, etc.), organized into blocks (e.g., block A 201A, block B 201B, etc.) and sub-blocks (e.g., sub-block A0 201A0, sub-block An 201An, sub-block B0 201B0, sub-block Bn 201Bn, etc.). The memory array 200 represents a portion of a larger number of similar structures as would typically be found in a block, device, or other unit of a memory device.

Each string of memory cells includes several tiers of charge storage transistors (e.g., floating gate transistors, charge-trapping structures, etc.) stacked in the Z direction, source to drain, between a source line (SRC) 235 or a source-side select gate (SGS) (e.g., first to third A0 SGS 231A0-233A0, first to third An SGS 231An-233An, first to third B0 SGS 231B0-233B0, first to third Bn SGS 231Bn-233Bn, etc.) and a drain-side select gate (SGD) (e.g., first to third A0 SGD 226A0-228A0, first to third An SGD 226An-228An, first to third B0 SGD 226B0-228B0, first to third Bn SGD 226Bn-228Bn, etc.). Each string of memory cells in the 3D memory array may be arranged along the X direction as a data line (e.g., bit lines (BL) BL0-BL2 220-222) and along the Y direction as a physical page.

Within a physical page, each tier represents a row of memory cells, and each string of memory cells represents a column. A sub-block may include one or more physical pages. A block may include several sub-blocks (or physical pages) (e.g., 128, 256, 384, etc.). Although illustrated herein as having two blocks, each block having two sub-blocks, each sub-block having a single physical page, each physical page having three strings of memory cells, and each string having 8 tiers of memory cells, in other examples the memory array 200 may include more or fewer blocks, sub-blocks, physical pages, strings of memory cells, memory cells, or tiers. For example, each string of memory cells may include more or fewer tiers (e.g., 16, 32, 64, 128, etc.) as desired, as well as one or more additional tiers of semiconductor material above or below the charge storage transistors (e.g., select gates, data lines, etc.).

Each memory cell in the memory array 200 includes a control gate (CG) coupled to (e.g., electrically connected to or otherwise operably connected to) an access line (e.g., word lines (WL) WL00-WL70 210A-217A, WL01-WL71 210B-217B), which collectively couples the control gates across a specific tier, or a portion of a tier, as desired. A corresponding access line can be used to access or control a specific tier in the 3D memory array 200, and thus a specific memory cell in a string. Various select lines can be used to access groups of select gates. For example, the A0 SGD line SGDA0 225A0 can be used to access the first to third A0 SGD 226A0-228A0, the An SGD line SGDAn 225An can be used to access the first to third An SGD 226An-228An, the B0 SGD line SGDB0 225B0 can be used to access the first to third B0 SGD 226B0-228B0, and the Bn SGD line SGDBn 225Bn can be used to access the first to third Bn SGD 226Bn-228Bn.
The gate select line SGS0 230A can be used to access the first to third A0 SGS 231A0-233A0 and the first to third An SGS 231An-233An, and the gate select line SGS1 230B can be used to access the first to third B0 SGS 231B0-233B0 and the first to third Bn SGS 231Bn-233Bn.

In an example, the memory array 200 may include several tiers of semiconductor material (e.g., polysilicon, etc.) configured to couple the control gates (CGs) of the memory cells, or the select gates (or a portion of the CGs or select gates), of a corresponding tier of the array 200. A combination of a bit line (BL) and the select gates can be used to access, select, or control a specific string of memory cells in the array, and one or more access lines (e.g., word lines) can be used to access, select, or control specific memory cells of one or more tiers in the specific string.

FIG. 3 illustrates an example schematic diagram of a portion of a NAND architecture semiconductor memory array 300 that can be implemented as the memory array 120 of FIG. 1. The portion of the NAND architecture semiconductor memory array 300 may include a plurality of memory cells 302 arranged in a two-dimensional (2D) array of strings (e.g., first to third strings 305-307) and tiers (e.g., illustrated as respective word lines (WL) WL0-WL7 310-317, a drain-side select gate (SGD) line 325, a source-side select gate (SGS) line 330, etc.), and a sense amplifier or device 360. For example, the memory array 300 may illustrate an example schematic diagram of a portion of one physical page of memory cells of the 3D NAND architecture semiconductor memory device illustrated in FIG. 2.

Each string of memory cells is coupled to the source line (SRC) 335 using a respective source-side select gate (SGS) (e.g., first to third SGS 331-333), and to a respective data line (e.g., first to third bit lines (BL) BL0-BL2 320-322) using a respective drain-side select gate (SGD) (e.g., first to third SGD 326-328). Although illustrated in the example of FIG. 3 as having 8 tiers (e.g., using word lines (WL) WL0 310 to WL7 317) and three data lines (BL0-BL2 320-322), other examples may include strings of memory cells having more or fewer tiers or data lines.

In a NAND architecture semiconductor memory array such as the example memory array 300, the state of a selected memory cell 302 can be accessed by sensing a current or voltage change associated with a particular data line containing the selected memory cell. The memory array 300 can be accessed using one or more drivers (e.g., by control circuitry, one or more processors, digital logic, etc.). In an example, depending on the type of operation required to be performed on a specific memory cell or group of memory cells, the one or more drivers can drive particular potentials onto one or more data lines (e.g., bit lines BL0-BL2), access lines (e.g., word lines WL0-WL7), or select gates to activate the specific memory cells or groups of memory cells.

To program or write data to a memory cell, a programming voltage (Vpgm) (e.g., one or more programming pulses, etc.) can be applied to a selected word line (e.g., WL4) and, therefore, to the control gate of each memory cell coupled to the selected word line (e.g., first to third control gates (CG) 341-343 of the memory cells coupled to WL4). For example, the programming pulses can start at or near 15V and, in some instances, can increase in magnitude with each programming pulse application.
While the programming voltage is applied to the selected word line, a potential such as ground (e.g., Vss) can be applied to the data lines (e.g., bit lines) and to the substrate (and thus to the channel between the source and drain) of the memory cells targeted for programming, resulting in a charge transfer (e.g., direct injection or Fowler-Nordheim (FN) tunneling, etc.) from the channel to the floating gates of the targeted memory cells.

In contrast, a pass voltage (Vpass) can be applied to one or more word lines having memory cells that are not targeted for programming, or an inhibit voltage (e.g., Vcc) can be applied to the data lines (e.g., bit lines) having memory cells that are not targeted for programming, such that, for example, charge transfer from the channel to the floating gates of such non-targeted memory cells is inhibited. The pass voltage can be variable, depending, for example, on the proximity of the word line to which it is applied to the word line targeted for programming. The inhibit voltage can include a supply voltage (Vcc), such as a voltage from an external source or power supply (e.g., a battery, an AC-DC converter, etc.), relative to ground (e.g., Vss).

As an example, if a programming voltage (e.g., 15V or more) is applied to a specific word line, such as WL4, a pass voltage of 10V can be applied to one or more other word lines, such as WL3, WL5, etc., to inhibit programming of non-targeted memory cells, or to retain the values stored on such memory cells not targeted for programming. As the distance between an applied programming voltage and the non-targeted memory cells increases, the pass voltage required to refrain from programming the non-targeted memory cells can decrease. For example, with a programming voltage of 15V applied to WL4, a pass voltage of 10V can be applied to WL3 and WL5, a pass voltage of 8V can be applied to WL2 and WL6, a pass voltage of 7V can be applied to WL1 and WL7, and so on. In other examples, the pass voltages, or the number of word lines, etc., can be higher or lower, or more or fewer.

A sense amplifier 360, coupled to one or more of the data lines (e.g., the first, second, or third bit lines (BL0-BL2) 320-322), can sense the voltage or current on a particular data line to detect the state of each memory cell on the corresponding data line.

Between applications of one or more programming pulses (e.g., Vpgm), a verify operation can be performed to determine whether a selected memory cell has reached its intended programmed state. If the selected memory cell has reached its intended programmed state, it can be inhibited from further programming. If the selected memory cell has not reached its intended programmed state, additional programming pulses can be applied. If the selected memory cell has not reached its intended programmed state after a particular number of programming pulses (e.g., a maximum number), the selected memory cell, or the string, block, or page associated with such selected memory cell, can be marked as defective.

To erase a memory cell or a group of memory cells (erasure is typically performed in blocks), an erase voltage (Vers) (e.g., typically Vpgm) can be applied (e.g., using one or more bit lines, select gates, etc.) to the substrate (and thus to the channel between the source and drain) of the memory cells targeted for erasure, while the word lines of the targeted memory cells are kept at a potential such as ground (e.g., Vss), resulting in a charge transfer from the floating gates of the targeted memory cells to the channel (e.g., direct injection or FN tunneling, etc.).
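To make the distance-based pass-voltage scheme above concrete, the following C sketch keys a voltage to how far a word line sits from the word line targeted for programming. The 15V/10V/8V/7V values mirror the example; the function name and the millivolt representation are illustrative assumptions.

    #include <stdint.h>

    /* Select the voltage (in millivolts) for a word line, given the word
     * line targeted for programming (WL4 in the example above). */
    uint32_t word_line_voltage_mv(unsigned selected_wl, unsigned wl)
    {
        unsigned distance = (wl > selected_wl) ? wl - selected_wl : selected_wl - wl;
        switch (distance) {
        case 0:  return 15000; /* programming voltage Vpgm on the selected WL */
        case 1:  return 10000; /* pass voltage for WL3/WL5 in the example */
        case 2:  return 8000;  /* WL2/WL6 */
        default: return 7000;  /* WL1/WL7 and beyond */
        }
    }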
FIG. 4 illustrates an example block diagram of a memory device 400 that can be implemented in the memory device 110 of FIG. 1, including one or more circuits or components for performing memory operations on a memory array 402. The memory device 400 may include a row decoder 412, a column decoder 414, sense amplifiers 420, a page buffer 422, a selector 424, I/O circuitry 426, and a memory control unit 430.

The memory cells 404 of the memory array 402 may be arranged in blocks, such as a first block 402A and a second block 402B. Each block can include sub-blocks. For example, the first block 402A may include a first sub-block 402A0 and a second sub-block 402An, and the second block 402B may include a first sub-block 402B0 and a second sub-block 402Bn. Each sub-block may include a number of physical pages, where each page includes a number of memory cells 404. Although described herein as having two blocks, each block having two sub-blocks, and each sub-block having a number of memory cells 404, in other examples the memory array 402 may include more or fewer blocks, sub-blocks, memory cells, etc. In other examples, the memory cells 404 may be arranged in rows, columns, pages, sub-blocks, blocks, etc., and accessed using, for example, access lines 406, first data lines 410, or one or more select gates, source lines, etc.

The memory control unit 430 may control the memory operations of the memory device 400 according to one or more signals or instructions received on control lines 432, including, for example, one or more clock signals or control signals indicating a desired operation (e.g., write, read, erase, etc.), or address signals (A0-AX) received on address lines 416. One or more devices external to the memory device 400 may control the values of the control signals on the control lines 432 or of the address signals on the address lines 416. Examples of devices external to the memory device 400 may include, but are not limited to, a host, a memory controller, a processor, or one or more circuits or components not illustrated in FIG. 4.

The memory device 400 may use the access lines 406 and the first data lines 410 to transfer data to (e.g., write or erase) or from (e.g., read) one or more of the memory cells 404. The row decoder 412 and the column decoder 414 can receive and decode the address signals (A0-AX) from the address lines 416, determine which of the memory cells 404 are to be accessed, and provide signals to one or more of the access lines 406 (e.g., one or more of a plurality of word lines (WL0-WLm)) or the first data lines 410 (e.g., one or more of a plurality of bit lines (BL0-BLn)), as described above.

The memory device 400 may include sensing circuitry, such as the sense amplifiers 420, configured to determine (e.g., read) the values of data on the memory cells 404 using the first data lines 410, or to determine the values of data to be written to the memory cells 404.
For example, in a selected string of memory cells 404, in response to a read current flowing through the selected string to the data line 410 in the memory array 402, one or more of the sense amplifiers 420 can read the logic level of the selected memory cells 404 in the string.

One or more devices external to the memory device 400 can communicate with the memory device 400 using the I/O lines (DQ0-DQN) 408, the address lines 416 (A0-AX), or the control lines 432. The I/O circuitry 426 can use the I/O lines 408 to transfer data values into or out of the memory device 400, such as into or out of the page buffer 422 or the memory array 402, according to, for example, the control lines 432 and the address lines 416. The page buffer 422 can store data received from one or more devices external to the memory device 400 before the data is programmed into the relevant portion of the memory array 402, or can store data read from the memory array 402 before the data is transmitted to one or more devices external to the memory device 400.

The column decoder 414 can receive address signals (A0-AX) and decode them into one or more column select signals (CSEL1-CSELn). The selector 424 (e.g., a select circuit) can receive the column select signals (CSEL1-CSELn) and select the data in the page buffer 422 representing the data values to be read from, or to be programmed into, the memory cells 404. Selected data can be transferred between the page buffer 422 and the I/O circuitry 426 using second data lines 418.

The memory control unit 430 can receive positive and negative supply signals, such as a supply voltage (Vcc) 434 and a negative supply (Vss) 436 (e.g., ground potential), from an external source or power supply (e.g., an internal or external battery, an AC-DC converter, etc.). In some examples, the memory control unit 430 can include a regulator 428 to internally provide positive or negative supply signals.

Figure 5 illustrates a block diagram of an example machine 500 on which any one or more of the techniques (e.g., methodologies) discussed herein can be performed. In alternative embodiments, the machine 500 can operate as a standalone device or can be connected (e.g., networked) to other machines. In a networked deployment, the machine 500 can operate in the capacity of a server machine, a client machine, or both, in server-client network environments. In an example, the machine 500 can act as a peer machine in a peer-to-peer (P2P) (or other distributed) network environment. The machine 500 can be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile telephone, a web appliance, an IoT device, an automotive system, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is described, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), or other computer cluster configurations. The example machine 500 can be arranged to operate in the environment 100 of FIG. 1. The example machine 500 can include one or more memory devices having structures as discussed with respect to the memory array 200 of FIG. 2, the memory array 300 of FIG. 3, and the memory device 400 of FIG. 4.
As described herein, an example may include, or may operate by, logic, components, devices, packages, or mechanisms. Circuitry is a collection (e.g., a set) of circuits implemented in tangible entities that include hardware (e.g., simple circuits, gates, logic, etc.). Circuitry membership can be flexible over time as underlying hardware changes. Circuitry includes members that can, alone or in combination, perform specific tasks when operating. In an example, the hardware of the circuitry can be immutably designed to carry out a specific operation (e.g., hardwired). In an example, the hardware of the circuitry can include variably connected physical components (e.g., execution units, transistors, simple circuits, etc.), including a computer-readable medium physically modified (e.g., magnetically, electrically, by movable placement of invariant-mass particles, etc.) to encode instructions for the specific operation. In connecting the physical components, the underlying electrical properties of the hardware constituents are changed, for example, from an insulator to a conductor, or vice versa. The instructions enable participating hardware (e.g., execution units or loading mechanisms) to create, via the variable connections, members of the circuitry in hardware to carry out portions of the specific task when in operation. Accordingly, the computer-readable medium is communicatively coupled to the other components of the circuitry when the device is operating. In an example, any of the physical components can be used in more than one member of more than one circuitry. For example, in operation, an execution unit can be used in a first circuit of a first circuitry at one point in time, and be reused by a second circuit in the first circuitry, or by a third circuit in a second circuitry, at a different time.

The machine (e.g., computer system) 500 (e.g., the host device 105, the memory device 110, etc.) may include a hardware processor 502 (e.g., a CPU, GPU, hardware processor core, or any combination thereof, such as the memory processing device 115, etc.), a main memory 504, and a static memory 506, some or all of which can communicate with each other via an interconnect (e.g., bus) 508. The machine 500 may further include a display device 510, an alphanumeric input device 512 (e.g., a keyboard), and a user interface (UI) navigation device 514 (e.g., a mouse). In an example, the display device 510, the input device 512, and the UI navigation device 514 may be a touch screen display. The machine 500 may additionally include a storage device (e.g., drive unit) 521, a signal generation device 518 (e.g., a speaker), a network interface device 520, and one or more sensors 516, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor. The machine 500 may include an output controller 528, such as a serial (e.g., USB), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate with or control one or more peripheral devices (e.g., a printer, card reader, etc.).
The machine 500 may include a machine-readable medium 522 on which is stored one or more sets of data structures or instructions 524 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions 524 may also reside, completely or at least partially, within the main memory 504, within the static memory 506, or within the hardware processor 502 during execution thereof by the machine 500. In an example, one or any combination of the hardware processor 502, the main memory 504, the static memory 506, or the storage device 521 may constitute the machine-readable medium 522. The instructions 524 may include instructions for data management with respect to an error log. Such data management may include dumping the error log to memory in response to receiving a trigger to dump the error log.

Although the machine-readable medium 522 is illustrated as a single medium, the term "machine-readable medium" may include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) configured to store the one or more instructions 524.

The term "machine-readable medium" may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine 500 and that causes the machine 500 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding, or carrying data structures used by or associated with such instructions. Non-limiting examples of machine-readable media may include solid-state memories and optical and magnetic media. In an example, a massed machine-readable medium includes a machine-readable medium with a plurality of particles having invariant (e.g., rest) mass. Accordingly, massed machine-readable media are not transitory propagating signals. Specific examples of massed machine-readable media may include non-volatile memory, such as semiconductor memory devices (e.g., EPROM, EEPROM) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and compact disc read-only memory (CD-ROM) and digital versatile disc read-only memory (DVD-ROM) discs.

The instructions 524 (e.g., software, programs, an operating system (OS), etc.) or other data stored on the storage device 521 can be accessed by the main memory 504 for use by the processor 502. The main memory 504 (e.g., DRAM) is typically fast but volatile, and is thus a different type of storage than the storage device 521 (e.g., an SSD), which is suitable for long-term storage, including storage while in an "off" state. The instructions 524 or data in use by a user or the machine 500 are typically loaded into the main memory 504 for use by the processor 502. When the main memory 504 is full, virtual space from the storage device 521 can be allocated to supplement the main memory 504; however, because the storage device 521 is typically slower than the main memory 504, and write speeds are typically at most half of read speeds, the use of virtual memory can greatly reduce the user experience due to storage device latency (in contrast to the main memory 504, e.g., DRAM).
In addition, the use of the storage device 521 for virtual memory can greatly reduce the usable lifespan of the storage device 521.

In contrast to virtual memory, virtual memory compression (e.g., the kernel feature "ZRAM") uses part of the memory as compressed block storage to avoid paging to the storage device 521. Paging takes place in the compressed block until it is necessary to write such data to the storage device 521. Virtual memory compression increases the usable size of the main memory 504 while reducing wear on the storage device 521.

Storage devices optimized for mobile electronic devices, or mobile storage, traditionally include MMC solid-state storage devices (e.g., micro Secure Digital (microSDTM) cards, etc.). MMC devices include a number of parallel interfaces (e.g., an 8-bit parallel interface) with a host device, and are often removable and separate components from the host device. In contrast, eMMCTM devices are attached to a circuit board and considered a component of the host device, with read speeds that rival SATA-based SSD devices. However, demand for mobile device performance continues to increase, in order to fully enable virtual or augmented reality devices, utilize increasing network speeds, and so on. In response to this demand, storage devices have shifted from parallel to serial communication interfaces. UFS devices, including controllers and firmware, communicate with a host device using a low-voltage differential signaling (LVDS) serial interface with dedicated read/write paths, further advancing greater read/write speeds.

The instructions 524 may further be transmitted or received over a communications network 526 using a transmission medium via the network interface device 520, utilizing any one of a number of transfer protocols (e.g., frame relay, Internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communications networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), plain old telephone (POTS) networks, and wireless data networks (e.g., the Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards, the IEEE 802.16 family of standards, the IEEE 802.15.4 family of standards), peer-to-peer (P2P) networks, and others. In an example, the network interface device 520 may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network 526. In an example, the network interface device 520 may include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques.
The term "transmission medium" should be considered to include any tangible medium capable of carrying instructions for execution by the machine 500, and includes tools that propagate digital or analog communication signals or other tangible media to facilitate communication of such software.6 is a block diagram of an embodiment of an example system 600 including a host 605 operating with a memory device 610, where the host 605 can detect one or more error conditions associated with the memory device 610, and in response to detecting one or more error conditions For an error condition, a trigger signal is transmitted to the pin of the memory device 610 to trigger the dumping of the error log 613 in the memory device 610. The error log 613 being actively updated may not be maintained in the non-volatile memory, but may be uploaded to the non-volatile memory component on a scheduled basis or on the basis of satisfying one or more criteria for such transfers. The example system 600 may be implemented with respect to the environment 100 of FIG. 1. The example system 600 may be implemented with respect to a memory device 610 having one or more individual memory device components, which have been discussed with respect to the memory array 200 of FIG. 2, the memory array 300 of FIG. 3, and the memory device 400 of FIG. Structure.In this example embodiment, the host 605 is coupled to the memory device 610 through a number of communication lines 620-1...620-I...620-K...620-M...620-N leading to the memory device 610. The communication line may be in the form of a directional line from the host 605 to the memory device 610, a directional line from the memory device 610 to the host 605, or a bidirectional line between the host 605 and the memory device 610. The number of communication lines and the form of communication lines may depend on the structure of the memory device 610. The memory device 610 may be structured according to standard specifications for its application.The host 605 can use the communication lines 620-1...620-I...620-K...620-M...620-N to interact with the memory device 610 to store user data to the memory device 610 without retrieving user data from the memory device 610. The communication line 620-1...620-I...620-K...620-M...620-N can be used to perform maintenance data, command-to-command response, signaling data, and other similar signals between the host 605 and the memory device 610 exchange. The communication lines 620-1...620-I...620-K...620-M...620-N coupling the host 605 and the memory device 610 can be implemented in several different ways. For example, the communication lines 620-1...620-I...620-K...620-M...620-N can be implemented according to a standard interface protocol related to the memory device type corresponding to the memory device 610. According to the function signal that the memory device 610 is expected to receive at the pins 609-1...609-I...609-K...609-M...609-N, the communication line 620-1...620-I...620-K can be connected …620-M…620-N are allocated to the pins 609-1…609-I…609-K…609-M…609-N of the memory device 610. The memory device 610 may have other pins for input and output signals of an external device other than the host 605.The memory device 610 may include a processing device 615 that manages the operation of the memory device 610. The processing device 615 may include or be structured as one or more processors such as, but not limited to, a CPU. The processing device 615 can be structured as one or more memory controllers. 
The processing device 615 may store instructions for operating on the memory device 610 as a data storage device, storing user data to the data storage device 612 of the memory device 610 and retrieving user data from the data storage device 612. The instructions may be stored in a management memory 616 of the processing device 615, or stored in a component of the memory device 610 external to the processing device 615. The management memory 616 of the processing device 615 may include code executable by the processing device 615 to manage at least the data storage device 612. The management memory 616 may be structured as firmware containing instructions. Alternatively, firmware 619 may reside in a non-volatile memory separate from the processing device 615, holding instructions executable by the processing device 615. The firmware 619 may include code with instructions executable by the processing device 615 to operate on the data storage device 612. The data storage device 612 may include one or more individual memory components. The one or more individual memory components may be implemented as, but not limited to, individual NAND memory devices. The one or more individual memory components of the data storage device 612 can be realized in a number of formats, including, but not limited to, a number of memory dies. The memory device 610 may be structured as, but not limited to, an SSD, a UFS device, or an eMMC device. For example, the memory device 610 may be structured as a mobile storage device. The memory device 610 can be structured as a managed NAND system.

In the example system 600, the processing device 615 is configured (e.g., via a hardware and/or software implementation) to perform operations, according to the methods described herein, including the methods associated with FIGS. 7-12, to dump the error log 613 associated with one or more error conditions to memory 614 of the memory device 610 in response to a determination of the occurrence, at the pin 609-K of the memory device 610, of the trigger signal to dump the error log. The memory 614 may be part of the data storage device 612. The pin 609-K is one of the pins 609-1...609-I...609-K...609-M...609-N of the memory device 610 and is coupled to the host 605 via the communication line 620-K, which is one of the communication lines 620-1...620-I...620-K...620-M...620-N.

The dumping of the error log 613 can be handled by a component for a real-time trigger task 611. The real-time trigger can be signaled to the real-time trigger task 611 through a hardware interrupt handled by an interrupt service routine (ISR). The ISR for the real-time trigger can call or restart the real-time trigger task 611, which is responsible for dumping the system state of the memory device 610. The real-time trigger task 611 can be independent of the task status of the firmware that manages the memory device 610. Although the error log 613 is shown in association with the real-time trigger task 611, the error log 613 may be written to by other components of the memory device 610 immediately after an error associated with the operation of such components is determined. The instructions for handling the determination of the trigger signal and the dumping of the error log 613 may be stored in dedicated instructions 618, separate from the management memory 616 and separate from the firmware 619 that handles access of the data storage device 612.
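The interrupt hand-off just described might look roughly like the following C sketch. The interrupt routine does no heavy work itself; it only wakes the real-time trigger task. The task_wake() primitive, the edge threshold, and the names are hypothetical stand-ins for whatever RTOS or bare-metal facilities the device firmware actually provides.

    #include <stdint.h>

    extern void task_wake(void (*task)(void));  /* assumed scheduler primitive */
    extern void realtime_trigger_task(void);    /* dumps the error log 613 */

    #define TRIGGER_EDGE_THRESHOLD 8u  /* assumed count identifying a trigger */

    static volatile uint32_t toggle_edges; /* edges seen by timing circuitry 617 */

    /* ISR attached to the shared reset pin (609-K). */
    void reset_pin_isr(void)
    {
        toggle_edges++;
        if (toggle_edges > TRIGGER_EDGE_THRESHOLD) {
            toggle_edges = 0;
            task_wake(realtime_trigger_task); /* bypasses the main firmware */
        }
    }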
The dump-related instructions may be executed by the processing device 615, for example by one of the processors making up the processing device 615. The real-time trigger task 611 can operate on results from the timing circuitry 617 of the memory device 610, where the signal received at the pin 609-K is provided to the timing circuitry 617. In other embodiments, the occurrence of the trigger signal may be determined based on another parameter, such as, but not limited to, signal amplitude, with the timing circuitry 617 replaced or augmented by amplitude comparison circuitry.

In this example embodiment, the pin 609-K is a pin assigned to receive a signal for performing a function of the memory device 610, where the signal for the function is different from the trigger signal. The timing circuitry 617 can be used to determine timing parameters of the signal received at the pin 609-K. For example, the timing circuitry 617 can be implemented to compare the timing parameters of the signal received at the pin 609-K with the timing parameters of the function signal assigned to the pin 609-K. The timing circuitry 617 can be implemented to compare the timing parameters of the signal received at the pin 609-K with the timing parameters defined for the trigger signal. The timing circuitry 617 can be implemented to compare the signal received at the pin 609-K against the format of the function signal, or the format of the trigger signal, as a reference. In various embodiments, the trigger signal may be structured to undergo multiple toggles within a time corresponding to the specified length of time for which the function signal assigned to the pin 609-K is specified to be pulled low or pulled high. For example, the pin 609-K may be a reset pin of the memory device 610 for receiving a reset signal identifying a reset event of the memory device 610. In that case, since the trigger signal is different from the reset signal, the trigger signal can be structured to undergo multiple toggles with a toggle period of two hundred nanoseconds, which can be less than the pulse width for which the reset signal is pulled low or pulled high. Other toggle periods can be used.

In response to determining that the signal received at the pin 609-K is a trigger signal, the processing device 615 can execute instructions in the dedicated instructions 618 to dump the error log 613 associated with one or more error conditions to the memory 614. The operations in the dedicated instructions 618 may include operations in which the memory device 610 completes tasks in progress and saves cached host data. The dedicated instructions 618 reside in a dedicated portion of the memory device 610, where the dedicated portion is separate from the firmware that controls the data management of the memory device 610 for data storage. The dedicated portion of the memory device may be a portion of SRAM, ROM, or a non-volatile portion of the data storage device 612. The error log dumped to the memory 614 may contain hardware information and firmware information. The memory device 610 can transmit the contents of the error log 613 to the host 605 for failure analysis.

The behavior of the memory device 610 after receiving the trigger signal (that is, the notification of an error event) may include several actions. The error event signal can be a toggling signal indicating a hardware error event.
The memory device 610 can avoid relying on the main firmware (the firmware 619 or the management memory 616, depending on the implementation) that manages the memory device 610 to respond to the hardware error event toggle, because the main firmware may be stuck at the time. In UFS applications, the hardware error event toggle can be generated with timing parameters that distinguish it from the UFS RST_n reset signal. The hardware of the memory device 610 may be implemented so that it responds immediately to the error event toggle with an interrupt handling flow that wakes up (or jumps to) special code, such as the dedicated instructions 618, dedicated to handling the error dump while bypassing the main firmware. Dedicated error dump firmware (e.g., the dedicated instructions 618) can be implemented in a small size, which allows it to permanently reside in SRAM or ROM after bootup. The dedicated error dump firmware can dump predefined application-specific integrated circuit (ASIC) register (REG) addresses and selected SRAM areas containing the error log into an SLC block of the memory device 610. The operation of the error dump firmware can be effectively invisible to the main firmware. After completing the work related to dumping the error log, the memory device 610 can either return control of the processing device 615 to the main firmware or directly initiate a reset of the memory device 610.
The host 605 may generate the trigger received at pin 609-K in response to detecting one or more error conditions associated with the memory device 610. In this example embodiment, the host 605 includes a host processor 604 that executes instructions stored in the host memory 606. The host processor 604 may be implemented as one or more processors. The error conditions associated with the memory device 610 may include data timeout, data mismatch, fatal error, initialization timeout, and identification of stuck system firmware. For example, when a command from the host 605 is sent to the memory device 610, a response from the memory device 610 is expected within a specified amount of time; if the response is not received within that time, a data timeout may occur. Once an error condition is detected, the host processor 604 can execute the instructions stored in the host memory 606 to generate a trigger signal.
The trigger signal can be generated with specified timing parameters, which can be set relative to the pin 609-K of the memory device 610 to which the trigger signal is sent. The timing parameters may be configured to distinguish the trigger signal from the functional signal assigned to pin 609-K of the memory device 610. The host 605 may generate the trigger signal to undergo multiple toggles within a time corresponding to a specified length of time for which the assigned functional signal is specified to be pulled low or pulled high at the pin 609-K. For example, the pin 609-K may be a reset pin of the memory device 610 that receives a reset signal identifying a reset event of the memory device 610. Since the trigger signal must differ from the reset signal, the trigger signal can be structured to undergo multiple toggles with a toggle period of about two hundred nanoseconds, which is less than the pulse width for which the reset signal is pulled low or pulled high. Other toggle periods can be used.
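The dedicated dump code can be sketched as follows, assuming hypothetical register addresses and an SLC-write helper, none of which are defined here; the sketch only shows the shape of a small routine that could reside in SRAM or ROM.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical addresses and helpers; not taken from this description. */
#define ASIC_REG_BASE   0x40000000u
#define ASIC_REG_COUNT  64u
extern uint8_t error_log_sram[];         /* selected SRAM area (error log 613) */
extern size_t  error_log_len;
extern void slc_block_write(const void *src, size_t len);  /* NAND SLC write */

/* Dedicated error dump routine (in the spirit of dedicated instructions
 * 618): small, resident after bootup, independent of the main firmware. */
void dedicated_error_dump(void)
{
    /* Snapshot the predefined ASIC registers. */
    for (uint32_t i = 0; i < ASIC_REG_COUNT; i++) {
        uint32_t reg = *(volatile uint32_t *)(ASIC_REG_BASE + 4u * i);
        slc_block_write(&reg, sizeof reg);
    }
    /* Dump the selected SRAM area containing the error log. */
    slc_block_write(error_log_sram, error_log_len);
}
```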
Since the trigger signal is a designated signal, its format can be saved, and the trigger signal can then be generated by retrieving it from its saved location. For example, the host 605 may include circuitry for generating a trigger signal with specified timing parameters, which outputs the trigger signal when activated by an enable or on signal from the host processor 604.
The host 605 sends a trigger signal to the memory device 610 to trigger the dumping of the error log 613 to the memory 614. This trigger signal is sent in response to the host 605 determining that one or more error conditions associated with the memory device 610 have occurred. The decision to transmit the trigger signal to the memory device 610 may be based on a comparison of the number or type of error conditions with one or more thresholds of allowable error conditions. Triggering the dump of the error log 613 provides a mechanism for saving the error log 613. The error log 613 may be transmitted from the memory device 610 to the host 605 for error analysis. The host 605 can perform the error analysis. The host 605 may also send part of the error log 613, and information generated from the error log 613, to another system outside the host 605 and outside the memory device 610 for failure analysis. The transmission to this other system can be via a network or a combination of networks.
FIG. 7 illustrates an arrangement 700 of several signals between the host 705 and the memory device 710 for the operation of these devices. The host 705 and the memory device 710 can be implemented in a manner similar to the host 605 and the memory device 610 of FIG. 6, respectively. The signals shown are a reset signal (RST), a reference clock (REF_CLK), a data-in signal (DIN_t/c), and a data-out signal (DOUT_t/c), although other signals are transmitted between the host 705 and the memory device 710 during operation of the system that includes these devices. The RST signal can be an active-low signal, designated RST_n, which is asserted in its low state. The DIN and DOUT signals can be true/complement signals, meaning they are differential signals. Due to user board layout restrictions, it may be difficult to define new hardware pins dedicated to real-time error notification. Real-time error notification can therefore share an existing pin of the memory device 710 with the function assigned to that pin. In the embodiment of FIG. 7, the RST signal may share an existing pin of the memory device 710 with the trigger signal for real-time error notification.
FIG. 8 illustrates the timing 840 of the reset signal in the arrangement of FIG. 7. In this example, the memory device 710 of FIG. 7 may be a UFS device, and the reset signal is the UFS reset signal RST_n with the timing definition shown in FIG. 8. UFS devices are provided as a non-limiting example, because other types of memory devices can be used in the manner taught herein. The label tRSTW from the JEDEC standard regarding RST_n is the time for the reset pulse width, and the label tRSTH is the time for which the reset pulse is held high. In the UFS specification, a valid RST_n assertion keeps the reset signal pulled low for more than 1 μs and then keeps it pulled high for more than 1 μs; any toggle activity shorter than 1 μs is ignored.
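Given these reset-timing rules, the host-side generation of the trigger can be sketched in C, using toggles far shorter than a legitimate reset pulse. The GPIO and delay helpers are assumed, not specified by this description.

```c
#include <stdint.h>

/* Hypothetical host-side helpers; this description does not define them. */
extern void gpio_set_rst_n(int level);   /* drive the device's RST_n pin */
extern void delay_ns(uint32_t ns);

/* Emit the error-notification trigger on RST_n: several fast toggles of
 * roughly 200 ns period, far shorter than the >1 us pulses of a real
 * reset, so the device's reset logic ignores them. */
void host_send_error_trigger(void)
{
    for (int i = 0; i < 6; i++) {        /* more than 5 toggles */
        gpio_set_rst_n(0);
        delay_ns(100);                   /* half of the ~200 ns period */
        gpio_set_rst_n(1);
        delay_ns(100);
    }
}
```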
With respect to the reset pin of the memory device 710, a trigger signal for real-time error notification can thus be generated as a higher-frequency toggling signal on the RST_n pin of the memory device 710 to indicate the occurrence of an error in a timely manner.
FIG. 9 shows a table 943 of reset timing parameters for the reset signal of FIG. 8. tRSTW has a specified minimum value of 1 μs but no specified maximum value. tRSTH has a specified minimum value of 1 μs and no specified maximum value. The reset timing parameters also include a filter parameter for RST_n, tRSTF, which is the duration below which the signal is ignored or filtered. In table 943, tRSTF has a minimum value of 100 ns, which means that high or low pulses shorter than 100 nanoseconds will be ignored or filtered out; there is no specified maximum value.
FIG. 10 illustrates an embodiment of an example hardware reset signal toggle. In this embodiment, the hardware reset signal toggle is tailored to the UFS device associated with FIGS. 8 and 9. When a UFS error (such as link loss, a command timeout, or a sleep-exit error) is detected, the host 705 of FIG. 7, arranged as a UFS host, can pull the UFS RST_n pin of the device of FIG. 7, arranged as a UFS device, high and low several times to notify the UFS device 710 of the occurrence of an error in real time. In response to this notification, the memory device 710 may try to wind up its work, save the cached host data, and dump debugging information into the memory of the memory device 710, which may be, but is not limited to, NAND flash. Timing 1040 is the UFS RST_n AC timing, as shown in FIG. 8. Timing 1050 is an error event trigger that toggles relative to timing 1040. Within tRSTW, timing 1050 has two toggles from the pulled-low level of the signal. Within tRSTH, timing 1050 has two toggles from the high level associated with the pull-up following the pull-down of the RST_n signal. In various embodiments, the pulling up/down of RST_n may be performed more than 5 times, where each toggle may have a period of approximately 200 ns. This new error toggle method can be implemented without affecting the traditional specification definition of UFS RST_n behavior. The method can be applied to devices other than UFS devices and to architectures similar to or different from the architecture shown in FIG. 6.
The error log dump of the memory device 710 of FIG. 7 may be synchronized with the data message (DMSG) log of the host 705. Consider the memory device 710 as a UFS device and the host 705 as a UFS host. A failing unit often records several error log entries, which makes it difficult during debugging to identify which entry is associated with the specific host failure that prompted the trigger signal. Conventionally, the device error dump of the memory device 710 cannot be easily synchronized with the host DMSG log, even though the host DMSG log contains real-time stamps. This problem can be solved by the UFS host 705 sending a real-time clock (RTC) update to the UFS device 710. After each startup, the UFS host 705 can send an RTC update to the UFS device 710 via the UFS device descriptor. The UFS device 710 can update its internal RTC timer and track error log activity. Whenever the UFS device 710 dumps its error log, it can dump every error log entry with an internal RTC stamp.
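The RTC stamping can be illustrated with a small C sketch; the entry layout and helper names are assumptions, since no particular format is defined here.

```c
#include <stdint.h>

/* Illustrative error-log entry layout; this description defines none. */
enum err_code {
    ERR_DATA_TIMEOUT, ERR_DATA_MISMATCH, ERR_FATAL,
    ERR_INIT_TIMEOUT, ERR_FW_STUCK
};

struct error_log_entry {
    uint64_t      rtc_stamp;   /* internal RTC value, seeded by host updates */
    enum err_code code;
    uint32_t      detail;      /* e.g., offending command or register value */
};

static uint64_t g_rtc;         /* internal RTC timer */

/* The host sends an RTC update after each boot (via the device
 * descriptor in the UFS case); the device seeds its timer from it. */
void on_host_rtc_update(uint64_t host_rtc) { g_rtc = host_rtc; }

/* Stamp each entry so a later dump can be lined up with the host's
 * DMSG log during failure analysis. */
void log_error(struct error_log_entry *e, enum err_code c, uint32_t d)
{
    e->rtc_stamp = g_rtc;      /* plus elapsed ticks, in a real device */
    e->code = c;
    e->detail = d;
}
```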
The error log content, including the RTC stamps, can be sent to the host 705 for failure analysis, and the RTC stamp of each error log entry can be used to synchronize with the DMSG log of the host 705. This synchronization method can be applied to devices other than UFS devices and to architectures similar to or different from the architecture shown in FIG. 6.
The toggle technique as taught herein can provide several enhancements to error-notification activities compared to conventional methods. The technique enables failure analysis under any stuck or error condition. It allows dumping of all device system information, including both hardware information and firmware information. It provides a non-invasive failure analysis method in which no retesting is required. Triggering the error log dump in this way does not require the device to be de-soldered for failure analysis. In addition, the technique is independent of the device protocol specification, such as eMMC or UFS, because the trigger signal can be defined in relation to the device protocol specification (as shown above for UFS devices).
FIG. 11 is a flowchart of features of an embodiment of an example method 1100 of saving an error log of a memory device. The example method 1100 may be implemented with respect to the environment 100 of FIG. 1, the example system 600 of FIG. 6, and the example arrangement 700 of FIG. 7. The example method 1100 may be implemented with respect to one or more individual memory devices having structures as discussed with respect to the memory array 200 of FIG. 2, the memory array 300 of FIG. 3, and the memory device 400 of FIG. 4.
At 1110, a signal is received at a pin of the memory device. At 1120, based on the timing parameters of the signal, a determination is made whether the signal is a trigger signal received on the pin of the memory device. At 1130, in response to the determination that the signal is a trigger signal, the error log associated with one or more error conditions is dumped to the memory of the memory device. The memory to which the error log is dumped can be a non-volatile memory.
Method 1100, or variants of methods similar to method 1100, may include several different embodiments that may be combined depending on the application of such methods and/or the architecture of the system in which such methods are implemented. Such methods may include determining whether the received signal is a trigger signal by determining whether the received signal has undergone multiple toggles within a time period corresponding to a specified length of time for which a non-error signal is pulled low or pulled high. The non-error signal can be a non-error signal assigned to the pin. For example, a memory device may include a number of pins that interface with an external entity such as a host, where the pins are allocated for specific tasks or functions of the system according to standards for operating the system. The trigger signal can be applied to one of the specifically assigned pins as an additional task associated with that pin. This additional task can be identified based on the difference between the timing parameters of the specific task or function of the system and the timing parameters of the trigger signal.
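Putting the pieces of method 1100 together, a minimal C sketch might look as follows, reusing the hypothetical helpers from the earlier sketches; none of these names come from this description.

```c
#include <stdint.h>

enum pin_event { EVT_NONE, EVT_RESET, EVT_ERROR_TRIGGER };
extern enum pin_event classify_pin_activity(const uint32_t *pulse_ns, int n);
extern void dedicated_error_dump(void);

void method_1100_step(const uint32_t *pulse_ns, int n)
{
    /* 1110: a signal was received at the pin; its pulse widths have
     * been measured by the timing circuitry. */
    /* 1120: timing-based determination of whether it is the trigger. */
    if (classify_pin_activity(pulse_ns, n) == EVT_ERROR_TRIGGER)
        dedicated_error_dump();   /* 1130: dump error log to NV memory */
}
```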
The method 1100, or a variant of a method similar to the method 1100, may include that the pin is a reset pin that receives a reset signal to identify a reset event of the system, where the trigger signal is different from the reset signal. In addition, determining whether the signal is a trigger signal and dumping the error log may be performed by a processor of the system executing instructions, wherein the instructions are stored in a dedicated part of the system. The dedicated part may be arranged to be separate from the firmware that controls data management for data stored in the system.
In various embodiments, the memory device may include: timing circuitry that determines the occurrence of a trigger signal received on a pin of the memory device; and a processor configured to execute instructions stored on one or more components of the memory device, the instructions, when executed by the processor, causing the memory device to perform operations. The operations may include dumping the error log associated with one or more error conditions to the memory of the memory device in response to the determination of the occurrence of the trigger signal. The memory to which the error log is dumped can be a non-volatile memory. The error log can be located in one or more components of the memory device. The error log may be placed in a memory of the memory device arranged to store user data.
Variations of such memory devices and their features as taught herein may include several different embodiments and features that may be combined depending on the application of such memory devices and/or the architecture in which such memory devices are implemented. Features of this type of memory device may include that the pin is a pin that receives a signal for performing a function of the memory device, where that signal is different from the trigger signal. The trigger signal may undergo multiple toggles within a time corresponding to a specified length of time for which the signal is specified to be pulled low or pulled high. Alternatively, the trigger signal may be in the activated state during a time in which the signal is not expected to be activated. The pin may be a reset pin that receives a reset signal to identify a reset event of the system, where the trigger signal is different from the reset signal. At the reset pin, the trigger signal can undergo multiple toggles with a toggle period of approximately two hundred nanoseconds. Other toggle periods can be used.
Variations of this type of memory device and associated features as taught herein may include operations of the memory device that include completing an ongoing task and saving cached host data in response to the determination of the occurrence of a trigger signal. In several examples, the host may send a trigger signal to the memory device so that the error log in the memory device can be dumped to the memory of the memory device before a power-off event completes, where the power-off event may not be a user-initiated power-off.
Variations of such memory devices and associated features as taught herein may include that the instructions associated with dumping the error log of the memory device to the memory of the memory device are stored in a dedicated portion of the memory device, wherein the dedicated portion may be separate from the firmware that controls data management of the memory device used for data storage. The dedicated portion of the memory device may be part of SRAM or ROM. The error log dumped to the memory of the memory device may contain hardware information and firmware information about the memory device or components of the memory device. The error log may include, but is not limited to, information about one or more of data timeout, data mismatch, fatal error, initialization timeout, and identification of stuck system firmware.
Variations of this type of memory device and associated features as taught herein may include operations of the memory device that include transmission of the error log dumped to the memory of the memory device from that memory to the host. This error log can be used for failure analysis of the memory device. The memory device may be structured to include components that perform any function associated with the method 1100 of saving an error log of the memory device, or associated with a method similar to the method 1100.
FIG. 12 is a flowchart of features of an embodiment of an example method 1200 of saving an error log in a memory device through a system that interfaces with the memory device. The example method 1200 may be implemented with respect to the environment 100 of FIG. 1, the example system 600 of FIG. 6, and the example arrangement 700 of FIG. 7. At 1210, the system detects one or more error conditions associated with the memory device.
At 1220, a trigger signal is generated with timing parameters defined for the trigger signal. The generation may be in response to the detection of the one or more error conditions. The basic structure of the trigger signal can be generated when the system is configured and stored in a component of the system; then, when one or more error conditions associated with a given memory device are detected, the trigger signal can be generated by accessing the stored structure and preparing the trigger signal for transmission to the given memory device.
At 1230, in response to the detection of the one or more error conditions, the trigger signal is transmitted to a pin of the memory device to trigger a dump of the error log in the memory device. The transmission of the trigger signal, or its generation for transmission, in response to the detection of one or more error conditions may be tied to a determination that the number of the one or more error conditions is greater than or equal to a threshold for the detection of the one or more error conditions. The setting of this threshold may take into account the determined time between detections of error conditions. Error conditions may include, but are not limited to, data timeouts, data mismatches, fatal errors, initialization timeouts, occurrences of stuck system firmware, and other error events associated with memory devices. The pin can be one allocated to a function of the memory device other than triggering the dump.
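The threshold-based decision of blocks 1210-1230 can be sketched on the host side as follows; the threshold value and the helper names are illustrative assumptions only, reusing the trigger-generation sketch given earlier.

```c
/* Illustrative host-side policy for method 1200; the threshold value
 * and helper names are assumptions, not taken from this description. */
#define ERROR_THRESHOLD 3

extern void host_send_error_trigger(void);  /* see the earlier sketch */

static int g_error_count;

/* 1210: called whenever an error condition (data timeout, data
 * mismatch, fatal error, initialization timeout, stuck firmware) is
 * detected for the memory device. */
void on_device_error_detected(void)
{
    g_error_count++;
    /* 1220/1230: once the count reaches the threshold, generate and
     * transmit the trigger signal to the device's shared pin. */
    if (g_error_count >= ERROR_THRESHOLD) {
        host_send_error_trigger();
        g_error_count = 0;
    }
}
```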
The pin may be a reset pin of the memory device, wherein the reset pin is configured to receive a reset signal from the system to identify a reset event of the memory device, and wherein the reset signal is different from the trigger signal.
Method 1200, or variants of methods similar to method 1200, may include several different embodiments that may be combined depending on the application of such methods and/or the architecture of the system in which such methods are implemented. Features of this type of method may include generating a trigger signal to be transmitted to a pin of the memory device, where the pin is assigned to receive a signal for performing a function of the memory device, and where the functional signal is different from the trigger signal. The trigger signal may be structured to undergo multiple toggles within a time corresponding to a specified length of time for which the functional signal is specified to be pulled low or pulled high. Alternatively, the trigger signal may be generated in the activated state during a time in which the functional signal is not expected to be activated. Such methods may include the trigger signal being structured to undergo multiple toggles within a time corresponding to a specified length of time for which the reset signal is pulled low or pulled high.
The method 1200 as taught herein, or a variant of a method similar to the method 1200, may include generating and transmitting a trigger signal by a system interfacing with the memory device to dump the error log of the memory device to the memory, wherein the error log may include information on several error events that have occurred in the memory device. The error log may contain one or more of data timeout, data mismatch, fatal error, initialization timeout, identification of stuck firmware, and other error information. The method 1200 as taught herein, or a variant of a method similar to the method 1200, may include receiving from the memory device the error log that was dumped to the memory of the memory device. Such methods may include performing failure analysis, using the received error log, by the system interfacing with the memory device. Alternatively, the error log may be transmitted from the system to another system for failure analysis, where the other system may be remote from the system interfacing with the memory device and the communication is carried out over a communication network.
In various embodiments, a system interfacing with a memory device may include a processor configured to execute instructions stored on one or more components in the system, the instructions, when executed by the processor, causing the system to perform operations. The operations may include: detecting one or more error conditions associated with the memory device; generating a trigger signal with specified timing parameters; and, in response to the detection of the one or more error conditions, transmitting the trigger signal to a pin of the memory device to trigger the dump of the error log in the memory device, the pin being allocated to a function of the memory device other than triggering the dump.
Variations of such systems interfacing with memory devices, and their features as taught herein, may include several different embodiments and features that may be combined depending on the application of such systems and/or the architecture in which such systems are implemented.
Features of this type of system may include a system arranged to transmit a trigger signal to a pin, wherein the pin is a pin of the memory device that receives a signal for performing a function of the memory device, and wherein the signal for performing the function is different from the trigger signal. The signal for performing the function of the memory device may be regarded as the functional signal for the memory device. The system may generate the trigger signal to undergo multiple toggles within a time corresponding to a specified length of time for which the functional signal is specified to be pulled low or pulled high. Alternatively, the trigger signal may be in the activated state during a time period in which the functional signal is not expected to be activated. The pin may be a reset pin of the memory device, wherein the reset pin is configured to receive a reset signal from the system to identify a reset event of the memory device, and wherein the reset signal is different from the trigger signal. The system can structure the trigger signal to undergo multiple toggles within a time corresponding to a specified length of time for which the reset signal is pulled low or pulled high. The system can structure the trigger signal at the reset pin to undergo multiple toggles with a toggle period of approximately two hundred nanoseconds. Other toggle periods can be used.
Variations of such systems interfacing with memory devices, and their features as taught herein, may include an error log with one or more of data timeouts, data mismatches, fatal errors, initialization timeouts, and identification of stuck firmware. The system operable to interface with the memory device may have an executable operation of receiving, from the memory of the memory device, the error log dumped to the memory of the memory device. The system interfacing with the memory device may be structured to include components that perform any function associated with the method 1200 of saving error logs in the memory device through a system operable to interface with the memory device, or associated with a method similar to the method 1200.
The following are example embodiments of systems and methods according to the teachings herein.
Example memory device 1 may include: timing circuitry that determines the occurrence of a trigger signal received on a pin of the memory device; and a processor configured to execute instructions stored on one or more components of the memory device, the instructions, when executed by the processor, causing the memory device to perform operations, the operations including, in response to the determination of the occurrence of the trigger signal, dumping the error log associated with one or more error conditions to the memory of the memory device.
Example memory device 2 may include the features of example memory device 1 and may include that the pin is a pin that receives a signal for performing a function of the memory device, where the signal is different from the trigger signal.
Example memory device 3 may include the features of example memory device 2 and of the foregoing example memory devices and may include that the trigger signal undergoes multiple toggles within a time corresponding to a specified length of time for which the signal is specified to be pulled low or pulled high.
Example memory device 4 may include the features of example memory device 2 and of any of the foregoing example memory devices and may include that the pin is a reset pin that receives a reset signal to identify a reset event of the memory device, where the trigger signal is different from the reset signal.
Example memory device 5 may include the features of example memory device 4 and of any of the aforementioned example memory devices and may include that the trigger signal undergoes multiple toggles within a toggle period of approximately two hundred nanoseconds.
Example memory device 6 may include the features of any of the foregoing example memory devices and may include the memory device completing an ongoing task and saving cached host data in response to the determination of the occurrence of the trigger signal.
Example memory device 7 may include the features of any of the foregoing example memory devices and may include that the instructions are stored in a dedicated portion of the memory device, the dedicated portion being separate from the firmware that controls data management of the memory device for data storage.
Example memory device 8 may include the features of example memory device 7 and of any of the aforementioned example memory devices and may include that the dedicated portion of the memory device is a part of a static random access memory or a read-only memory.
Example memory device 9 may include the features of any of the aforementioned example memory devices and may include the error log including hardware information and firmware information.
Example memory device 10 may include the features of any of the foregoing example memory devices and may include the error log including one or more of data timeout, data mismatch, fatal error, initialization timeout, and identification of stuck system firmware.
Example memory device 11 may include the features of any of the aforementioned example memory devices and may include transmitting the error log dumped to the memory of the memory device from that memory to the host.
In example memory device 12, any one of the example memory devices 1 to 11 may be incorporated into an electronic system that additionally includes a host processor and a communication bus extending between the host processor and the memory device.
In example memory device 13, any one of the example memory devices 1 to 12 may be modified to include any structure presented in another of the example memory devices 1 to 12.
In example memory device 14, any one of the example memory devices 1 to 13 may additionally include a machine-readable storage device configured to store instructions as a physical state, wherein the instructions may be used to perform one or more operations of the device.
In example memory device 15, any of the example memory devices 1 to 14 may be adapted and operated to perform operations according to any of the following example methods 1 to 7 of saving an error log of the memory device.
Example method 1 of saving the error log of a memory device may include: receiving a signal at a pin of the memory device; based on the timing parameters of the signal, determining whether the signal is a trigger signal received on the pin of the memory device; and, in response to the determination that the signal is the trigger signal, dumping the error log associated with one or more error conditions to the memory of the memory device.
Example method 2 of saving an error log of a memory device may include the features of example method 1 of saving an error log of a memory device and may include that determining whether the received signal is the trigger signal includes determining whether the received signal has undergone multiple toggles within a time corresponding to a specified length of time for which a non-error signal assigned to the pin is pulled low or pulled high.
Example method 3 of saving the error log of a memory device may include the features of any of the foregoing example methods of saving the error log of a memory device and may include that the pin is a reset pin that receives a reset signal to identify a reset event of the system, wherein the trigger signal is different from the reset signal.
Example method 4 of saving an error log of a memory device may include the features of any of the foregoing example methods of saving an error log of a memory device and may include that determining whether the signal is the trigger signal and dumping the error log are performed by the processor of the memory device executing instructions, wherein the instructions are stored in a dedicated portion of the memory device, the dedicated portion being separate from the firmware that controls data management for data stored in the memory device.
In example method 5 of saving the error log of a memory device, any one of the example methods 1 to 4 of saving the error log of a memory device may be performed by an electronic system including a host processor and a communication interface extending between the host processor and the memory device.
In example method 6 of saving the error log of a memory device, any one of the example methods 1 to 5 of saving the error log of a memory device may be modified to include operations described in any other of the example methods 1 to 5 of saving the error log of a memory device.
In example method 7 of saving the error log of a memory device, any one of the example methods 1 to 6 of saving the error log of a memory device may be implemented at least in part by using instructions stored as a physical state in one or more machine-readable storage devices.
Example method 8 of saving an error log of a memory device may include the features of any of the aforementioned example methods 1 to 7 of saving an error log of a memory device and may include performing functions associated with any of the features of example memory devices 1-14.
An example machine-readable storage device 1 stores instructions that, when executed by one or more processors, cause a machine to perform operations; the instructions may include instructions to perform functions associated with any feature of example memory devices 1 to 14 or to perform methods associated with any feature of example methods 1 to 8 of saving the error log of a memory device.
An example system 1 interfacing with a memory device may include a processor configured to execute instructions stored on one or more components in the system, the instructions, when executed by the processor, causing the system to perform operations, the operations including: detecting one or more error conditions associated with the memory device; generating a trigger signal with specified timing parameters; and, in response to the detection of the one or more error conditions, transmitting the trigger signal to a pin of the memory device to trigger the dump of the error log in the memory device, the pin being allocated to a function of the memory device other than triggering the dump.
Example system 2 interfacing with the memory device may include the features of example system 1 interfacing with the memory device and may include that the pin is the reset pin of the memory device, the reset pin being configured to receive a reset signal from the system to identify a reset event of the memory device, wherein the reset signal is different from the trigger signal.
Example system 3 interfacing with the memory device may include the features of example system 2 interfacing with the memory device and of any of the aforementioned example systems interfacing with the memory device, and may include the trigger signal being structured to undergo multiple toggles within a time corresponding to a specified length of time for which the reset signal is pulled low or pulled high.
Example system 4 interfacing with the memory device may include the features of any of the foregoing example systems interfacing with the memory device and may include the error log including one or more of data timeout, data mismatch, fatal error, initialization timeout, and identification of stuck firmware.
Example system 5 interfacing with the memory device may include the features of any of the foregoing example systems interfacing with the memory device and may include receiving, from the memory of the memory device, the error log dumped to the memory of the memory device.
In example system 6 interfacing with the memory device, any of the example systems 1 to 5 interfacing with the memory device may include the memory device being incorporated into an electronic system that additionally includes a host processor and a communication bus extending between the host processor and the memory device.
In example system 7 interfacing with the memory device, any one of the example systems 1 to 6 interfacing with the memory device may be modified to include any structure presented in another of the example systems 1 to 6 interfacing with the memory device.
In example system 8 interfacing with the memory device, any one of the example systems 1 to 7 interfacing with the memory device may additionally include a machine-readable storage device configured to store instructions as a physical state, where the instructions can be used to perform one or more operations of the device.
In example system 9 interfacing with the memory device, any of the example systems 1 to 8 interfacing with the memory device can be adapted and operated to perform operations according to any of the following example methods 1 to 8 of saving an error log in the memory device through a system interfacing with the memory device.
Example method 1 of saving an error log in a memory device through a system interfacing with the memory device may include: detecting one or more error conditions associated with the memory device; generating a trigger signal with timing parameters defined for the trigger signal; and, in response to the detection of the one or more error conditions, transmitting the trigger signal to a pin of the memory device to trigger the dump of the error log in the memory device, the pin being allocated to a function of the memory device other than triggering the dump.
Example method 2 of saving the error log in a memory device through a system interfacing with the memory device may include the features of example method 1 of saving the error log in a memory device through a system interfacing with the memory device and may include that the pin is a reset pin of the memory device, the reset pin being configured to receive a reset signal from the system to identify a reset event of the memory device, wherein the reset signal is different from the trigger signal.
Example method 3 of saving the error log in a memory device through a system interfacing with the memory device may include the features of any of the foregoing example methods of saving the error log in a memory device through a system interfacing with the memory device and may include the trigger signal being structured to undergo multiple toggles within a time corresponding to a specified length of time for which the reset signal is pulled low or pulled high.
Example method 4 of saving the error log in a memory device through a system interfacing with the memory device may include the features of any of the foregoing example methods of saving the error log in a memory device through a system interfacing with the memory device and may include the error log including one or more of data timeout, data mismatch, fatal error, initialization timeout, and identification of stuck firmware.
Example method 5 of saving the error log in a memory device through a system interfacing with the memory device may include the features of any one of the foregoing example methods of saving the error log in a memory device through a system interfacing with the memory device and may include receiving, from the memory of the memory device, the error log dumped to the memory of the memory device.
In example method 6 of saving the error log in a memory device through a system interfacing with the memory device, any one of the example methods 1 to 4 of saving the error log in a memory device through a system interfacing with the memory device may be performed by an electronic system including a host processor and a communication interface extending between the host processor and the memory device.
In example method 7 of saving the error log in a memory device through a system interfacing with the memory device, any one of the example methods 1 to 6 of saving the error log in a memory device through a system interfacing with the memory device may be modified to include operations described in any other of the example methods 1 to 6.
In example method 8 of saving the error log in a memory device through a system interfacing with the memory device, any one of the example methods 1 to 7 of saving the error log in a memory device through a system interfacing with the memory device may be implemented at least in part by using instructions stored as a physical state in one or more machine-readable storage devices.
Example method 9 of saving the error log in a memory device through a system interfacing with the memory device may include the features of any one of the aforementioned example methods 1 to 8 of saving the error log in a memory device through a system interfacing with the memory device, and may include performing functions associated with any of the features of the example systems 1-8 interfacing with the memory device.
An example machine-readable storage device 2 stores instructions that, when executed by one or more processors, cause a machine to perform operations; the instructions may include instructions to perform functions associated with any feature of the example systems 1 to 9 interfacing with the memory device or to perform methods associated with any feature of the example methods 1 to 9 of saving an error log in a memory device through a system interfacing with the memory device.
The methods as taught herein provide effective debugging of failures found on an operational system. The operational system can be, but is not limited to, a mobile phone. The methods can use existing pins on the memory device, with added logic to respond to a trigger signal that is structured to be different from the signals normally sent to those existing pins of the memory device to perform common functions of the memory device. The trigger signal can be a dedicated debug input signal whose structure distinguishes it from the normal functional signal sent to the existing pin of the memory device. The structural distinction may be based on the trigger signal having timing parameters different from the timing parameters of the normal functional signal.
In different examples, the components, controllers, processors, units, engines, or tables described herein may include, among other things, physical circuitry or firmware stored on a physical device. As used herein, "processor device" means any type of computing circuit, such as, but not limited to, a microprocessor, microcontroller, graphics processor, digital signal processor (DSP), or any other type of processor or processing circuit, including a group of processors or a multi-core device.
As used herein, operating on a memory cell includes reading from, writing to, or erasing the memory cell. The operation of placing a memory cell in a predetermined state is referred to herein as "programming" and may include writing to or erasing from the memory cell (e.g., the memory cell may be programmed to an erased state).
The method examples described herein can be implemented at least in part by a machine or a computer. Some examples may include a computer-readable medium or a machine-readable medium encoded with instructions that can be used to configure an electronic device to perform methods as described in the examples above. Implementations of such methods may include code, such as microcode, assembly language code, high-level language code, or the like. Such code may contain computer-readable instructions for performing various methods. The code can form part of a computer program product.
In addition, the code may be tangibly stored on one or more volatile or non-volatile tangible computer-readable media, for example, during execution or at other times. Examples of these tangible computer-readable media may include, but are not limited to, hard disks, removable disks, removable optical disks (for example, compact disks and digital video disks), cassette tapes, memory cards or sticks, RAM, ROM, SSDs, UFS devices, eMMC devices, and the like.
Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will understand that any arrangement calculated to achieve the same purpose can be substituted for the specific embodiments shown. Various embodiments use permutations and/or combinations of the embodiments described herein. The above description is intended to be illustrative rather than restrictive, and the phraseology or terminology used herein is for purposes of description. In addition, in the above detailed description, it can be seen that, for the purpose of streamlining the present disclosure, various features are grouped together in a single embodiment. Combinations of the above embodiments, and other embodiments, will be apparent to those of skill in the art upon studying the above description. |
A method for in situ formation of low defect, strained silicon and a device formed according to the method are disclosed. In one embodiment, a silicon germanium layer is formed on a substrate, and a portion of the silicon germanium layer is removed to expose a surface that is smoothed with a smoothing agent. A layer of strained silicon is formed on the silicon germanium layer. In various embodiments, the entire method is conducted in a single processing chamber, which is kept under vacuum. |
1. A method for manufacturing a semiconductor device, comprising: forming a silicon germanium layer on a substrate in a processing chamber; removing a part of the silicon germanium layer in the processing chamber; after removing the part of the silicon germanium layer, smoothing a surface of the silicon germanium layer in the processing chamber; and forming a silicon layer on the smoothed surface of the silicon germanium layer, wherein the silicon germanium layer includes a relaxed silicon germanium layer, and wherein the lattice spacing of the silicon is mismatched with the lattice spacing of the relaxed silicon germanium layer.
2. The method of claim 1, wherein the substrate is not removed from the processing chamber until after the silicon layer is formed.
3. The method of claim 2, wherein the processing chamber is maintained under vacuum at least from the removal of the part of the silicon germanium layer until after the silicon layer is formed.
4. The method of claim 1, wherein forming the silicon germanium layer comprises: forming a first silicon germanium layer on a silicon substrate, wherein the germanium concentration of the first layer gradually increases along the thickness of the first layer; and forming a second silicon germanium layer on the first silicon germanium layer, wherein the germanium concentration of the second layer remains constant throughout the thickness of the second layer, and wherein the second silicon germanium layer includes the relaxed silicon germanium layer.
5. The method of claim 4, wherein forming the first layer comprises: increasing the germanium concentration of the first layer such that the germanium concentration increases by 10% for each micron of thickness of the first layer.
6. The method of claim 4, wherein forming the second layer comprises: introducing into the second layer germanium having a concentration approximately equal to the germanium concentration of the upper portion of the first layer.
7. The method of claim 4, wherein the second layer is formed to a thickness between about 0.5-1 microns.
8. The method of claim 1, wherein the removing comprises: introducing an etchant onto the surface of the silicon germanium layer.
9. The method of claim 8, wherein the etchant comprises at least one of HCl and HBr.
10. The method of claim 1, wherein a thickness of between about 0.1-0.2 microns of the silicon germanium layer is removed.
11. The method of claim 1, wherein the smoothing comprises: introducing a smoothing agent onto the surface of the silicon germanium layer.
12. The method of claim 11, wherein the smoothing agent comprises hydrogen gas.
13. The method of claim 12, wherein the hydrogen gas is introduced at a temperature of about 1100 degrees Celsius.
14. The method of claim 1, wherein the silicon layer is formed to a thickness between about 50 angstroms and 1000 angstroms.
15. The method of claim 1, wherein the silicon layer is an expanded, stretched silicon layer.
16. A semiconductor device, comprising: a substrate; a silicon germanium layer formed on the substrate, the silicon germanium layer having a smoothed surface; and a silicon layer formed on the smoothed surface of the silicon germanium layer, wherein the defect density of the silicon layer is less than about 10,000 dislocations per square centimeter, wherein the silicon germanium layer includes a relaxed silicon germanium layer, and wherein the lattice spacing of the silicon does not match the lattice spacing of the relaxed silicon germanium layer.
17. The device of claim 16, wherein the silicon germanium layer comprises:
a first silicon germanium layer formed on the substrate, wherein the germanium concentration of the first layer gradually increases along the thickness of the first layer; and a second silicon germanium layer formed on the first silicon germanium layer, wherein the germanium concentration of the second layer is kept constant throughout the thickness of the second layer, and wherein the second silicon germanium layer includes the relaxed silicon germanium layer.
18. The device of claim 17, wherein the germanium concentration of the first layer increases by 10% for each micron of thickness of the first layer.
19. The device of claim 17, wherein the thickness of the second layer is between about 0.5-1 microns.
20. The device of claim 17, wherein the germanium concentration of the second layer is approximately equal to the germanium concentration of the upper portion of the first layer.
21. The device of claim 17, wherein the thickness of the silicon layer is between about 50 angstroms and 1000 angstroms.
22. The device of claim 16, wherein the silicon layer is an expanded, stretched silicon layer.
23. The device of claim 16, wherein the silicon layer includes an etched surface.
24. A method for manufacturing a semiconductor device, comprising: forming a first silicon germanium layer on a silicon substrate in a processing chamber, wherein the germanium concentration of the first layer gradually increases along the thickness of the first layer; forming a second silicon germanium layer on the first silicon germanium layer in the processing chamber, wherein the germanium concentration of the second layer is kept constant throughout the thickness of the second layer; removing a part of the second layer in the processing chamber; after removing the part of the second layer, smoothing a surface of the second layer in the processing chamber; and forming a silicon layer on the smoothed surface of the second layer, wherein the second silicon germanium layer includes a relaxed silicon germanium layer, and wherein the lattice spacing of the silicon is mismatched with the lattice spacing of the relaxed silicon germanium layer.
25. The method of claim 24, wherein the substrate is not removed from the processing chamber until after the silicon layer is formed.
26. The method of claim 25, wherein the processing chamber is maintained under vacuum at least from the removal of the part of the second layer until after the formation of the silicon layer.
27. The method of claim 24, wherein forming the first layer comprises: increasing the germanium concentration of the first layer such that the germanium concentration increases by 10% for each micron of thickness of the first layer.
28. The method of claim 24, wherein forming the second layer comprises: introducing into the second layer germanium having a concentration approximately equal to the germanium concentration of the upper portion of the first layer.
29. The method of claim 24, wherein the removing comprises: introducing an etchant onto the surface of the second layer.
30. The method of claim 29, wherein the etchant comprises at least one of HCl and HBr.
31. The method of claim 24, wherein the smoothing comprises: introducing a smoothing agent onto the surface of the second layer.
32. The method of claim 31, wherein the smoothing agent comprises hydrogen gas.
33. The method of claim 24, wherein the silicon layer is an expanded silicon layer. |
Stretched semiconductor structure
Technical field
The embodiments disclosed herein generally relate to circuit processing.
Background
The performance level of various semiconductor devices, such as transistors, depends at least in part on the mobility of charge carriers (e.g., electrons and/or electron vacancies, also known as holes) through the semiconductor device. In transistors, the mobility of charge carriers through the channel region is particularly important.
The mobility of charge carriers may be affected by various factors. For example, the rough surface of a particular layer of a device may reduce the mobility of charge carriers through that layer of the device. Dislocations may also reduce the mobility of charge carriers by forming local scattering regions for the charge carriers. Such a local scattering region may act as a leakage current path that causes power loss through this part of the device.
The problems associated with dislocations are not limited to a single layer of the device. Specifically, dislocations in an existing device layer may be transferred to, and propagate through, other layers formed on this existing layer. In this way, dislocations that occur in one layer may subsequently migrate and suppress charge carrier mobility throughout one or more layers of the final device.
Various techniques have been used to increase the charge carrier mobility in semiconductor devices. For example, the epitaxial growth process commonly used to form the layers of the device can be significantly slowed to reduce the number of defects (e.g., dislocations) in the final device. However, devices built according to this technique generally still have dislocations on the order of 100,000 per square centimeter.
Alternatively, chemical mechanical polishing ("CMP") can be used to reduce the thickness of a layer of the device and at the same time smooth the surface of the reduced-thickness layer, which can increase the charge carrier mobility. However, the CMP process is relatively expensive and complicated because it requires at least two other components (e.g., a CMP component and a cleaning component that cleans the device after the CMP process) in addition to the epitaxial growth component. From an infrastructure perspective, the other components used in the CMP process generally require expensive items such as slurry supply, waste disposal, and additional space.
Moreover, the CMP process requires moving the device layer between components, which exposes the device layer to atmospheric contaminants and native oxides, both of which may introduce impurities that increase defects in the device layer. Device layers constructed according to the CMP technique generally have dislocations on the order of about 10,000 per square centimeter.
BRIEF DESCRIPTION
Various embodiments are shown in the drawings by way of example and not by way of limitation, where the same reference numbers indicate similar units. It should be noted that references to "an", "this", "the", "other", "alternative" or "various" embodiments in this disclosure are not necessarily to the same embodiment; such references indicate at least one.
FIG. 1 is a flowchart showing one embodiment of a method for forming low-defect, strained silicon in situ.
FIG. 2 shows the formation of a graded silicon germanium layer on a substrate according to one embodiment.
FIG. 3 shows the formation of a relaxed silicon germanium layer on the graded silicon germanium layer of FIG. 2.
FIG. 4 shows the introduction of an etchant and a smoothing agent onto the surface of the relaxed silicon germanium layer of FIG. 3.
FIG. 5 shows the formation of a silicon layer on the smooth surface of the reduced relaxed silicon germanium layer.
FIG. 6 is an embodiment of a device constructed according to the method described herein.
DETAILED DESCRIPTION
For illustration, the following description and drawings provide examples. However, these examples should not be considered limiting, as they are not intended to provide an exhaustive series of all possible implementations.
Referring now to FIG. 1, a flowchart of one embodiment of a method for forming low-defect, strained silicon in situ is shown. At block 10, a silicon germanium layer is formed on the substrate in the processing chamber. In various embodiments, the substrate is composed of silicon. The processing chamber may be, for example, a chemical vapor deposition ("CVD") chamber, an organometallic chemical vapor deposition ("MOCVD") chamber, or a plasma enhanced chemical vapor deposition ("PECVD") chamber.
In one embodiment, the silicon germanium layer may be composed of a graded silicon germanium layer formed on the substrate and a relaxed silicon germanium layer formed on this graded silicon germanium layer. For example, the germanium concentration of the graded silicon germanium layer may increase along the entire thickness of this graded silicon germanium layer. In various embodiments, the germanium concentration across the graded silicon germanium layer may be between about 0% and 30%. However, other concentrations beyond this range can also be used.
For a p-type metal oxide semiconductor ("PMOS") device, in one embodiment, the germanium concentration in the upper portion of the graded silicon germanium layer is between about 25% and 30%. For an n-type metal oxide semiconductor ("NMOS") device, in one embodiment, the germanium concentration in the upper portion of the graded silicon germanium layer is between about 20% and 25%. However, a 30% germanium concentration in the upper part of the graded silicon germanium layer is effective for both PMOS and NMOS devices. Although preferred germanium concentrations for PMOS and NMOS devices have been described above, other concentrations can also be used.
In one embodiment, the germanium concentration in the graded silicon germanium layer can be increased by 10 percent for each micrometer of graded silicon germanium layer thickness. For example, a 3 micron thick graded silicon germanium layer can be grown in 8-12 hours, with the germanium concentration gradually increasing from 0% at the bottom of the layer to 30% at the top of the layer. In various embodiments, the chemicals used to form the silicon germanium layer (e.g., a layer that may include a graded layer and a relaxed layer) may include one or more of silane (e.g., SiH4), germane (e.g., GeH4), and dichlorosilane (e.g., SiH2Cl2), according to the desired germanium content. The concentration of each particular component (e.g., silane, germane, dichlorosilane) can be varied during introduction into the processing chamber (e.g., a chemical vapor deposition chamber) to obtain the grading effect.
The relaxed silicon germanium layer may have a constant germanium concentration that is approximately the same as the germanium concentration in the upper portion of the graded silicon germanium layer. Moreover, the thickness of the relaxed silicon germanium layer may be between about 0.5-1 microns.
The relaxed silicon germanium layer may have a constant germanium concentration that is approximately the same as the germanium concentration in the upper portion of the graded silicon germanium layer. Moreover, the thickness of the relaxed silicon germanium layer may be between about 0.5-1 microns. At block 12 of FIG. 1, a portion of the silicon germanium layer is removed in the processing chamber to eliminate the upper surface of the silicon germanium layer, which may have more dislocations than the lower portion of the layer. In various embodiments, about 0.1-0.2 microns of the silicon germanium layer is removed. The step of removing a portion of the silicon germanium layer may include the step of introducing an etchant to the surface of the layer. The etchant may include, for example, at least one of HCl and HBr. In embodiments where the silicon germanium layer includes a graded silicon germanium layer and a relaxed silicon germanium layer, the etchant may be introduced before and/or after the formation of the relaxed silicon germanium layer. If an etchant is applied to the surface of the relaxed silicon germanium layer, it can beneficially remove any cross-hatched surface roughness of the relaxed layer, which is caused by dislocations in the graded silicon germanium layer propagating upward to the surface of the relaxed layer. At block 14, the surface of the silicon germanium layer (e.g., the surface exposed by the removal of block 12) may be smoothed within the processing chamber. Although represented as two distinct blocks, the removal of block 12 and the smoothing of block 14 can be performed simultaneously or sequentially. By smoothing the exposed surface, dislocations can be removed and/or minimized to prevent them from being transferred upward from the silicon germanium layer to the silicon layer formed at block 16. In various embodiments, the smoothing step includes the step of introducing a smoothing agent (e.g., hydrogen) to the surface of the silicon germanium layer. Similar to the etchant, the smoothing agent may be introduced before and/or after the formation of the relaxed silicon germanium layer. The step of introducing a smoothing agent such as hydrogen may be performed at a temperature of about 1100 degrees Celsius (e.g., a high-temperature anneal). At block 16, a silicon layer is formed on the smoothed surface of the silicon germanium layer. The chemicals used to form the silicon layer include silane. In various embodiments, the silicon layer may be formed to a thickness between approximately 50 angstroms and 1000 angstroms. The silicon layer formed at block 16 may have a relatively smooth surface and a low defect level (e.g., fewer than about 10,000 dislocations per square centimeter, preferably fewer than about 1,000 dislocations per square centimeter), because many of the defects of the silicon germanium layer have been etched away and the top surface of the silicon germanium layer has been smoothed to prevent the defects from propagating upward into the silicon layer. Due to the lattice size mismatch between silicon and silicon germanium (silicon germanium has a larger lattice because of its germanium content), forming a silicon layer on the silicon germanium layer results in a strained silicon layer: the silicon layer expands (e.g., is stretched) to match the silicon germanium lattice. Strained silicon beneficially increases the charge carrier mobility through the device. There are other advantages as well; in particular, the reduction of defects and/or dislocations during processing helps to maximize the benefits of strained silicon. As shown in FIG. 1, throughout blocks 10 to 16 the substrate remains in the same processing chamber.
Moreover, in various embodiments, the processing chamber may be maintained under vacuum at least for the period from the removal of a portion of the silicon germanium layer (e.g., block 12) until after the silicon layer is formed (e.g., block 16). One advantage of not removing the substrate from the processing chamber until after the silicon layer is formed is that the introduction of atmospheric contaminants onto the substrate during processing can be minimized, if not eliminated, which reduces the number of defects on the substrate. This advantage can also be achieved by maintaining the processing chamber under vacuum during processing, which limits the level of impurities (e.g., atmospheric contaminants and native oxides) that can be deposited onto the substrate. FIGS. 2 to 5 illustrate a sequence for forming low-defect strained silicon in situ according to one embodiment. Specifically, FIG. 2 shows the substrate 20 on which the graded silicon germanium layer 18 is formed. As described above, the germanium concentration of the graded silicon germanium layer 18 may gradually increase along its thickness. In various embodiments, the germanium concentration increases by 10 percentage points for each micrometer of thickness of the graded silicon germanium layer 18. FIG. 3 shows the relaxed silicon germanium layer 22 formed on the graded silicon germanium layer 18 of FIG. 2. In various embodiments, the germanium concentration of the relaxed silicon germanium layer 22 is constant throughout its thickness, and this concentration is approximately equal to the germanium concentration of the upper portion of the graded silicon germanium layer 18. Moreover, in one embodiment, the thickness of the relaxed silicon germanium layer 22 may be between about 0.5-1 microns. FIG. 4 shows the etchant and smoothing agent introduced as a mixture 24 onto the surface of the relaxed silicon germanium layer 22. As described above, the etchant and the smoothing agent may be introduced separately or simultaneously to remove a portion of the relaxed silicon germanium layer 22 and to smooth the surface exposed by that removal. The result of introducing the mixture 24 is a reduced (e.g., in thickness) relaxed silicon germanium layer 26, as shown in FIG. 5. FIG. 5 also shows the silicon layer 28 formed on the reduced relaxed silicon germanium layer 26. In various embodiments, the thickness of the silicon layer 28 may be between about 50 angstroms and 1000 angstroms. In various embodiments, the defect density of the silicon layer 28 is less than about 10,000 dislocations per square centimeter, and more preferably less than about 1,000 dislocations per square centimeter. If the device of FIG. 5 is constructed according to the teachings of the various embodiments disclosed herein, the interface between the silicon germanium layer and the silicon layer will have good edge uniformity, and there will be no etching residue along this interface. These two characteristics of the interface distinguish such devices from devices constructed using the CMP process, which, owing to the nature of that process, leaves etching residue and a non-uniform interface. Moreover, the device of FIG. 5 will be free of atmospheric contaminants along the interface between the silicon germanium layer and the silicon layer, because the device is formed in a single processing chamber. The various methods described herein can be used, for example, to form the device 29 of FIG. 6.
The device 29 includes a composite substrate 31 having a first source/drain region 32 and a second source/drain region 34 formed therein. The gate electrode 36 is formed on the surface of the composite substrate 31. In this embodiment, the composite substrate also includes a silicon-based substrate 30. The channel region of the device 29 (e.g., below the gate electrode 36 as shown in FIG. 6) includes a graded silicon germanium layer 38, a relaxed silicon germanium layer 40, and a silicon layer 42. In other embodiments, a single silicon germanium layer (e.g., whose germanium concentration may be graded or constant) may be used instead of the combination of the graded silicon germanium layer 38 and the relaxed silicon germanium layer 40. The graded silicon germanium layer 38 is provided on the substrate 30. As described above, in one embodiment, the germanium concentration of the graded silicon germanium layer 38 gradually increases along its entire thickness. For example, the germanium concentration may increase by 10 percentage points for each micrometer of thickness of the graded silicon germanium layer 38. The relaxed silicon germanium layer 40 is provided on the graded silicon germanium layer 38, and its germanium concentration is kept constant along its entire thickness. In one embodiment, the germanium concentration of the relaxed silicon germanium layer 40 is approximately equal to the germanium concentration of the upper portion of the graded silicon germanium layer 38. In various embodiments, the thickness of the relaxed silicon germanium layer 40 may be between about 0.5-1.0 microns. The silicon layer 42 is provided on the relaxed silicon germanium layer 40. In various embodiments, the thickness of the silicon layer 42 may be between about 50 angstroms and 1000 angstroms. Due to the difference in lattice size between the relaxed silicon germanium layer 40 and the silicon layer 42, the silicon layer 42 is strained, which increases the charge carrier mobility through the channel region of the device 29. The device 29 can beneficially be used as a transistor in any suitable circuit owing to its increased charge carrier mobility. It should be understood that even though the foregoing description has set forth many features and advantages of various embodiments, together with structural and functional details of those embodiments, the disclosure is illustrative only. Changes may be made in detail, especially in matters of the structure and processing of the various parts, without departing from the scope of the various embodiments as expressed by the broad general meaning of the terms of the appended claims. |
Systems and methods are provided for allocating memory to dissimilar memory devices. An exemplary embodiment includes a method for allocating memory to dissimilar memory devices. An interleave bandwidth ratio is determined, which comprises a ratio of bandwidths for two or more dissimilar memory devices. The dissimilar memory devices are interleaved according to the interleave bandwidth ratio to define two or more memory zones having different performance levels. Memory address requests are allocated to the memory zones based on a quality of service (QoS). |
CLAIMS What is claimed is: 1. A method for allocating memory to dissimilar memory devices, the method comprising: determining an interleave bandwidth ratio comprising a ratio of bandwidths for two or more dissimilar memory devices; interleaving the dissimilar memory devices according to the interleave bandwidth ratio and defining two or more memory zones having different performance levels; and allocating memory address requests to the memory zones based on a quality of service (QoS). 2. The method of claim 1, wherein the dissimilar memory devices comprise a first type of dynamic random access memory (DRAM) and a second type of DRAM. 3. The method of claim 2, wherein one or more of the first type or second type of DRAM comprises a double data rate (DDR) memory. 4. The method of claim 1, wherein the QoS comprises a declared QoS from an application. 5. The method of claim 4, wherein the allocating the memory address requests to the memory zones based on the QoS comprises a high-level operating system (HLOS) receiving the memory address requests. 6. The method of claim 1, wherein the QoS is declared via an application program interface (API) associated with a high-level operating system (HLOS). 7. The method of claim 1, wherein the QoS comprises an estimated QoS based on a current performance of one or more of the memory zones. 8. The method of claim 1, wherein the allocating memory address requests to the memory zones based on the quality of service (QoS) comprises a memory channel optimization module estimating the QoS. 9. A system for allocating memory to dissimilar memory devices, the system comprising: means for determining an interleave bandwidth ratio comprising a ratio of bandwidths for two or more dissimilar memory devices; means for interleaving the dissimilar memory devices according to the interleave bandwidth ratio and defining two or more memory zones having different performance levels; and means for allocating memory address requests to the memory zones based on a quality of service (QoS). 10. The system of claim 9, wherein the dissimilar memory devices comprise a first type of dynamic random access memory (DRAM) and a second type of DRAM. 11. The system of claim 10, wherein one or more of the first type or second type of DRAM comprises a double data rate (DDR) memory. 12. The system of claim 9, wherein the QoS comprises one of a declared QoS from an application or an estimated QoS. 13. The system of claim 9, wherein the means for allocating comprises one of a high-level operating system (HLOS) and a memory channel optimization module. 14. A memory system for managing memory devices in a computer system, the memory system comprising: a first type of memory device; a second type of memory device; a memory channel optimization module in communication with the first and second types of memory devices, the memory channel optimization module operable in a unified mode of operation to interleave the first and second types of memory devices by: determining an interleave bandwidth ratio comprising a ratio of bandwidths for the first type of memory device and the second type of memory device; and interleaving the first and second types of memory devices according to the interleave bandwidth ratio and defining two or more memory zones having different performance levels; and a high-level operating system (HLOS) in communication with the memory channel optimization module for allocating memory address requests from one or more applications to one of the memory zones based on a QoS. 15.
The memory system of claim 14, wherein the first type of memory device comprises a first type of double data rate (DDR) memory and the second type of memory device comprises a second type of DDR memory. 16. The memory system of claim 14, wherein the HLOS receives the QoS via an associated application program interface (API). 17. The memory system of claim 14, wherein the memory channel optimization module is further operable to estimate the QoS based on a current performance level for one or more of the memory zones. 18. A computer program product comprising a computer usable medium having a computer readable program code embodied therein, the computer readable program code adapted to be executed to implement a method for dynamically allocating memory to dissimilar memory devices, the method comprising: determining an interleave bandwidth ratio comprising a ratio of bandwidths for two or more dissimilar memory devices; interleaving the dissimilar memory devices according to the interleave bandwidth ratio and defining two or more memory zones having different performance levels; and allocating memory address requests to the memory zones based on a quality of service (QoS). 19. The computer program product of claim 18, wherein the dissimilar memory devices comprise a first type of dynamic random access memory (DRAM) and a second type of DRAM. 20. The computer program product of claim 19, wherein one or more of the first type or second type of DRAM comprises a double data rate (DDR) memory. 21. The computer program product of claim 18, wherein the QoS comprises a declared QoS from an application. 22. The computer program product of claim 21, wherein the allocating the memory address requests to the memory zones based on the declared QoS comprises a high-level operating system (HLOS) receiving the memory address requests. 23. The computer program product of claim 18, wherein the QoS is declared via an application program interface (API) associated with a high-level operating system (HLOS). 24. The computer program product of claim 18, wherein the QoS comprises an estimated QoS based on a current performance of one or more of the memory zones. 25. The computer program product of claim 18, wherein the allocating memory address requests to the memory zones based on the quality of service (QoS) comprises a memory channel optimization module estimating the QoS. |
SYSTEM AND METHOD FOR ALLOCATING MEMORY TO DISSIMILAR MEMORY DEVICES USING QUALITY OF SERVICE PRIORITY AND RELATED APPLICATIONS STATEMENT [0001] This application is a continuation-in-part patent application of copending U.S. Patent Application Serial No. 13/726,537 filed on December 24, 2012, and entitled "System and Method for Managing Performance of a Computing Device Having Dissimilar Memory Types" (Docket No. 123065U1), which claims priority under 35 U.S.C. 119(e) to U.S. Provisional Patent Application filed on December 10, 2012, assigned Provisional Application Serial No. 61/735,352 (Docket No. 123065P1), and entitled "System and Method for Managing Performance of a Computing Device Having Dissimilar Memory Types," each of which is hereby incorporated by reference in its entirety. DESCRIPTION OF THE RELATED ART [0002] System performance and power requirements are becoming increasingly demanding in computer systems and devices, particularly in portable computing devices (PCDs), such as cellular telephones, portable digital assistants (PDAs), portable game consoles, palmtop computers, tablet computers, and other portable electronic devices. Such devices may comprise two or more types of processing units optimized for a specific purpose. For example, one or more central processing units (CPUs) may be used for general system-level performance or other purposes, while a graphics processing unit (GPU) may be specifically designed for manipulating computer graphics for output to a display device. As each processor requires more performance, there is a need for faster and more specialized memory devices designed to enable the particular purpose(s) of each processor. Memory architectures are typically optimized for a specific application. CPUs may require high-density memory with an acceptable system-level performance, while GPUs may require relatively lower-density memory with a substantially higher performance than CPUs. [0003] As a result, a single computer device, such as a PCD, may include two or more dissimilar memory devices, with each specialized memory device optimized for its special purpose and paired with and dedicated to a specific processing unit. In this conventional architecture (referred to as a "discrete" architecture), each dedicated processing unit is physically coupled to a different type of memory device via a plurality of physical/control layers, each with a corresponding memory channel. Each dedicated processing unit physically accesses the corresponding memory device at a different data rate optimized for its intended purpose. For example, in one exemplary configuration, a general purpose CPU may physically access a first type of dynamic random access memory (DRAM) device at an optimized data bandwidth (e.g., 17GB/s). A higher-performance, dedicated GPU may physically access a second type of DRAM device at a higher data bandwidth (e.g., 34GB/s). While the discrete architecture individually optimizes the performance of the CPU and the GPU, there are a number of significant disadvantages. [0004] To obtain the higher performance, the GPU-dedicated memory must be sized and configured to handle all potential use cases, display resolutions, and system settings. Furthermore, the higher performance is "localized" because only the GPU is able to physically access the GPU-dedicated memory at the higher data bandwidth.
While the CPU can access the GPU-dedicated memory and the GPU can access the CPU-dedicated memory, the discrete architecture provides this access via a physical interconnect bus (e.g., a Peripheral Component Interconnect Express (PCIE) bus) between the GPU and the CPU at a reduced data bandwidth, which is typically less than the optimized bandwidth for either type of memory device. Even if the physical interconnect bus between the GPU and the CPU did not function as a performance "bottleneck", the discrete architecture does not permit either the GPU or the CPU to take advantage of the combined total available bandwidth of the two different types of memory devices. The memory spaces of the respective memory devices are placed in separate contiguous blocks of memory addresses. In other words, the entire memory map places the first type of memory device in one contiguous block and separately places the second type of memory device in a different contiguous block. There is no hardware coordination between the memory ports of the different memory devices to support physical access residing within the same contiguous block. [0005] Accordingly, while there is an increasing demand for more specialized memory devices in computer systems to provide increasingly more system and power performance in computer devices, there remains a need in the art for improved systems and methods for managing dissimilar memory devices. SUMMARY OF THE DISCLOSURE [0006] Systems and methods are provided for allocating memory to dissimilar memory devices. An exemplary embodiment comprises a method for allocating memory to dissimilar memory devices. An interleave bandwidth ratio is determined, which comprises a ratio of bandwidths for two or more dissimilar memory devices. The dissimilar memory devices are interleaved according to the interleave bandwidth ratio to define two or more memory zones having different performance levels. Memory address requests are allocated to the memory zones based on a quality of service (QoS). BRIEF DESCRIPTION OF THE DRAWINGS [0007] In the Figures, like reference numerals refer to like parts throughout the various views unless otherwise indicated. For reference numerals with letter character designations such as "102A" or "102B", the letter character designations may differentiate two like parts or elements present in the same Figure. Letter character designations for reference numerals may be omitted when it is intended that a reference numeral encompass all parts having the same reference numeral in all Figures. [0008] FIG. 1 is a block diagram of an embodiment of a system for managing dissimilar memory devices. [0009] FIG. 2 is a flowchart of an embodiment of a method performed by the memory channel optimization module in FIG. 1 for managing dissimilar memory devices. [0010] FIG. 3 is an exemplary table illustrating an interleave bandwidth ratio for various types of dissimilar memory devices. [0011] FIG. 4 is a block diagram illustrating components of the memory channel optimization module of FIG. 1. [0012] FIG. 5 is an exemplary table illustrating a memory channel address remapping based on various interleave bandwidth ratios. [0013] FIG. 6 is a combined flow/block diagram illustrating the general operation, architecture, and functionality of an embodiment of the channel remapping module of FIG. 4. [0014] FIG. 7 is a diagram illustrating an embodiment of an interleave method for creating multiple logical zones across dissimilar memory devices. [0015] FIG. 8 is a block diagram illustrating an exemplary implementation of the memory channel optimization module in a portable computing device.
[0016] FIG. 9 is a block diagram illustrating another embodiment of a system comprising the memory channel optimization module coupled to a high-level operating system (HLOS) for allocating memory to dissimilar memory devices. [0017] FIG. 10 is a block diagram illustrating an embodiment of the architecture and operation of the system of FIG. 9 for allocating memory to zones in a unified memory space via QoS provided by the HLOS. [0018] FIG. 11 is a block diagram illustrating another embodiment for allocating memory to zones in a unified memory space via a QoS monitor module integrated with the memory channel optimization module. [0019] FIG. 12 is a flowchart illustrating an embodiment of a method for dynamically allocating memory to dissimilar memory devices based on a QoS. [0020] FIG. 13 illustrates the diagram of FIG. 7 for allocating memory to the logical zones via a memory allocation function associated with the HLOS API. DETAILED DESCRIPTION [0021] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects. [0022] In this description, the term "application" may also include files having executable content, such as: object code, scripts, byte code, markup language files, and patches. In addition, an "application" referred to herein may also include files that are not executable in nature, such as documents that may need to be opened or other data files that need to be accessed. [0023] The term "content" may also include files having executable content, such as: object code, scripts, byte code, markup language files, and patches. In addition, "content" referred to herein may also include files that are not executable in nature, such as documents that may need to be opened or other data files that need to be accessed. [0024] As used in this description, the terms "component," "database," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device may be a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers. In addition, these components may execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes, such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems by way of the signal). [0025] In this description, the terms "communication device," "wireless device," "wireless telephone," "wireless communication device," and "wireless handset" are used interchangeably.
With the advent of third generation ("3G") and fourth generation ("4G") wireless technology, greater bandwidth availability has enabled more portable computing devices with a greater variety of wireless capabilities. Therefore, a portable computing device may include a cellular telephone, a pager, a PDA, a smartphone, a navigation device, or a hand-held computer with a wireless connection or link. [0026] FIG. 1 illustrates a system 100 comprising a memory management architecture that may be implemented in any suitable computing device having two or more dedicated processing units for accessing two or more memory devices of different types, or similar types of memory devices having different data bandwidths (referred to as "dissimilar memory devices"). The computing device may comprise a personal computer, a workstation, a server, a portable computing device (PCD), such as a cellular telephone, a portable digital assistant (PDA), a portable game console, a palmtop computer, or a tablet computer, and any other computing device with two or more dissimilar memory devices. As described below in more detail, the memory management architecture is configured to selectively provide two modes of operation: a unified mode and a discrete mode. In the discrete mode, the memory management architecture operates as a "discrete architecture" in the conventional manner as described above, in which each dedicated processing unit accesses a corresponding memory device optimized for its intended purpose. For example, a dedicated general purpose central processing unit (CPU) may access a first type of memory device at an optimized data bandwidth, and a higher-performance, dedicated graphics processing unit (GPU) may access a second type of memory device at a higher data bandwidth. In the unified mode, the memory management architecture is configured to unify the dissimilar memory devices and enable the dedicated processing units to selectively access, either individually or in combination, the combined bandwidth of the dissimilar memory devices or portions thereof. [0027] As illustrated in the embodiment of FIG. 1, the system 100 comprises a memory channel optimization module 102 electrically connected to two different types of dynamic random access memory (DRAM) devices 104a and 104b and two or more dedicated processing units (e.g., a CPU 108 and a GPU 106) that may access the DRAM devices 104a and 104b. GPU 106 is coupled to the memory channel optimization module 102 via an electrical connection 110. CPU 108 is coupled to the memory channel optimization module 102 via an electrical connection 112. The memory channel optimization module 102 further comprises a plurality of hardware connections for coupling to DRAM devices 104a and 104b. The hardware connections may vary depending on the type of memory device. In the example of FIG. 1, DRAM 104a supports four channels 114a, 114b, 114c, and 114d that connect to physical/control connections 116a, 116b, 116c, and 116d, respectively. DRAM 104b supports two channels 118a and 118b that connect to physical/control connections 120a and 120b, respectively. It should be appreciated that the number and configuration of the physical/control connections may vary depending on the type of memory device, including the size of the memory addresses (e.g., 32-bit, 64-bit, etc.).
[0028] FIG. 2 illustrates a method 200 executed by the memory channel optimization module 102 for implementing the unified mode of operation by interleaving the dissimilar memory devices (e.g., DRAM devices 104a and 104b). At block 202, the memory channel optimization module 102 determines an interleave bandwidth ratio comprising a ratio of the data bandwidths for the DRAM devices 104a and 104b. The data bandwidths may be determined upon boot-up of the computing device. [0029] In an embodiment, the interleave bandwidth ratio may be determined by accessing a data structure, such as the table 300 illustrated in FIG. 3. Table 300 identifies interleave bandwidth ratios for various combinations of types of dissimilar memory devices for implementing the two DRAM devices 104a and 104b. Columns 302 list various configurations for the DRAM device 104a. Rows 304 list various configurations for the DRAM device 104b. In this regard, each numerical data field identifies the interleave bandwidth ratio for the corresponding row/column configuration. For example, the first data field in the upper portion of table 300 is highlighted in black and lists an interleave bandwidth ratio of 2.00, which corresponds to a bandwidth of 12.8GB/s for the DRAM device 104a and a data bandwidth of 6.4GB/s for the DRAM device 104b. In FIG. 3, the DRAM devices 104a and 104b are optimized for use in a mobile computing system. DRAM device 104b comprises a low power double data rate (LPDDR) memory device, which may be conventionally optimized for use in the discrete mode for dedicated use by the CPU 108. The DRAM device 104a comprises a Wide I/O (WideIO) memory device, which may be conventionally optimized for use in the discrete mode for dedicated use by the GPU 106. In this regard, the numerical values identify the interleave bandwidth ratios for DRAM devices 104a and 104b according to variable performance parameters, such as the memory address bit size (x64, x128, x256, x512), clock speed (MHz), and data bandwidth (GB/s). The memory channel optimization module 102 may perform a look-up to obtain the interleave bandwidth ratio associated with the DRAM devices 104a and 104b. At block 202 in FIG. 2, the memory channel optimization module 102 may also determine the numerical data bandwidths (e.g., from table 300 or directly from the DRAM devices 104a and 104b) and then use this data to calculate the interleave bandwidth ratio.
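To make the block 202 arithmetic concrete, the following is a minimal sketch, assuming the ratio is derived directly from the two devices' boot-time bandwidths rather than looked up in table 300; the struct and function names are hypothetical, not from the disclosure.

```c
#include <stdio.h>

/* Hypothetical boot-time record for one memory device. */
struct mem_device {
    const char *name;
    double bandwidth_gbs; /* data bandwidth in GB/s, read at boot-up */
};

/* Block 202 (sketch): ratio of the first device's data bandwidth to the
 * second device's data bandwidth. */
static double interleave_bandwidth_ratio(const struct mem_device *a,
                                         const struct mem_device *b)
{
    return a->bandwidth_gbs / b->bandwidth_gbs;
}

int main(void)
{
    struct mem_device dev_a = { "wideio2", 12.8 };
    struct mem_device dev_b = { "lpddr3e", 6.4 };
    /* Matches the highlighted entry of table 300: 12.8 / 6.4 = 2.00 */
    printf("interleave bandwidth ratio = %.2f\n",
           interleave_bandwidth_ratio(&dev_a, &dev_b));
    return 0;
}
```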
[0030] It should be appreciated that the types of memory devices and performance parameters may be varied depending on the particular type of computing device, system applications, etc., in which the system 100 is being implemented. The example types and performance parameters illustrated in FIG. 3 are merely used in this description to describe an exemplary interleaving method performed by the memory channel optimization module 102 in a mobile system. Some examples of other random access memory technologies suitable for the channel optimization module 102 include NOR FLASH, EEPROM, EPROM, DDR-NVM, PSRAM, SRAM, PROM, and ROM. One of ordinary skill in the art will readily appreciate that various alternative interleaving schemes and methods may be performed. [0031] Referring again to FIG. 2, at block 204, the memory channel optimization module 102 interleaves the DRAM devices 104a and 104b according to the interleave bandwidth ratio determined in block 202. The interleaving process matches traffic on each of the memory channels 114a, 114b, 114c, and 114d and 118a and 118b, for DRAM devices 104a and 104b respectively, to the particular channel's available bandwidth. For example, if the DRAM device 104a has a data bandwidth of 34GB/s and the DRAM device 104b has a data bandwidth of 17GB/s, the interleave bandwidth ratio is 2:1. This means that the data rate of the DRAM device 104a is twice the data rate of the DRAM device 104b. [0032] As illustrated in FIG. 4, the memory channel optimization module 102 may comprise one or more channel remapping module(s) 400 for configuring and maintaining a virtual address mapping table for DRAM devices 104a and 104b according to the interleave bandwidth ratio and for distributing traffic to the DRAM devices 104a and 104b according to the interleave bandwidth ratio. An exemplary address mapping table 500 is illustrated in FIG. 5. Address mapping table 500 comprises a list of address blocks 502 (which may be of any size) with corresponding channel and/or memory device assignments based on the interleave bandwidth ratio. For example, in FIG. 5, column 504 illustrates an alternating assignment between DRAM device 104a ("wideio2") and DRAM device 104b ("lpddr3e") based on an interleave bandwidth ratio of 1:1. Even-numbered address blocks (N, N+2, N+4, N+6, etc.) are assigned to wideio2, and odd-numbered address blocks (N+1, N+3, N+5, etc.) are assigned to lpddr3e. [0033] Column 506 illustrates another assignment, for an interleave bandwidth ratio of 2:1. Because DRAM device 104a ("wideio2") has a data rate twice as fast as DRAM device 104b ("lpddr3e"), two consecutive address blocks are assigned to wideio2 for every one address block assigned to lpddr3e. For example, address blocks N and N+1 are assigned to wideio2. Block N+2 is assigned to lpddr3e. Blocks N+3 and N+4 are assigned to wideio2, and so on. Column 508 illustrates another assignment, for an interleave bandwidth ratio of 1:2, in which the assignment scheme is reversed because the DRAM device 104b ("lpddr3e") is twice as fast as DRAM device 104a ("wideio2"). [0034] Referring again to the flowchart of FIG. 2, at block 206, the GPU 106 and CPU 108 may access the interleaved memory, in a conventional manner, by sending memory address requests to the memory channel optimization module 102. As illustrated in FIG. 6, traffic may be received by channel remapping logic 600 as an input stream of requests 606, 608, 610, 612, 614, 616, etc., corresponding to address blocks N, N+1, N+2, N+3, N+4, N+5, etc. (FIG. 5). The channel remapping logic 600 is configured to distribute (block 208 - FIG. 2) the traffic to the DRAM devices 104a and 104b according to the interleave bandwidth ratio and the appropriate assignment scheme contained in address mapping table 500 (e.g., columns 504, 506, 508, etc.). [0035] Following the above example of a 2:1 interleave bandwidth ratio, the channel remapping logic 600 steers the requests 606, 608, 610, 612, 614, and 616 as illustrated in FIG. 6. Requests 606, 608, 612, and 614, for address blocks N, N+1, N+3, and N+4, respectively, may be steered to DRAM device 104a. Requests 610 and 616, for address blocks N+2 and N+5, respectively, may be steered to DRAM device 104b. In this manner, the incoming traffic from the GPU 106 and the CPU 108 may be optimally matched to the available bandwidth on any of the memory channels 114 for DRAM device 104a and/or the memory channels 118 for DRAM device 104b. This unified mode of operation enables the GPU 106 and the CPU 108 to individually and/or collectively access the combined bandwidth of the dissimilar memory devices rather than being limited to the "localized" high performance operation of the conventional discrete mode of operation.
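The column-506 assignment for a 2:1 ratio reduces to simple modular arithmetic over address block indices. Below is a minimal sketch with hypothetical names; the actual channel remapping logic 600 operates in hardware against the address mapping table 500.

```c
/* Steering sketch for a 2:1 interleave bandwidth ratio: out of every three
 * consecutive address blocks, the first two go to the faster device and the
 * third goes to the slower device (N, N+1 -> wideio2; N+2 -> lpddr3e; ...). */
enum target_device { DEV_WIDEIO2, DEV_LPDDR3E };

static enum target_device steer_block_2to1(unsigned long block_index)
{
    return (block_index % 3 == 2) ? DEV_LPDDR3E : DEV_WIDEIO2;
}

/* steer_block_2to1(2) and steer_block_2to1(5) yield DEV_LPDDR3E, matching
 * requests 610 and 616 for address blocks N+2 and N+5 in FIG. 6. */
```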
[0036] As mentioned above, the memory channel optimization module 102 may be configured to selectively enable either the unified mode or the discrete mode based on various desirable use scenarios, system settings, etc. Furthermore, it should be appreciated that portions of the dissimilar memory devices may be interleaved rather than interleaving the entire memory devices. FIG. 7 illustrates a multi-layer interleave technique that may be implemented by the memory channel optimization module 102 to create multiple "logical" devices or zones. Following the above example using a 2:1 interleave bandwidth ratio, the DRAM device 104a may comprise a pair of 0.5GB memory devices 702 and 704 having a high performance bandwidth of 34GB/s conventionally optimized for GPU 106. DRAM device 104b may comprise a 1GB memory device 706 and a 2GB memory device 708, each having a lower bandwidth of 17GB/s conventionally optimized for CPU 108. The multi-layer interleave technique may create two interleaved zones 710 and 712 and a non-interleaved zone 714. Zone 710 may be 4-way interleaved to provide a combined 1.5GB at a combined bandwidth of 102GB/s. Zone 712 may be 2-way interleaved to provide a combined 1.5GB at 34GB/s. Zone 714 may be non-interleaved to provide 1GB at 17GB/s. The multi-layer interleaving technique combined with the memory management architecture of system 100 may facilitate transitioning between interleaved and non-interleaved portions because the contents of interleaved zones 710 and 712 may be explicitly designated for evictable or migratable data structures and buffers, whereas the contents of non-interleaved zone 714 may be designated for processing, such as kernel operations and/or other low-memory processes.
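The zone layout just described can be summarized as a small descriptor table. The following is a minimal sketch, assuming the sizes and bandwidths of the 2:1 example; the struct and its fields are illustrative only.

```c
/* One logical zone produced by the multi-layer interleave technique. */
struct mem_zone {
    const char *name;
    int interleave_ways;  /* 1 means non-interleaved */
    double size_gb;
    double bandwidth_gbs; /* combined bandwidth presented by the zone */
};

/* The three zones of FIG. 7 under the 2:1 example above. */
static const struct mem_zone zones[] = {
    { "zone 710", 4, 1.5, 102.0 }, /* evictable/migratable buffers */
    { "zone 712", 2, 1.5,  34.0 }, /* evictable/migratable buffers */
    { "zone 714", 1, 1.0,  17.0 }, /* kernel and low-memory processes */
};
```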
[0037] As mentioned above, the memory channel optimization module 102 may be incorporated into any desirable computing system. FIG. 8 illustrates the memory channel optimization module 102 incorporated in an exemplary portable computing device (PCD) 800. The memory optimization module 102 may comprise a system-on-a-chip (SoC) or an embedded system that may be separately manufactured and incorporated into designs for the portable computing device 800. [0038] As shown, the PCD 800 includes an on-chip system 322 that includes a multicore CPU 402A. The multicore CPU 402A may include a zeroth core 410, a first core 412, and an Nth core 414. One of the cores may comprise, for example, the GPU 106, with one or more of the others comprising CPU 108. According to alternate exemplary embodiments, the CPU 402A may also be of a single-core type rather than one having multiple cores, in which case the CPU 108 and the GPU 106 may be dedicated processors, as illustrated in system 100. [0039] A display controller 328 and a touch screen controller 330 may be coupled to the GPU 106. In turn, the touch screen display 108 external to the on-chip system 322 may be coupled to the display controller 328 and the touch screen controller 330. [0040] FIG. 8 further shows that a video encoder 334, e.g., a phase alternating line (PAL) encoder, a sequential color a memoire (SECAM) encoder, or a national television system(s) committee (NTSC) encoder, is coupled to the multicore CPU 402A. Further, a video amplifier 336 is coupled to the video encoder 334 and the touch screen display 108. Also, a video port 338 is coupled to the video amplifier 336. As shown in FIG. 8, a universal serial bus (USB) controller 340 is coupled to the multicore CPU 402A. Also, a USB port 342 is coupled to the USB controller 340. Memory 404A and a subscriber identity module (SIM) card 346 may also be coupled to the multicore CPU 402A. Memory 404A may comprise two or more dissimilar memory devices (e.g., DRAM devices 104a and 104b), as described above. The memory channel optimization module 102 may be coupled to the CPU 402A (including, for example, a CPU 108 and GPU 106), and the memory 404A may comprise two or more dissimilar memory devices. The memory channel optimization module 102 may be incorporated as a separate system-on-a-chip (SoC) or as a component of SoC 322. [0041] Further, as shown in FIG. 8, a digital camera 348 may be coupled to the multicore CPU 402A. In an exemplary aspect, the digital camera 348 is a charge-coupled device (CCD) camera or a complementary metal-oxide semiconductor (CMOS) camera. [0042] As further illustrated in FIG. 8, a stereo audio coder-decoder (CODEC) 350 may be coupled to the multicore CPU 402A. Moreover, an audio amplifier 352 may be coupled to the stereo audio CODEC 350. In an exemplary aspect, a first stereo speaker 354 and a second stereo speaker 356 are coupled to the audio amplifier 352. FIG. 8 shows that a microphone amplifier 358 may also be coupled to the stereo audio CODEC 350. Additionally, a microphone 360 may be coupled to the microphone amplifier 358. In a particular aspect, a frequency modulation (FM) radio tuner 362 may be coupled to the stereo audio CODEC 350. Also, an FM antenna 364 is coupled to the FM radio tuner 362. Further, stereo headphones 366 may be coupled to the stereo audio CODEC 350. [0043] FIG. 8 further illustrates that a radio frequency (RF) transceiver 368 may be coupled to the multicore CPU 402A. An RF switch 370 may be coupled to the RF transceiver 368 and an RF antenna 372. As shown in FIG. 8, a keypad 374 may be coupled to the multicore CPU 402A. Also, a mono headset with a microphone 376 may be coupled to the multicore CPU 402A. Further, a vibrator device 378 may be coupled to the multicore CPU 402A. [0044] FIG. 8 also shows that a power supply 380 may be coupled to the on-chip system 322. In a particular aspect, the power supply 380 is a direct current (DC) power supply that provides power to the various components of the PCD 800 that require power. Further, in a particular aspect, the power supply is a rechargeable DC battery or a DC power supply that is derived from an alternating current (AC)-to-DC transformer connected to an AC power source. [0045] FIG. 8 further indicates that the PCD 800 may also include a network card 388 that may be used to access a data network, e.g., a local area network, a personal area network, or any other network. The network card 388 may be a Bluetooth network card, a WiFi network card, a personal area network (PAN) card, a personal area network ultra-low-power technology (PeANUT) network card, or any other network card well known in the art. Further, the network card 388 may be incorporated into a chip, i.e., the network card 388 may be a full solution in a chip and may not be a separate network card 388.
[0046] As depicted in FIG. 8, the touch screen display 108, the video port 338, the USB port 342, the camera 348, the first stereo speaker 354, the second stereo speaker 356, the microphone 360, the FM antenna 364, the stereo headphones 366, the RF switch 370, the RF antenna 372, the keypad 374, the mono headset 376, the vibrator 378, and the power supply 380 may be external to the on-chip system 322. [0047] FIGS. 9-13 illustrate various alternative embodiments of systems and methods for leveraging aspects of the remapping and interleaving solutions described above in connection with FIGS. 1-8 in a high-level operating system (HLOS) environment. It should be appreciated that the HLOS environment may provide a heterogeneous computing platform or a heterogeneous system architecture (HSA), such as those disclosed in HSA standards published by the HSA Foundation. The current standard, AMD I/O Virtualization Technology (IOMMU) Specification (Publication No. 48882, Revision 2.00, issued March 24, 2011), is hereby incorporated by reference in its entirety. [0048] As known in the art, a system based on an HSA may be configured to provide a unified view of the system memory. HSA permits developers to program at a higher abstraction level by, for example, using mainstream programming languages, abstracting away hardware specifics from the developer, and leaving the hardware-specific coding to be performed by the hardware vendor. However, there is no known solution for efficiently implementing an HSA in a system with dissimilar memory types or devices. [0049] It should be appreciated that the systems and methods described below in connection with FIGS. 9-13 generally provide a unique and desirable solution for supporting an HSA and/or an HLOS in a system comprising dissimilar memory types or devices, such as those described above. The systems and methods described below may provide higher performance, lower power, and lower costs by removing the existing need for all memories in the platform to be uniform. Furthermore, hardware developers may have the flexibility to combine, for example, both high- and low-cost memory devices and/or types in a computing device that adheres to the HSA standard. [0050] FIG. 9 illustrates a system 900 comprising a HLOS 902 in communication with the memory channel optimization module 102 and one or more applications 906 for dynamically allocating memory to dissimilar memory devices. The memory channel optimization module 102 may be generally configured and operated in the manner described above. The memory channel optimization module 102 is electrically connected to two or more dissimilar memory types or devices (e.g., DRAM 104a and 104b) and any number of processing units that may access the dissimilar memory devices. It should be appreciated that the processing units may include dedicated processing units (e.g., a CPU 108 and a GPU 106) or other programmable processors. GPU 106 is coupled to the memory channel optimization module 102 via an electrical connection 110. CPU 108 is coupled to the memory channel optimization module 102 via an electrical connection 112. One or more programmable processors (not shown) may be coupled to the memory channel optimization module 102 via corresponding connections. The dedicated processing units, the programmable processors, and any applications 906 accessing the dissimilar memory devices may be generally referred to as "clients" of the HLOS 902 and/or the memory channel optimization module 102.
[0051] The programmable processors may comprise digital signal processor(s) (DSPs) for special-purpose and/or general-purpose applications including, for example, video applications, audio applications, or any other applications 906. As mentioned above, the dedicated processing units, the applications 906, the HLOS 902, and/or the programmable processors may support heterogeneous computing platforms configured to support a heterogeneous system architecture (HSA). It should be appreciated that the HSA creates an improved processor design that exposes to the applications 906 the benefits and capabilities of mainstream programmable computing elements. With HSA, the applications 906 can create data structures in a single unified address space and can initiate work items in parallel on the hardware most appropriate for a given task. Sharing data between computing elements is as simple as sending a pointer. Multiple computing tasks can work on the same coherent memory regions, utilizing barriers and atomic memory operations as needed to maintain data synchronization. [0052] As described above in more detail, the memory channel optimization module 102 further comprises a plurality of hardware connections for coupling to the DRAM 104a and 104b. The hardware connections may vary depending on the type of memory devices. In an embodiment, the dissimilar memory devices comprise double data rate (DDR) memory devices that provide corresponding channels connecting to physical/control connections on the memory channel optimization module 102. It should be appreciated that the number and configuration of the physical/control connections may vary depending on the type of memory device, including the size of the memory addresses (e.g., 32-bit, 64-bit, etc.). [0053] The HLOS 902 comprises quality of service (QoS) monitor module(s) 904. The QoS monitor module(s) 904 provide QoS services to the applications 906 by guaranteeing and/or matching application memory requirements. The QoS services may be based on a programmer-declared QoS provided to the HLOS 902 via, for example, an application program interface (API) 1002 associated with the QoS monitor modules 904. In other embodiments, the HLOS 902 may determine an estimated QoS based on monitoring the memory access behavior and/or performance of the applications 906 (e.g., processes, threads, etc.). Further exemplary QoS values may be the memory bandwidth and/or latency requirements, or other memory performance metric(s), for the data to be allocated in the platform memory such that the application performing the data access is able to achieve the desired performance and quality. [0054] As illustrated in the embodiment of FIG. 10, the HLOS 902 supports interleaved memory access to the dissimilar memory devices addressed by a unified address space 1000. The unified address space 1000 may comprise one or more logical memory zones (e.g., memory zones 1004, 1006, and 1008). It should be appreciated that the unified address space 1000 and the memory zones 1004, 1006, and 1008 may be configured using the multi-layer interleave technique described above and illustrated in FIG. 7 to create multiple "logical" devices or memory zones. For example, revisiting the above example of FIG. 7, a 2:1 interleave bandwidth ratio may be employed. The DRAM device 104a may comprise a pair of 0.5GB memory devices 702 and 704 having a high performance bandwidth of 34GB/s conventionally optimized for GPU 106.
DRAM device 104b may comprise a 1GB memory device 706 and a 2GB memory device 708, each having a lower bandwidth of 17GB/s conventionally optimized for CPU 108. The multi-layer interleave technique may create two interleaved zones 710 and 712 and a non-interleaved zone 714. Zone 710 may be 4-way interleaved to provide a combined 1.5GB at a combined bandwidth of 102GB/s. Zone 712 may be 2-way interleaved to provide a combined 1.5GB at 34GB/s. Zone 714 may be non-interleaved to provide 1GB at 17GB/s. The multi-layer interleaving technique combined with the memory management architecture of system 100 may facilitate transitioning between interleaved and non-interleaved portions because the contents of interleaved zones 710 and 712 may be explicitly designated for evictable or migratable data structures and buffers, whereas the contents of non-interleaved zone 714 may be designated for processing, such as kernel operations and/or other low-memory processes. For purposes of FIG. 10, the memory zones 1004, 1006, and 1008 may correspond to zones 710, 712, and 714 of FIG. 7. Memory zones 1004, 1006, and 1008 may have different density and/or performance levels. [0055] The HLOS 902 integrated with the memory channel optimization module 102 provides an efficient memory allocation scheme. It should be appreciated that the HLOS 902 and/or the memory channel optimization module 102 may allocate memory to different application workloads with varying memory performance requirements throughout the device. The HLOS 902 is configured to properly manage the allocation/de-allocation of memory components of varying performance requirements for efficient utilization of the hardware platform. [0056] The QoS monitoring module 904 may allow virtual memory to be dynamically allocated and freed from one or more of the memory zones 1004, 1006, and 1008. In an embodiment, the QoS monitoring module 904 may assign higher performing zones to tasks/threads associated with applications 906 that request or otherwise receive higher performance. The QoS monitoring module 904 may assign lower performing zones to tasks/threads that do not request higher performance. Furthermore, the QoS monitoring module 904 may dynamically control memory allocation to fall back from, for example, a first requested zone type to a second or third choice (a minimal sketch of such a fallback policy appears below). [0057] The QoS monitoring module 904 may be further configured to audit and migrate or evict processes from higher performing zones based on the credentials of each process and how desirable it is for that process to exist in that zone. Processes may be audited and migrated or evicted from zones that could be deleted, powered down, etc., thereby offering system power reduction during a sleep mode. The QoS monitoring module 904 may periodically monitor the applications 906 and, based on the monitored performance, evaluate and recommend modifications to the zoning configurations.
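The fallback behavior can be expressed compactly. Below is a minimal sketch, assuming zones are ordered fastest-first and that a single bandwidth figure stands in for the full QoS; the names and the selection heuristic are hypothetical, not the patent's mechanism.

```c
#include <stddef.h>

struct zone_state {
    double bandwidth_gbs; /* performance level of the zone */
    size_t free_bytes;    /* space currently available in the zone */
};

/* Pick the first zone (fastest first) that both fits the request and meets
 * the requested bandwidth; otherwise fall back to the fastest zone that at
 * least fits. Returns the zone index, or -1 if nothing fits. */
static int pick_zone(const struct zone_state *zones, int nzones,
                     double required_gbs, size_t bytes)
{
    int fallback = -1;
    for (int i = 0; i < nzones; i++) {
        if (zones[i].free_bytes < bytes)
            continue;
        if (zones[i].bandwidth_gbs >= required_gbs)
            return i;     /* first choice: meets the declared QoS */
        if (fallback < 0)
            fallback = i; /* second/third choice when the QoS can't be met */
    }
    return fallback;
}
```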
[0058] The QoS monitoring module 904 may be configured to accept QoS requests or hints when allocating memory for application code. It should be appreciated that various QoS or related parameters may be monitored by the QoS monitoring module 904 and may indicate, for example, the performance level or the nature of access to the allocated region (e.g., streaming high-throughput access in large contiguous chunks, or discrete random access in small chunks). [0059] The QoS monitoring module 904 may translate the QoS parameter(s) and map them to a particular memory type or memory zone. For instance, random access may call for low memory access latency for efficient execution of the application code, whereas streaming high-throughput application code may benefit from high memory bandwidth. The QoS parameters may include direct real-time values, such as, for example, "memory access latency < x nsec". In the embodiment of FIG. 10, in which the HLOS 902 includes the API 1002, the QoS parameters may be an optional argument to a memory allocation library. [0060] The QoS monitoring module 904 may be configured to augment a memory management module in the kernel to keep track of the dynamic usage of the different types of heterogeneous memory. The augmented memory management module may determine the appropriate allocation of the requested memory to one of the memory zones 1004, 1006, and 1008 based on QoS hints. [0061] It should be appreciated that QoS values need not be used. In the absence of any QoS values, the QoS monitoring module 904 may determine the appropriate memory zones for allocation of application-requested memory based on initial runtime performance. The memory allocations may be dynamically shifted from one of the zones 1004, 1006, and 1008 to another if, for example, the runtime performance of the application 906 is impacted by the current memory zone allocation. In the absence of a QoS parameter, the QoS monitoring module 904 may keep track of the memory access performance of a process and/or thread by tracking whether the accesses are relatively large contiguous chunks or random accesses. The time gap between each access burst may be used to estimate the QoS parameter (a sketch of such an estimator appears below). [0062] The QoS monitoring module 904 may be further configured to swap the allocated memory for a particular process or thread to the memory zone that optimally matches the estimated QoS while the particular process/thread is in a pending/wait stage. Swapping the allocated memory to a different zone may be avoided during a run state to trade off overhead during active execution. [0063] In embodiments implementing an estimated QoS, the QoS monitoring module 904 may be configured to match the currently allocated memory zone. The QoS monitoring module 904 may monitor the thread/process for future changes in memory access behavior. The frequency of the monitoring process may be varied as desired. Alternatively, the QoS monitoring module 904 may eliminate further monitoring based on the overall activity of system 900 to reduce the overhead of the monitoring process. [0064] It should be appreciated that various hardware structures may be implemented that are configured to extract the memory access behavior/pattern of a process/thread for the purpose of determining the estimated QoS and mapping the memory allocation to the appropriate memory zone. Furthermore, memory zone allocation can be made more granular, in that different allocations within a particular process/thread could be allocated to different memory zones so that the QoS satisfies a broader range. For example, some components may be better suited to high-bandwidth streaming data that can tolerate higher latency but needs high throughput, as compared to, for example, fast random access with low bandwidth.
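One way to picture the estimation of paragraph [0061] is the sketch below, which classifies a thread by its average contiguous chunk size and the gap between access bursts. All thresholds, names, and the classification rule are invented for illustration; the patent does not fix specific values.

```c
/* Per-thread access statistics gathered by a monitor (illustrative). */
struct access_stats {
    double avg_chunk_bytes;  /* average size of contiguous accesses */
    double avg_burst_gap_us; /* mean idle time between access bursts */
};

enum est_qos {
    QOS_HIGH_BANDWIDTH, /* streaming: large contiguous chunks */
    QOS_LOW_LATENCY,    /* tight random access in small chunks */
    QOS_DEFAULT
};

static enum est_qos estimate_qos(const struct access_stats *s)
{
    if (s->avg_chunk_bytes > 64.0 * 1024.0)
        return QOS_HIGH_BANDWIDTH; /* favors a high-bandwidth zone */
    if (s->avg_burst_gap_us < 10.0)
        return QOS_LOW_LATENCY;    /* favors a low-latency zone */
    return QOS_DEFAULT;
}
```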
[0065] FIG. 11 illustrates another embodiment of a system 1100 for integrating one or more of the QoS services described above with the memory channel optimization module 102. This approach may be desirable for accommodating legacy applications 906 that may not be compatible with a QoS solution provided by the HLOS 902. In this embodiment, the memory channel optimization module 102 further comprises the QoS monitoring module(s) 904, which are operatively coupled to the channel remapping module(s) 400 described above. [0066] FIG. 12 illustrates a method 1200 for dynamically allocating memory in either the system 900 (FIG. 9) or the system 1100 (FIG. 11) according to the interleaving and remapping approaches described above. At block 1202, an interleave bandwidth ratio is determined. As described above, the interleave bandwidth ratio may comprise a ratio of bandwidths for the two or more dissimilar memory types or devices. At block 1204, the dissimilar memory types or devices are interleaved according to the interleave bandwidth ratio determined at block 1202. Any of the above-described or other interleaving approaches may be implemented to define two or more memory zones (e.g., zones 1004, 1006, and 1008), with each memory zone having a different performance level and/or density level. At block 1206, the HLOS 902 and/or the memory channel optimization module 102 may receive memory address requests from the applications 906 (or other clients). In response, memory is allocated to the appropriate memory zone based on either a declared QoS (e.g., via API 1002) or an estimated QoS. [0067] In the embodiment illustrated in FIG. 13, the declared QoS may be implemented using a "malloc" (i.e., memory allocation) function corresponding to the API 1002. Following the above example (FIG. 7) using a 2:1 interleave bandwidth ratio, the DRAM device 104a may comprise a pair of 0.5GB memory devices 702 and 704 having a high performance bandwidth of 34 GB/s conventionally optimized for GPU 106. DRAM device 104b may comprise a 1GB memory device 706 and a 2GB memory device 708, each having a lower bandwidth of 17GB/s conventionally optimized for CPU 108. The multi-layer interleave technique may create two interleaved zones 710 and 712 and a non-interleaved zone 714. Zone 710 may be 4-way interleaved to provide a combined 1.5GB at a combined bandwidth of 102 GB/s. Zone 712 may be 2-way interleaved to provide a combined 1.5GB at 34 GB/s. Zone 714 may be non-interleaved to provide 1GB at 17GB/s. It should be appreciated that the QoS may be applied to all different variants of memory allocation functions, and that "malloc" is used merely as one possible example. [0068] A first malloc function 1302 may be used for declaring a first QoS associated with, for example, the 4-way interleaved memory zone 710. A second malloc function 1304 may be used for declaring a second QoS associated with, for example, the 2-way interleaved zone 712. A third malloc function 1306 may be used for declaring a third QoS associated with, for example, the non-interleaved zone 714.
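The zone bandwidths above are consistent with summing the interleaved channel bandwidths (34 + 34 + 17 + 17 = 102 GB/s for the 4-way zone 710; 17 + 17 = 34 GB/s for the 2-way zone 712). The three malloc functions of FIG. 13 might then look roughly like the following C sketch; the function names, the zone identifiers, and the underlying zone_alloc helper are hypothetical stand-ins for whatever API 1002 actually exposes.

    #include <stddef.h>

    /* Hypothetical zone identifiers mirroring FIG. 7. */
    enum qos_zone { ZONE_4WAY = 710, ZONE_2WAY = 712, ZONE_NONE = 714 };

    /* Assumed low-level allocator that carves memory out of one zone;
     * its existence is an assumption, not part of the specification. */
    extern void *zone_alloc(enum qos_zone zone, size_t size);

    /* Counterparts to malloc functions 1302, 1304, and 1306. */
    void *malloc_hi_bw(size_t size)  { return zone_alloc(ZONE_4WAY, size); } /* 102 GB/s */
    void *malloc_mid_bw(size_t size) { return zone_alloc(ZONE_2WAY, size); } /*  34 GB/s */
    void *malloc_lo_bw(size_t size)  { return zone_alloc(ZONE_NONE, size); } /*  17 GB/s */

A GPU texture buffer might then be requested with malloc_hi_bw, while a rarely touched log buffer would use malloc_lo_bw, leaving the higher performing zones free for workloads that declare a need for them.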
[0069] It should be appreciated that one or more of the method steps described herein may be stored in the memory as computer program instructions, such as the modules described above. These instructions may be executed by any suitable processor in combination or in concert with the corresponding module to perform the methods described herein. [0070] Certain steps in the processes or process flows described in this specification naturally precede others for the invention to function as described. However, the invention is not limited to the order of the steps described if such order or sequence does not alter the functionality of the invention. That is, it is recognized that some steps may be performed before, after, or in parallel with (substantially simultaneously with) other steps without departing from the scope and spirit of the invention. In some instances, certain steps may be omitted or not performed without departing from the invention. Further, words such as "thereafter", "then", "next", etc. are not intended to limit the order of the steps. These words are simply used to guide the reader through the description of the exemplary method. [0071] Additionally, one of ordinary skill in programming is able to write computer code or identify appropriate hardware and/or circuits to implement the disclosed invention without difficulty based on the flow charts and associated description in this specification, for example. [0072] Therefore, disclosure of a particular set of program code instructions or detailed hardware devices is not considered necessary for an adequate understanding of how to make and use the invention. The inventive functionality of the claimed computer implemented processes is explained in more detail in the above description and in conjunction with the Figures, which may illustrate various process flows. [0073] In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that may be accessed by a computer. By way of example, and not limitation, such computer-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to carry or store desired program code in the form of instructions or data structures and that may be accessed by a computer. [0074] Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line ("DSL"), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. [0075] Disk and disc, as used herein, include compact disc ("CD"), laser disc, optical disc, digital versatile disc ("DVD"), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. [0076] Alternative embodiments will become apparent to one of ordinary skill in the art to which the invention pertains without departing from its spirit and scope. Therefore, although selected aspects have been illustrated and described in detail, it will be understood that various substitutions and alterations may be made therein without departing from the spirit and scope of the present invention, as defined by the following claims. |
An improved atomic layer doping apparatus is disclosed as having multiple doping regions in which individual monolayer species are first deposited and then dopant atoms contained therein are diffused into the substrate. Each doping region is chemically separated from adjacent doping regions. A loading assembly is programmed to follow pre-defined transfer sequences for moving semiconductor substrates into and out of the respective adjacent doping regions. Depending on the number of doping regions provided, a plurality of substrates can be simultaneously processed and run through the cycle of doping regions until a desired doping profile is obtained. |
What is claimed as new and desired to be protected by Letters Patent of the United States is: 1. A method of operating an atomic layer doping apparatus, said doping apparatus comprising a plurality of doping regions, said doping regions being chemically isolated from one another, said method comprising the steps of: positioning a plurality of wafers in respective doping regions; introducing a first dopant gas species into some of said plurality of doping regions and depositing said first dopant gas species on at least one of said plurality of wafers as a first atomic monolayer, said first atomic monolayer comprising dopant atoms of said first dopant gas species; moving said plurality of wafers from said some of said plurality of doping regions to other doping regions; and introducing a second gas species into said other doping regions and contacting said second gas species on at least one of said plurality of wafers to introduce said dopant atoms into said at least one of said plurality of wafers. 2. The method of claim 1 further comprising the act of sequentially moving said plurality of wafers through at least two of said plurality of doping regions in accordance with a predefined pattern. 3. The method of claim 1 wherein said second gas species is a non-reactive plasma. 4. The method of claim 1 further comprising the act of annealing said at least one of said plurality of wafers. 5. The method of claim 1 further comprising the act of sequentially moving said plurality of wafers through all said doping regions. 6. The method of claim 1 further comprising the act of sequentially moving said plurality of wafers through predetermined regions of said doping regions. 7. A method of conducting atomic layer doping comprising the steps of: depositing a first atomic monolayer including atoms of a first dopant species on a substrate in a first doping region; moving said substrate from said first doping region to a second doping region, which is chemically isolated from said first doping region, for depositing a second atomic monolayer including atoms of a second dopant species on said substrate; and moving said substrate from said second doping region to a third doping region, which is chemically isolated from said first and second doping regions, for introducing said atoms of said first and second dopant species into said substrate. 8. The method of claim 7, wherein said act of introducing said atoms of said first and second dopant species into said substrate further comprises introducing a non-reactive plasma into said third doping region and contacting said non-reactive plasma with said first and second atomic monolayers. 9. The method of claim 7, wherein said act of introducing said atoms of said first and second dopant species into said substrate further comprises heating said substrate so that said atoms diffuse into a surface region of said substrate. 10. The method of claim 7 further comprising the act of annealing said substrate. 11. The method of claim 7 further comprising the act of sequentially moving said substrate back and forth between said first, second and third doping regions. |
FIELD OF THE INVENTION
The present invention relates to the field of semiconductor integrated circuits and, in particular, to an improved method for doping wafers.

BACKGROUND OF THE INVENTION
Incorporation of dopants or chosen impurities into a semiconductor material, commonly known as doping, is well known in the art. Thermal diffusion and ion implantation are two methods currently used to introduce a controlled amount of dopants into selected regions of a semiconductor material.

Doping by thermal diffusion is a two-step process. In the first step, called predeposition, the semiconductor is either exposed to a gas stream containing excess dopant at low temperature to obtain a surface region saturated with the dopant, or a dopant is diffused into a thin surface layer from a solid dopant source coated onto the semiconductor surface. The predeposition step is followed by the drive-in step, during which the semiconductor is heated at high temperatures in an inert atmosphere so that the dopant in the thin surface layer of the semiconductor is diffused into the interior of the semiconductor, and thus the predeposited dopant atoms are redistributed to a desired doping profile.

Ion implantation is preferred over thermal diffusion because of the capability of ion implantation to control the number of implanted dopant atoms, and because of its speed and the reproducibility of the doping process. The ion implantation process employs ionized projectile atoms that are introduced into solid targets, such as a semiconductor substrate, with enough kinetic energy (3 to 500 keV) to penetrate beyond the surface regions. A typical ion implant system uses a gas source of dopant, such as BF3, PF3, SbF3, or AsH3, for example, which is energized at a high potential to produce an ion plasma containing dopant atoms. An analyzer magnet selects only the ion species of interest and rejects the rest of the species. The desired ion species are then injected into an accelerator tube, so that the ions are accelerated to a high enough velocity to acquire a threshold momentum to penetrate the wafer surface when they are directed to the wafers.

Although ion implantation has many advantages, such as the ability to offer precise dopant concentrations, for example, for silicon of about 10^14 to 10^21 atoms/cm^3, there are various problems associated with this doping method. For example, a major drawback of ion implantation is radiation damage, which occurs because of the bombardment involved with heavy particles and further affects the electrical properties of the semiconductor. The most common radiation damage is the vacancy-interstitial defect, which occurs when an incoming dopant ion knocks substrate atoms from a lattice site and the newly dislocated atoms rest in a non-lattice position. Further, most of the doping atoms are not electrically active right after implantation, mainly because the dopant atoms do not end up on regular, active lattice sites. By a suitable annealing method, however, the crystal lattice can be fully restored and the introduced dopant atoms are brought to electrically active lattice sites by diffusion.

Ion channeling is another drawback of ion implantation that can also change the electrical characteristics of a doped semiconductor. Ion channeling occurs when the major axis of the crystal wafer aligns with the ion beam and ions travel down the channels, reaching a depth as much as ten times the calculated depth.
Thus, a significant amount of additional dopant atoms gather in the channels of the major axis. Ion channeling can be minimized by several techniques, such as employing a blocking amorphous surface layer or misorienting the wafer so that the dopant ions enter the crystal wafer at angles other than 90°. For example, misorientation of the wafer 3 to 7° off the major axis prevents the dopant ions from entering the channels. However, these methods increase the use of the expensive ion-implant machine and, thus, could be very costly for batch processing.

Another disadvantage of the conventional doping methods is autodoping. After dopants are incorporated into a crystalline wafer to form various junctions, the wafer undergoes many subsequent processing steps for device fabrication. Although efforts are made to use low-temperature processing techniques to minimize redistribution of incorporated dopant atoms, the dopants still redistribute during the course of further processing. For example, this redistribution of dopants becomes extremely important when an epitaxial film is grown over the top of the doped area, particularly because of the high temperature required for epitaxial growth. At high temperatures, the dopant diffuses into the growing epitaxial film during the epitaxial growth, and this phenomenon is referred to as autodoping. This phenomenon also leads to unintentional doping of the film in between the doped regions, or into the nondiffused substrate. For this reason, integrated circuit designers must leave adequate room between adjacent regions to prevent the laterally diffused regions from touching and shorting.

Furthermore, current doping systems employ batch processing, in which wafers are processed in parallel and at the same time. An inherent disadvantage of batch processing is cross contamination of the wafers from batch to batch, which further decreases the process control and repeatability, and eventually the yield, reliability and net productivity of the doping process.

Accordingly, there is a need for an improved doping system, which will permit minimal dopant redistribution, precise control of the number of implanted dopants, higher commercial productivity and improved versatility. There is also needed a new and improved doping system and method that will eliminate the problems posed by current batch processing technologies, as well as a method and system that will allow greater uniformity and doping process control with respect to the layer thickness necessary for increasing density of integration in microelectronics circuits.

SUMMARY OF THE INVENTION
The present invention provides an improved atomic layer doping system and method for wafer processing. The present invention contemplates an apparatus provided with multiple doping regions in which individual monolayers of dopant species are first deposited by atomic layer deposition (ALD) on a wafer and then the respective dopants are diffused, by thermal reaction, for example, into the wafer surface. Each doping region of the apparatus is chemically isolated from the other doping regions, for example, by an inert gas curtain. A robot is programmed to follow pre-defined transfer sequences to move wafers into and out of respective doping regions for processing. Since multiple regions are provided, a multitude of wafers can be simultaneously processed in respective regions, each region depositing only one monolayer dopant species and subsequently diffusing the dopant into the wafer.
Each wafer can be moved through the cycle of regions until a desired doping concentration and profile is reached.

The present invention allows for the atomic layer doping of wafers with higher commercial productivity and improved versatility. Since each region may be provided with a pre-determined set of processing conditions tailored to one particular monolayer dopant species, cross contamination is also greatly reduced.

These and other features and advantages of the invention will be apparent from the following detailed description, which is provided in connection with the accompanying drawings, which illustrate exemplary embodiments of the invention.

BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates a schematic top view of a multiple-chamber atomic layer doping apparatus according to the present invention.
FIG. 2 is a partial cross-sectional view of the atomic layer doping apparatus of FIG. 1, taken along line 2-2' and depicting two adjacent doping regions according to a first embodiment of the present invention and depicting one wafer transfer sequence.
FIG. 3 is a partial cross-sectional view of the atomic layer doping apparatus of FIG. 1, taken along line 2-2' and depicting two adjacent doping regions according to a second embodiment of the present invention.
FIG. 4 is a partial cross-sectional view of the atomic layer doping apparatus of FIG. 2, depicting a physical barrier between two adjacent doping chambers.
FIG. 5 is a schematic top view of a multiple-chamber atomic layer doping apparatus according to the present invention and depicting a second wafer transfer sequence.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
In the following detailed description, reference is made to various exemplary embodiments of the invention. These embodiments are described with sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be employed, and that structural and electrical changes may be made without departing from the spirit or scope of the present invention.

The term "substrate" used in the following description may include any semiconductor-based structure. The structure must be understood to include silicon, silicon-on-insulator (SOI), silicon-on-sapphire (SOS), doped and undoped semiconductors, epitaxial layers of silicon supported by a base semiconductor foundation, and other semiconductor structures. The semiconductor need not be silicon-based. The semiconductor could be silicon-germanium, germanium, or gallium arsenide. When reference is made to a substrate in the following description, previous process steps may have been utilized to form regions or junctions in or on the base semiconductor or foundation.

The term "dopant" is intended to include not only elemental dopant atoms, but dopant atoms with other trace elements or in various combinations with other elements as known in the semiconductor art, as long as such combinations retain the physical and chemical properties of the dopant atoms. The term "p-type dopant" used in the following description may include any p-type impurity ions, such as zinc (Zn), magnesium (Mg), beryllium (Be), boron (B), gallium (Ga) or indium (In), among others. The term "n-type dopant" may include any n-type impurity ions, such as silicon (Si), sulfur (S), tin (Sn), phosphorus (P), arsenic (As) or antimony (Sb), among others.

The present invention provides an atomic layer doping method and apparatus.
As will be described in more detail below, the apparatus is provided with multiple doping regions in which individual monolayer dopant species are first deposited on a substrate and then dopant atoms corresponding to each of the monolayer species are diffused into respective substrates. Each doping region is chemically separated from the adjacent doping regions. A robot is programmed to follow pre-defined transfer sequences for moving wafers into and out of the respective adjacent doping regions. According to the number of doping regions provided, a multitude of substrates could be simultaneously processed and run through the cycle of different doping regions until a desired doping concentration of a wafer surface is obtained.

The present invention provides a simple and novel multi-chamber system for atomic layer doping processing. Although the present invention will be described below with reference to the atomic layer deposition of a dopant species Ax and the subsequent diffusion of its dopant atoms into a wafer, it must be understood that the present invention has equal applicability for the formation of any doped material capable of being formed by atomic layer doping techniques using any number of species, where each dopant species is deposited in a reaction chamber dedicated thereto.

A schematic top view of a multiple-chamber atomic layer doping apparatus 100 of the present invention is shown in FIG. 1. According to an exemplary embodiment of the present invention, doping regions 50a, 50b, 52a, 52b, 54a, and 54b are alternately positioned around a loading mechanism 60, for example a robot. These doping regions may be any regions for the atomic layer doping treatment of substrates. The doping regions may be formed as cylindrical reactor chambers 50a, 50b, 52a, 52b, 54a, and 54b, in which adjacent chambers are chemically isolated from one another.

To facilitate wafer movement, and assuming that only one monolayer of a dopant species Ax is to be deposited per cycle, the reactor chambers are arranged in pairs 50a, 50b; 52a, 52b; 54a, 54b. One such pair, 50a, 50b, is shown in FIG. 2. While one of the reactor chambers of a pair, for example 50a, deposits one monolayer of the dopant species Ax, the other reactor chamber of the pair, for example 50b, facilitates subsequent diffusion of the dopant atoms of species Ax into the wafer to complete the doping process. The adjacent reactor chamber pairs are chemically isolated from one another, for example by a gas curtain, which keeps the monolayer of dopant species Ax in a respective region, for example 50a, and which allows wafers treated in one reaction chamber, for example 50a, to be easily transported by the robot 60 to the other reaction chamber 50b, and vice versa. Simultaneously, the robot can also move wafers between chambers 52a and 52b, and between chambers 54a and 54b. In order to chemically isolate the paired reaction chambers 50a, 50b; 52a, 52b; and 54a, 54b, the paired reaction chambers share a wall through which the wafers may pass, with the gas curtain acting in effect as a chemical barrier preventing the gas mixture within one chamber, for example 50a, from entering the paired adjacent chamber, for example 50b.
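To picture the pre-defined transfer sequences described above, the following C sketch cycles one wafer between the paired deposition chamber 50a and diffusion chamber 50b; the chamber handles, the robot_move primitive, and the termination test are hypothetical stand-ins for the actual control software of the loading assembly 60.

    #include <stdbool.h>

    /* Hypothetical chamber identifiers for one pair of FIG. 1. */
    enum chamber { CHAMBER_50A, CHAMBER_50B };

    /* Assumed robot and process primitives; these stand in for whatever
     * the loading assembly 60 actually exposes. */
    extern void robot_move(int wafer, enum chamber to);
    extern void deposit_monolayer(enum chamber c);   /* ALD of species Ax */
    extern void diffuse_dopant(enum chamber c);      /* thermal drive-in  */
    extern bool profile_reached(int wafer);          /* metrology check   */

    /* One wafer's back-and-forth cycle between chambers 50a and 50b,
     * repeated until the desired doping profile is obtained. */
    void dope_wafer(int wafer)
    {
        while (!profile_reached(wafer)) {
            robot_move(wafer, CHAMBER_50A);
            deposit_monolayer(CHAMBER_50A);
            robot_move(wafer, CHAMBER_50B);
            diffuse_dopant(CHAMBER_50B);
        }
    }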
It should be noted that, when a particular doping concentration and/or profile is required, the robot can simply move wafers back and forth between the adjacent chambers, for example 50a, 50b, until the desired doping profile and/or concentration of the wafer is obtained.

It should also be noted that, while two adjacent chambers have been illustrated for doping of a substrate using monolayers of dopant species Ax, one or more additional chambers, for example 50c, 52c, 54c, may also be used for deposition of additional respective monolayers of dopant species, such as a second dopant species Cz, for example, with the additional chambers being chemically isolated from the chambers depositing the Ax monolayer dopant species in the same way the chambers for depositing the Ax species are chemically isolated.

The loading assembly 60 of FIG. 1 may include an elevator mechanism along with a wafer supply mechanism. As is well known in the art, the supply mechanism may be further provided with clamps and pivot arms, so that a wafer 55 can be maneuvered by the robot and positioned according to the requirements of the atomic layer doping processing described in more detail below.

Further referring to FIG. 1, a processing cycle for atomic layer doping on a wafer 55 begins by selectively moving a first wafer 55 from the loading assembly 60 to the reactor chamber 50a, in the direction of arrow A1 (FIG. 1). Similarly, a second wafer 55' may be selectively moved by the loading assembly 60 to the reactor chamber 52a, in the direction of arrow A2. Further, a third wafer 55'' is also selectively moved by the loading assembly 60 to the reactor chamber 54a, in the direction of arrow A3. At this point, each of chambers 50a, 52a, and 54a is ready for atomic layer deposition of a monolayer of a dopant species, for example Ax.

FIG. 2 illustrates a cross-sectional view of the apparatus 100 of FIG. 1, taken along line 2-2'. For simplicity, FIG. 2 shows only a cross-sectional view of adjacent reactor chambers 50a and 50b. In order to deposit an atomic monolayer on the wafer 55, the wafer 55 is placed inside the reactor chamber 50a, which may be provided as a quartz or aluminum container 120. The wafer 55 is placed by the loading assembly 60 (FIG. 1) onto a susceptor 140a (FIG. 2), which in turn is situated on a heater assembly 150a. Mounted on the upper wall of the reactor chamber 50a is a dopant gas supply inlet 160a, which is further connected to a dopant gas supply source 162a for a first dopant gas precursor Ax. An exhaust outlet 180a, connected to an exhaust system 182a, is situated on the wall opposite the dopant gas supply inlet 160a. The wafer 55 is positioned on top of the susceptor 140a (FIG. 2) by the loading assembly 60, and then the first dopant gas precursor Ax is supplied into the reactor chamber 50a through the dopant gas inlet 160a. The first dopant gas precursor Ax flows at a right angle onto the wafer 55 and reacts with its top substrate surface to form a first monolayer 210a of the first dopant species Ax, by an atomic layer deposition mechanism. Preferred gas sources of dopants are hydride forms of the dopant atoms, such as arsine (AsH3) and diborane (B2H6). These gases are mixed in different dilutions in pressurized containers, such as the dopant gas supply source 162a (FIG. 2), and connected directly to the dopant gas inlets, such as the dopant gas inlet 160a (FIG. 2).
Gas sources offer the advantage of precise control through pressure regulators and are favored for deposition on larger wafers.

Alternatively, a liquid source of dopant, such as chlorinated or brominated compounds of the desired element, may be used. When a liquid source of dopant is used, a boron liquid source, for example boron tribromide (BBr3), or a phosphorous liquid source, for example phosphorous oxychloride (POCl3), may be held in temperature-controlled flasks over which an inert gas, such as nitrogen (N2), is bubbled through the heated liquid, so that the gas becomes saturated with dopant atoms. The inert gas carries the dopant vapors through a gas tube and creates a laminar flow of dopant atoms. A reaction gas is also required to create the elemental dopant form in the tube. For BBr3, for example, the reaction gas is oxygen, which creates boron trioxide (B2O3), which further deposits as a monolayer of boron trioxide on the surface of the wafer.

In any event, after the deposition of a monolayer of the first dopant species Ax on the surface of the wafer 55, the processing cycle for the wafer 55 continues with the transfer of the wafer 55 from the reactor chamber 50a to the reactor chamber 50b, in the direction of arrow B1, as also illustrated in FIG. 1. After the deposition of the first monolayer 210a of the first dopant species Ax, the wafer 55 is moved from the reactor chamber 50a, through a gas curtain 300 (FIG. 2), to the reactor chamber 50b, by the loading assembly 60 (FIG. 1) and in the direction of arrow B1 of FIG. 2. It is important to note that the gas curtain 300 provides chemical isolation between adjacent deposition regions.

The loading assembly 60 moves the wafer 55 through the gas curtain 300, onto the susceptor 140b situated in the reactor chamber 50b, which, in contrast with the reactor chamber 50a, contains no dopant source and no dopant species. A heater assembly 150b is positioned under the susceptor 140b to facilitate the diffusion of the dopant atoms from the newly deposited first monolayer 210a of the first dopant species Ax into the wafer 55. The heat from the heater assembly 150b drives the dopant atoms into the wafer 55 and further redistributes the dopant atoms from the first monolayer 210a deeper into the wafer 55 to form a doped region 210b of the first dopant species Ax. During this step, the surface concentration of dopant atoms is reduced and the distribution of dopant atoms continues, so that a precise and shallow doping distribution in the doped region 210b of the wafer 55 is obtained. Accordingly, the depth of the doped region 210b of the wafer 55 is controlled, first, by the repeatability of the atomic layer deposition for the monolayers of dopant species and, second, by the degree of diffusion of dopants from the monolayers of dopant species into the wafers.
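As a point of reference only (standard limited-source diffusion theory, not recited in the specification), the drive-in of a fixed monolayer dose Q per unit area yields an approximately Gaussian profile, which makes the two levers of depth control named above explicit: the ALD step fixes Q, while the heater temperature and drive-in time fix the diffusivity D and duration t:

    C(x,t) = \frac{Q}{\sqrt{\pi D t}} \exp\!\left(-\frac{x^{2}}{4 D t}\right), \qquad x_{j} \approx 2\sqrt{D t}

so each additional deposit-and-diffuse cycle raises the near-surface concentration, while the characteristic junction depth x_j grows only with the square root of the accumulated drive-in time.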
Alternatively, a plasma of a non-reactive gas may be used to complete the diffusion of the dopant atoms into the doped region 210b of the wafer 55. In this embodiment, a supply inlet 160b (FIG. 2) for the plasma of the non-reactive gas, which is further connected to a non-reactive gas supply source 162b, is mounted on the upper wall of the reactor chamber 50b. An exhaust outlet 180b, connected to an exhaust system 182b, is further situated on the wall opposite the non-reactive gas supply inlet 160b. Next, the non-reactive gas By is supplied into the reactor chamber 50b through the non-reactive gas inlet 160b, the non-reactive gas By flowing at a right angle onto the deposited first monolayer 210a of the first dopant species Ax. In this way, particles of the non-reactive gas By "knock" the dopant atoms from the first monolayer 210a of the first dopant species Ax into the wafer 55 to form the doped region 210b of the wafer 55.

Following the formation of the doped region 210b of the wafer 55, the process continues with the transfer of the wafer 55 from the reactor chamber 50b, through the gas curtain 300, and into the reactor chamber 50a to continue the doping process. This process is repeated cycle after cycle, with the wafer 55 traveling back and forth between the reactor chamber 50a and the reactor chamber 50b, to acquire the desired doping profile of the region 210b. Once the desired doping profile of the wafer 55 has been achieved, an anneal step in the atomic layer doping process is required to restore any crystal damage and to electrically activate the dopant atoms. As such, annealing can be achieved by a thermal heating step. However, the anneal temperature should preferably be below the diffusion temperature to prevent lateral diffusion of the dopants. Referring to FIG. 2, the anneal step could take place in the reactor chamber 50b, for example, by controlling the heat from the heater assembly 150b. Alternatively, the anneal step may take place in an adjacent reactor chamber, for example reactor chamber 52a, depending on the processing requirements and the desired number of wafers to be processed.

By employing chemically separate reactor chambers for the deposition process of the species Ax dopant and possibly others, the present invention has the major advantage of allowing different processing conditions, for example, deposition or diffusion temperatures, in different reactor chambers. This is important since chemisorption and reactivity in the ALD process impose specific temperature requirements, in accordance with the nature of the precursor gas. Accordingly, the apparatus of the present invention allows, for example, reactor chamber 50a to be set to a different temperature than that of the reactor chamber 50b. Further, each reactor chamber may be optimized for improved chemisorption, reactivity or dopant conditions.

The configuration of the atomic layer doping apparatus illustrated above also improves the overall yield and productivity of the doping process, since each chamber could run a separate substrate, and therefore a plurality of substrates could be run simultaneously at a given time. In addition, since each reactor chamber accommodates only one dopant species, cross-contamination from one wafer to another is greatly reduced. Moreover, the production time can be decreased since the configuration of the apparatus of the present invention saves a great amount of purging and reactor clearing time.

Of course, although the doping process was explained above only with reference to the first wafer 55 in the first reactor chamber 50a and the second reactor chamber 50b, it is to be understood that the same processing steps are carried out simultaneously on the second and third wafers 55', 55'' in their respective reactor chambers. Further, the second and third wafers 55', 55'' are moved accordingly, in the directions of arrows A2, B2 (corresponding to reactor chambers 52a, 52b) and arrows A3, B3 (corresponding to reactor chambers 54a, 54b).
Moreover, while the doping process was explained above with reference to only one wafer 55 for the first and second reactor chambers 50a, 50b, it must be understood that the first and second reactor chambers 50a, 50b could also process another wafer traveling in a direction opposite to that of the wafer 55. For example, if one wafer 55 travels in the direction of arrow B1 (FIG. 2), the other wafer could travel in the direction opposite to arrow B1, that is, from the second reactor chamber 50b to the first reactor chamber 50a. Assuming a specific doping concentration is desired on the wafer 55, after the diffusion of the dopant atoms from the first monolayer 210a in the reactor chamber 50b, the wafer 55 is then moved back by the loading assembly 60 to the reactor chamber 50a, where a second monolayer of the first dopant species Ax is next deposited over the first monolayer of the first dopant species Ax. The wafer 55 is further moved to the reactor chamber 50b for the subsequent diffusion of the dopant atoms from the second monolayer of the first dopant species Ax. The cycle continues until a desired doping concentration on the surface of the wafer 55 is achieved, and, thus, the wafer 55 travels back and forth between reactor chambers 50a and 50b. As explained above, the same cycle applies to the other two wafers 55', 55'' that are processed simultaneously in their respective reactor chambers.

Although the invention is described with reference to reactor chambers, any other type of doping region may be employed, as long as the wafer 55 is positioned under a flow of dopant source. The gas curtain 300 provides chemical isolation to all adjacent deposition regions. Thus, as illustrated in FIGS. 2-3, the gas curtain 300 is provided between the two adjacent reactor chambers 50a and 50b so that an inert gas 360, such as nitrogen, argon, or helium, for example, flows through an inlet 260 connected to an inert gas supply source 362 to form the gas curtain 300, which keeps the first dopant gas Ax and the non-reactive gas By from flowing into adjacent reaction chambers. An exhaust outlet 382 (FIG. 2) is further situated on the wall opposite the inert gas inlet 260. It must also be noted that the pressure of the inert gas 360 must be higher than that of the first dopant gas Ax and that of the non-reactive gas By, so that the two process gases Ax, By are constrained by the gas curtain 300 to remain within their respective reaction chambers.

FIG. 3 illustrates a cross-sectional view of the apparatus 100 of FIG. 2, with the same adjacent reactor chambers 50a and 50b, but in which the inert gas 360 shares the exhaust outlets 180a and 180b with the two process gases Ax and By, respectively. Thus, the atomic layer doping apparatus 100 may be designed so that the inert gas 360 of the gas curtain 300 could be exhausted through either one or both of the two exhaust outlets 180a and 180b, instead of being exhausted through its own exhaust outlet 382, as illustrated in FIG. 2.

FIG. 4 shows another alternate embodiment of the apparatus, in which the gas curtain 300 separating adjacent chambers in FIGS. 2-3 is replaced by a physical boundary, such as a wall 170 having a closeable opening 172. A door 174 (FIG. 4) can be used to open and close the opening 172 between the adjacent paired chambers 50a, 50b.
This way, the wafer 55 can be passed between the adjacent chambers 50a, 50b through the opening 172 by the robot 60, with the door 174 closing the opening 172 during atomic layer doping processing.

Although the present invention has been described with reference to only three semiconductor substrates processed at relatively the same time in respective pairs of reaction chambers, it must be understood that the present invention contemplates the processing of any "n" number of wafers in their corresponding "m" number of reactor chambers, where n and m are integers. Thus, in the example shown in FIG. 1, n=3 and m=6, providing an atomic layer doping apparatus with at least 6 reaction chambers that could simultaneously process 3 wafers for a repeating two-step atomic layer doping using Ax as a dopant source and By as a non-reactive gas for diffusion. It is also possible to have n=2 and m=6, where two wafers are sequentially transported to and processed in the reaction chambers for sequential doping with two species, for example, Ax and a second dopant species Cz, while employing the non-reactive gas By to facilitate the diffusion of the dopant atoms Ax and Cz. Other combinations are also possible. Thus, although the invention has been described with the wafer 55 traveling back and forth from the reactor chamber 50a to the reactor chamber 50b with reference to FIG. 2, it must be understood that, when more than two reactor chambers are used for doping with more than two monolayer species Ax, Cz, the wafer 55 will be transported by the loading assembly 60 among all the reaction chambers in a sequence required to produce a desired doping profile.

Also, although the present invention has been described with reference to wafers 55, 55' and 55'' being selectively moved by the loading assembly 60 to their respective reactor chambers 50a and 50b (for wafer 55), 52a and 52b (for wafer 55'), and 54a and 54b (for wafer 55''), it must be understood that each of the three above wafers, or more wafers, could be sequentially transported to, and processed in, all the reaction chambers of the apparatus 100. This way, each wafer could be rotated and moved in one direction only. Such a configuration is illustrated in FIG. 5, according to which a processing cycle for atomic layer deposition on a plurality of wafers 55, for example, begins by selectively moving each wafer 55 from the loading assembly 60 to the reactor chamber 50a, in the direction of arrow A1 (FIG. 5), and then further to the reactor chambers 50b, 52a, 52b, 54a, and 54b. One reaction chamber, for example 50a, can serve as the initial chamber and another, for example 54b, as the final chamber. Each wafer 55 is simultaneously processed in a respective chamber and is moved sequentially through the chambers by the loading assembly 60, with the cycle continuing as wafers 55 travel in one direction through all the remaining reactor chambers. Although this embodiment has been described with reference to a respective wafer in each chamber, it must be understood that the present invention contemplates the processing of any "n" number of wafers in a corresponding "m" number of reactor chambers, where n and m are integers and n≤m. Thus, in the example shown in FIG. 5, the ALD apparatus with 6 reaction chambers could simultaneously process up to 6 wafers.
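The one-direction rotation of FIG. 5 can be pictured as a round-robin schedule; the chamber count, the step function, and the process hook in this C sketch are illustrative assumptions only.

    #define M_CHAMBERS 6   /* reaction chambers, e.g., 50a..54b of FIG. 5 */
    #define N_WAFERS   6   /* n wafers with n <= m                        */

    /* Assumed per-chamber process step (deposition or diffusion). */
    extern void process_in_chamber(int wafer, int chamber);

    /* Advance every wafer one chamber per cycle, in one direction only,
     * so wafer w occupies chamber (w + cycle) mod m at each step. */
    void run_rotation(int cycles)
    {
        for (int cycle = 0; cycle < cycles; cycle++)
            for (int w = 0; w < N_WAFERS; w++)
                process_in_chamber(w, (w + cycle) % M_CHAMBERS);
    }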
The above description illustrates preferred embodiments that achieve the features and advantages of the present invention. It is not intended that the present invention be limited to the illustrated embodiments. Modifications and substitutions to specific process conditions and structures can be made without departing from the spirit and scope of the present invention. Accordingly, the invention is not to be considered as being limited by the foregoing description and drawings, but is only limited by the scope of the appended claims. |
Systems and methods for performing non-allocating memory access instructions with a physical address. A system includes a processor, one or more levels of caches, a memory, a translation look-aside buffer (TLB), and a memory access instruction specifying a memory access by the processor and an associated physical address. Execution logic is configured to bypass the TLB for the memory access instruction and perform the memory access with the physical address, while avoiding allocation in one or more intermediate levels of caches where a miss may be encountered. |
CLAIMS
WHAT IS CLAIMED IS: 1. A method for accessing memory comprising: specifying a physical address for the memory access; bypassing virtual-to-physical address translation; and performing the memory access using the physical address. 2. The method of claim 1 wherein the memory access is a load request initiated by a processor, the method further comprising: traversing one or more levels of caches configured between the processor and the memory for data associated with the physical address of the load request; and returning the data directly to the processor from the cache level or memory where the data is first found, without modifying the states of any intermediate cache levels wherein the load request encounters a miss. 3. The method of claim 2 further comprising: avoiding allocation of the data in the intermediate cache levels wherein the load request encounters a miss. 4. The method of claim 1, further comprising: avoiding look-up of page attributes associated with the physical address. 5. The method of claim 1 wherein the memory access is a store request initiated by a processor, the method further comprising: traversing one or more levels of caches configured between the processor and the memory for the physical address of the store request; and writing the data associated with the store request directly from the processor to the cache level or memory where the physical address is first found, without modifying the states of any intermediate cache levels wherein the store request encounters a miss. 6. The method of claim 5, further comprising avoiding allocation of any intermediate cache levels wherein the store request encounters a miss. 7. The method of claim 5, wherein the store request is executed as a write-through operation such that if the physical address is first found in a first cache level, the method further comprises writing the data to any cache level present between the first cache level and the memory. 8. The method of claim 1, wherein the physical address corresponds to registers in a register file. 9. A memory access instruction for accessing memory by a processor, wherein the memory access instruction comprises: a first field corresponding to an address for the memory access; a second field corresponding to an access mode; and a third field comprising operation code configured to direct execution logic to: in a first mode of the access mode, determine the address in the first field to be a physical address; bypass virtual-to-physical address translation; and perform the memory access with the physical address. 10. The memory access instruction of claim 9, wherein the operation code is configured to direct the execution logic to: in a second mode of the access mode, determine the address in the first field to be a virtual address; perform virtual-to-physical address translation from the virtual address to determine a physical address; and perform the memory access with the physical address. 11. A processing system comprising: a processor comprising a register file; a memory; a translation look-aside buffer (TLB) configured to translate virtual-to-physical addresses; and execution logic configured to, in response to a memory access instruction specifying a memory access and an associated physical address: bypass virtual-to-physical address translation for the memory access instruction; and perform the memory access with the physical address. 12.
The processing system of claim 11 wherein the memory access is a load, and the execution logic is configured to: traverse one or more levels of caches configured between the processor and the memory for data associated with the physical address of the load request; and return the data directly to a register corresponding to the physical address in the register file, from the cache level or memory where the data is first found, without modifying the states of any intermediate cache levels wherein the load request encounters a miss. 13. The processing system of claim 12 wherein the execution logic is further configured to avoid allocation of the data in the intermediate cache levels wherein the load request encounters a miss. 14. The processing system of claim 11, wherein the execution logic is further configured to avoid look-up of page attributes associated with the physical address. 15. The processing system of claim 11 wherein the memory access is a store, and the execution logic is configured to: traverse one or more levels of caches configured between the processor and the memory for the physical address of the store request; and write the data associated with the store request directly from the processor to the cache level or memory where the physical address is first found, without modifying the states of any intermediate cache levels wherein the store request encounters a miss. 16. The processing system of claim 15, wherein the execution logic is further configured to avoid allocation of any intermediate cache levels wherein the store request encounters a miss. 17. The processing system of claim 15, wherein the memory access is further specified as a write-through operation such that if the physical address is first found in a first cache level, the execution logic is configured to write the data to any cache level present between the first cache level and the memory. 18. The processing system of claim 11 integrated in a semiconductor die. 19. The processing system of claim 11, integrated into a device selected from the group consisting of a set top box, music player, video player, entertainment unit, navigation device, communications device, personal digital assistant (PDA), fixed location data unit, and a computer. 20. A system for accessing memory comprising: means for specifying a physical address for the memory access; means for bypassing virtual-to-physical address translation; and means for performing the memory access using the physical address. 21. The system of claim 20 wherein the memory access is a load request initiated by a processor, the system further comprising: means for traversing one or more levels of caches configured between the processor and the memory for data associated with the physical address of the load request; and means for returning the data directly to the processor from the cache level or memory where the data is first found, without modifying the states of any intermediate cache levels wherein the load request encounters a miss. 22. 
The system of claim 20 wherein the memory access is a store request initiated by a processor, the system further comprising: means for traversing one or more levels of caches configured between the processor and the memory for the physical address of the store request; and means for writing the data associated with the store request directly from the processor to the cache level or memory where the physical address is first found, without modifying the states of any intermediate cache levels wherein the store request encounters a miss. 23. A non-transitory computer-readable storage medium comprising code, which, when executed by a processing system, causes the processing system to perform operations for accessing memory, the non-transitory computer-readable storage medium comprising: code for specifying a physical address for the memory access; code for bypassing virtual-to-physical address translation; and code for performing the memory access using the physical address. |
NON-ALLOCATING MEMORY ACCESS WITH PHYSICAL ADDRESS
Claim of Priority under 35 U.S.C. §119
[0001] The present Application for Patent claims priority to Provisional Application No. 61/584,964 entitled "Non-Allocating Memory Access with Physical Address" filed January 10, 2012, and assigned to the assignee hereof and hereby expressly incorporated by reference herein.
Field of Disclosure
[0002] Disclosed embodiments are directed to memory access operations using physical addresses. More particularly, exemplary embodiments are directed to memory access instructions designed to bypass virtual-to-physical address translation and avoid allocating one or more intermediate levels of cache.
Background
[0003] Virtual memory, as is well known in the art, can be addressed by virtual addresses. The virtual address space is conventionally divided into blocks of contiguous virtual memory addresses, or "pages." While programs may be written with reference to virtual addresses, a translation to physical address may be necessary for the execution of program instructions by processors. Page tables may be employed to map virtual addresses to corresponding physical addresses. Memory management units (MMUs) are conventionally used to look up page tables which hold virtual-to-physical address mappings, in order to handle the translation. Because contiguous virtual addresses may not conveniently map to contiguous physical addresses, MMUs may need to walk through several page tables (known as "page table walk") for a desired translation. [0004] MMUs may include hardware such as a translation lookaside buffer (TLB). A TLB may cache translations for frequently accessed pages in a tagged hardware lookup table. Thus, if a virtual address hits in a TLB, the corresponding physical address translation may be reused from the TLB, without having to incur the costs associated with a page table walk. [0005] MMUs may also be configured to perform page table walks in software. Software page table walks often suffer from the limitation that the virtual address of a page table entry (PTE) is not known, and thus it is also not known if the PTE is located in one of the associated processor caches or in main memory. Thus, the translation process may be tedious and time consuming. [0006] The translation process may suffer from additional drawbacks associated with a "hypervisor" or virtual machine manager (VMM). The VMM may allow two or more operating systems (known in the art as "guests") to run concurrently on a host processing system. The VMM may present a virtual operating platform and manage the execution of the guest operating systems. However, conventional VMMs do not have visibility into cacheability types, such as "cached" or "uncached," of memory elements (data/instructions) accessed by the guests. Thus, it is possible for a guest to change the cacheability type of memory elements, which may go unnoticed by the VMM. Further, the VMM may not be able to keep track of virtual-to-physical address mappings which may be altered by the guests. While known architectures adopt mechanisms to hold temporary mappings of virtual-to-physical addresses specific to the guests, such mapping mechanisms tend to be very slow. [0007] Additional drawbacks may be associated with debuggers. Debug software or hardware may sometimes use instructions to query the data value present at a particular address in a processing system being debugged.
Returning the queried data value may affect the cache images, depending on cacheability types of the associated address. Moreover, page table walks or TLB accesses may be triggered on account of the debuggers, which may impinge on the resources of the processing system. [0008] Accordingly, there is a need in the art to avoid the aforementioned drawbacks associated with virtual-to-physical address translation in processing systems.
SUMMARY
[0009] Exemplary embodiments of the invention are directed to systems and methods for memory access instructions designed to bypass virtual-to-physical address translation and avoid allocating one or more intermediate levels of caches. [0010] For example, an exemplary embodiment is directed to a method for accessing memory comprising: specifying a physical address for the memory access; bypassing virtual-to-physical address translation; and performing the memory access using the physical address. [0011] Another exemplary embodiment is directed to a memory access instruction for accessing memory by a processor, wherein the memory access instruction comprises: a first field corresponding to an address for the memory access; a second field corresponding to an access mode; and a third field comprising operation code configured to direct execution logic to: in a first mode of the access mode, determine the address in the first field to be a physical address; bypass virtual-to-physical address translation; and perform the memory access with the physical address. The operation code is further configured to direct the execution logic to: in a second mode of the access mode, determine the address in the first field to be a virtual address; perform virtual-to-physical address translation from the virtual address to determine a physical address; and perform the memory access with the physical address. [0012] Another exemplary embodiment is directed to a processing system comprising: a processor comprising a register file; a memory; a translation look-aside buffer (TLB) configured to translate virtual-to-physical addresses; and execution logic configured to, in response to a memory access instruction specifying a memory access and an associated physical address: bypass virtual-to-physical address translation for the memory access instruction; and perform the memory access with the physical address. [0013] Another exemplary embodiment is directed to a system for accessing memory comprising: means for specifying a physical address for the memory access; means for bypassing virtual-to-physical address translation; and means for performing the memory access using the physical address. [0014] Another exemplary embodiment is directed to a non-transitory computer-readable storage medium comprising code, which, when executed by a processing system, causes the processing system to perform operations for accessing memory, the non-transitory computer-readable storage medium comprising: code for specifying a physical address for the memory access; code for bypassing virtual-to-physical address translation; and code for performing the memory access using the physical address.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The accompanying drawings are presented to aid in the description of embodiments of the invention and are provided solely for illustration of the embodiments and not limitation thereof. [0016] FIG. 1 illustrates processing system 100 configured to implement exemplary memory access instructions according to exemplary embodiments. [0017] FIG.
2 illustrates a logical implementation of an exemplary memory access instruction specifying a load. [0018] FIG. 3 illustrates an exemplary operational flow of a method of accessing memory according to exemplary embodiments. [0019] FIG. 4 illustrates a block diagram of a wireless device that includes a multi-core processor configured according to exemplary embodiments.
DETAILED DESCRIPTION
[0020] Aspects of the invention are disclosed in the following description and related drawings directed to specific embodiments of the invention. Alternate embodiments may be devised without departing from the scope of the invention. Additionally, well-known elements of the invention will not be described in detail or will be omitted so as not to obscure the relevant details of the invention. [0021] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. Likewise, the term "embodiments of the invention" does not require that all embodiments of the invention include the discussed feature, advantage or mode of operation. [0022] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of embodiments of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises", "comprising", "includes" and/or "including", when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. [0023] Further, many embodiments are described in terms of sequences of actions to be performed by, for example, elements of a computing device. It will be recognized that various actions described herein can be performed by specific circuits (e.g., application specific integrated circuits (ASICs)), by program instructions being executed by one or more processors, or by a combination of both. Additionally, these sequences of actions described herein can be considered to be embodied entirely within any form of computer readable storage medium having stored therein a corresponding set of computer instructions that upon execution would cause an associated processor to perform the functionality described herein. Thus, the various aspects of the invention may be embodied in a number of different forms, all of which have been contemplated to be within the scope of the claimed subject matter. In addition, for each of the embodiments described herein, the corresponding form of any such embodiments may be described herein as, for example, "logic configured to" perform the described action. [0024] Exemplary embodiments relate to processing systems comprising a virtually addressed memory space. Embodiments may comprise instructions and methods which specify a physical address instead of a virtual address. The exemplary memory access instruction may be a load or a store. As will be described in detail, the exemplary memory access instructions may simplify software page table walks, improve VMM functions, and make debugging easier. [0025] With reference now to FIG. 1, an exemplary processing system 100 is illustrated.
Processing system 100 may comprise processor 102, which may be a CPU or a processor core. Processor 102 may comprise one or more execution pipelines (not shown) which may support one or more threads, one or more register files (collectively depicted as register file 104), and other components as are well known in the art. Processor 102 may be coupled to local (or L1) caches such as I-cache 108 and D-cache 110, as well as one or more higher levels of caches, such as an L2 cache, etc. (not explicitly shown). The caches may be ultimately in communication with main memory such as memory 112. Processor 102 may interact with MMU 106 to obtain translations of virtual-to-physical addresses in order to perform memory access operations (loads/stores) on the caches or memory 112. MMU 106 may include a TLB (not shown) and additional hardware/software to perform page table walks. A virtual machine manager, VMM 114, is shown to be in communication with processor 102. VMM 114 may support one or more guests 116 to operate on processing system 100. The depicted configuration of processing system 100 is for illustrative purposes only, and skilled persons will recognize suitable modifications and additional components and connections to processing system 100 without departing from the scope of disclosed embodiments. [0026] With continuing reference to FIG. 1, an exemplary memory access instruction 120 will now be described. Instruction 120 is illustrated in FIG. 1 by means of dashed lines representing communication paths which may be formed in executing the instruction. Skilled persons will recognize that implementation of instruction 120 may be suitably modified to fit particular configurations of processing system 100. Further, reference is made herein to "execution logic," which is not explicitly illustrated, but will be understood to generally comprise appropriate logic blocks and hardware modules which will be utilized to perform the various operations involved in the execution of instruction 120 in processing system 100 according to exemplary embodiments. Skilled persons will recognize suitable implementations for such execution logic. [0027] In one exemplary embodiment, instruction 120 is a load instruction, wherein the load instruction may directly specify the physical address for the load, instead of the virtual address as known in conventional art. By specifying the physical address for the load, instruction 120 avoids the need for a virtual-to-physical address translation, and thus, execution of instruction 120 may avoid accessing MMU 106 (as shown in FIG. 1). Thus, execution of instruction 120 may proceed by directly querying caches, such as I-cache 108 and D-cache 110, using the physical address for the load. [0028] In one scenario, the physical address for the load may hit in one of the caches. For example, execution of instruction 120 may first query local caches, and if there is a miss, execution may proceed to a next level cache, and so on, until there is a hit. Regardless of which cache level generates a hit, the data value corresponding to the physical address for the load is retrieved from the hitting cache, and may be directly delivered to register file 104. [0029] In the scenario wherein the physical address for the load does not hit in any of the caches, the corresponding data value may be fetched from main memory 112. However, this will be treated as an uncached load or a non-allocating load. In other words, the caches will not be updated with the data value following a miss.
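As an illustration of the non-allocating load behavior described in paragraphs [0027]-[0029], a minimal C sketch follows. It is not part of the embodiments; the names (cache_lookup, mem_read, NUM_CACHE_LEVELS) and the two-level hierarchy are assumptions made for illustration only.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_CACHE_LEVELS 2   /* hypothetical: an L1 D-cache and one L2 */

/* Stubs standing in for the cache hierarchy and main memory so the
 * sketch is self-contained; real logic would be wired to hardware. */
static bool cache_lookup(int level, uint64_t pa, uint64_t *data)
{
    (void)level; (void)pa; (void)data;
    return false;                      /* pretend every level misses */
}
static uint64_t mem_read(uint64_t pa) { (void)pa; return 0; }

/* Non-allocating load: query each cache level with the physical
 * address; on a hit, deliver the cached value.  On a miss at every
 * level, fetch from main memory WITHOUT allocating a cache line, so
 * the cache images stay unperturbed (e.g., for a debugger's request). */
uint64_t phys_load(uint64_t phys_addr)
{
    uint64_t data;
    for (int level = 0; level < NUM_CACHE_LEVELS; level++) {
        if (cache_lookup(level, phys_addr, &data))
            return data;               /* hit: goes straight to the register file */
    }
    return mem_read(phys_addr);        /* miss everywhere: uncached, no fill */
}
```

The key property modeled here is that a miss at every level falls through to memory without a cache fill, which is what leaves the cache images unperturbed.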
In one example of a debugger (not shown) performing debug operations on processing system 100, instruction 120 may be generated following a load request for the physical address by the debugger. The above exemplary execution of instruction 120 can be seen to leave the cache images unperturbed by the debugger's request because of the non-allocating nature of instruction 120. In comparison to conventional implementations, processing system 100 may thus remain free from disruption of normal operations on account of a debugger affecting cache images. [0030] In another exemplary embodiment, instruction 120 may be a store instruction, wherein the store instruction may directly specify the physical address for the store, instead of a virtual address as known in conventional art. Similar to operation of the load instruction as described above, the store instruction may query local caches first, and if there is a hit, a store may be performed. At least two varieties of store operations may be specified by the operation code of instruction 120 - write-through and write-back. In a write-through store, caches such as I-cache 108 and D-cache 110 may be queried with the physical address and in the case of a hit, the next higher level of cache hierarchy, and ultimately, main memory, memory 112, may also be queried and updated. On the other hand, for a write-back store, in the case of a hit the store operation ends without proceeding to the higher levels of cache hierarchy. [0031] For both write-back and write-through stores, if a miss is encountered, the store may proceed to querying a next level cache with the physical address, and thereafter, main memory 112 if necessary. However, a miss will not entail cache allocation in exemplary embodiments, similar to loads. A dedicated buffer or data array may be included in some embodiments for such non-allocating store operations, as will be further described with reference to FIG. 2. [0032] With reference now to FIG. 2, an exemplary hardware implementation of instruction 120 is illustrated. An expanded view of a cache, such as D-cache 110, is shown to comprise component arrays: data array 210 which stores data values; tag array 202 which comprises selected bits of physical addresses of corresponding data stored in data array 210; state array 204 which stores associated state information for the corresponding set; and replacement pointer array 206 which stores associated way information for any allocating load or store operation which may require the way to be replaced for the corresponding allocation. Although not accessed for the execution of instruction 120, DTLB 214 may hold virtual-to-physical address translations for frequently accessed addresses. DTLB 214 may be included for example in MMU 106. [0033] Firstly, with regard to loads, when instruction 120 for an exemplary load is received for processing by processor 102, the physical address field specified in instruction 120 for the load is retrieved. The physical address field is parsed for the fields: PA [Tag Bits] 208a corresponding to the bits associated with the tag for the load address; PA [Set Bits] 208b corresponding to the set associated with the load address; and PA [Data Array Bits] 208c corresponding to the location in data array 210 for a load address which hits in D-cache 110. In one implementation, PA [Data Array Bits] 208c may be formed by a combination of PA [Set Bits] 208b and a line offset value to specify the location of a load address, as illustrated in the sketch below.
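The field parsing of paragraph [0033] can be sketched in C as follows; the line size and set count are hypothetical (they are not specified in the embodiments), and the helper is illustrative only.

```c
#include <stdint.h>

/* Hypothetical geometry (not from the embodiments): 64-byte cachelines
 * and 256 sets. */
#define OFFSET_BITS 6
#define SET_BITS    8

struct pa_fields {
    uint64_t tag;     /* PA [Tag Bits]  208a: compared against tag array 202 */
    uint64_t set;     /* PA [Set Bits]  208b: indexes tag/state arrays       */
    uint64_t offset;  /* line offset: selects bytes within a cacheline block */
};

/* Split a physical address into the fields used to query the cache;
 * PA [Data Array Bits] 208c corresponds to {set, offset} combined. */
static struct pa_fields parse_pa(uint64_t pa)
{
    struct pa_fields f;
    f.offset = pa & ((1ull << OFFSET_BITS) - 1);
    f.set    = (pa >> OFFSET_BITS) & ((1ull << SET_BITS) - 1);
    f.tag    = pa >> (OFFSET_BITS + SET_BITS);
    return f;
}
```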
For example, data array 210 may comprise cacheline blocks. The line offset value may be used to specify desired bytes of data located in the cacheline blocks based on the physical address for the load and size of the load, such as byte, halfword, word, doubleword, etc. [0034] Execution of instruction 120 may also comprise asserting the command Select PA Directly, which causes selector 216 to directly choose PA [Tag Bits] 208a over bits which may be derived from DTLB 214, and may also suppress a virtual-to-physical address translation by the DTLB 214. Tag array 202 and state array 204 may be accessed using PA [Set Bits] 208b, and comparators 218 may then compare whether the tag bits, PA [Tag Bits] 208a, are present in tag array 202, and whether their state information is appropriate (e.g., "valid"). If comparators 218 generate a hit on hit/miss line 220, confirming that the load address is present and valid, then PA [Data Array Bits] 208c and associated way information derived from replacement pointer array 206 may jointly be used to access data array 210 to retrieve the desired data value for the exemplary load instruction specified by instruction 120. The desired data value may then be read out on read data line 224 and may be transferred directly to processor 102, for example, into register file 104. [0035] In the above implementation of querying and retrieving data from D-cache 110 in accordance with exemplary embodiments of instruction 120 specifying a load, cache images, such as that of D-cache 110, may remain unchanged. In other words, regardless of whether there was a hit or a miss, tag array 202, state array 204, replacement pointer array 206, and data array 210 are not altered. [0036] Turning now to stores, the operation is similar for both write-through and write-back stores. For example, if instruction 120 specifies a store of data to a physical address, then in one implementation, the local cache, D-cache 110, may be queried for both write-through and write-back stores, and if the physical address is found, then the data may be written to a dedicated array, write data array 222, which may be included in data array 210 as shown in FIG. 2. In the case of write-through stores, the operation may proceed to querying and updating a next higher level cache (not shown) as described above, while in the case of a write-back store the operation may end with writing write data array 222. [0037] For both write-through and write-back stores, if the physical address is not found, i.e., there is a miss, then any updates to the arrays of D-cache 110 may be skipped, and the data may be written directly to the physical address location in memory 112. In other words, the store may be treated as a non-allocating store. Such exemplary store operations specified by instruction 120 may be used in debug operations, for example, by a debugger. [0038] Similar to the load/store instructions which may be specified by instruction 120 for data which may pertain to D-cache 110, exemplary embodiments may also include load/store instructions for instruction values pertaining to I-cache 108. For example, a physical address fetch instruction may be specified, which may be executed in like manner as instruction 120 described above. The physical address fetch instructions may be used to locate an instruction value corresponding to a physical address in a non-allocating manner. Thus, I-cache 108 may first be queried.
If a hit is encountered, the desired fetch operation may proceed by fetching the instruction value from the physical address specified in the instruction. If a miss is encountered, allocation of I-cache 108 may be skipped and execution may proceed to query any next level cache and ultimately main memory 112 if required. [0039] While the above description has been generally directed to bypassing MMU 106 / DTLB 214 for every instance of instruction 120, a variation of instruction 120 may be additionally or alternatively included in some embodiments. Without loss of generality, a variation of instruction 120 may be designated as instruction 120' (not shown), wherein instruction 120' may comprise specified mode bits to control bypass of MMUs or TLBs. For example, in a first mode defined by mode bits of instruction 120', the address value specified in instruction 120' may be treated as a physical address and MMU 106 may be bypassed. On the other hand, in a second mode defined by mode bits of instruction 120', the address value may be treated as a virtual address and MMU 106 may be accessed for a virtual-to-physical address translation. [0040] Accordingly, in some embodiments, instruction 120' may comprise the following fields. A first field of instruction 120' may correspond to an address for the memory access which may be determined to be a virtual address or a physical address based on the above-described modes. A second field of instruction 120' may correspond to an access mode to select between the above first mode and the second mode; and a third field of instruction 120' may comprise an operation code (or OpCode as known in the art) of instruction 120'. If the access mode is set to the first mode, the execution logic may determine the address in the first field to be a physical address, bypass virtual-to-physical address translation in MMU 106 / DTLB 214, and perform the memory access with the physical address. On the other hand, if the access mode is set to the second mode, the execution logic may determine the address in the first field to be a virtual address, perform any required virtual-to-physical address translation from the virtual address to determine a physical address by invoking MMU 106 / DTLB 214, and then proceed to perform the memory access with the physical address. [0041] It will be appreciated that embodiments include various methods for performing the processes, functions and/or algorithms disclosed herein. For example, as illustrated in FIG. 3, an embodiment can include a method for accessing memory (e.g., D-cache 110) comprising: specifying a physical address (e.g., instruction 120 specifying a physical address comprising bits 208a, 208b, and 208c) for the memory access - Block 302; bypassing address translation (e.g., bypassing DTLB 214) - Block 304; and performing the memory access using the physical address (e.g., selector 216 configured to select physical address bits 208a, 208b, and 208c instead of virtual-to-physical address translation from DTLB 214) - Block 306. [0042] Those of skill in the art will appreciate that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
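A minimal C model of the three-field layout of instruction 120' described in paragraph [0040] might look as follows. The field widths, the PHYS/VIRT encodings, and the helper names (dtlb_translate, do_access) are assumptions for illustration, not a definitive encoding.

```c
#include <stdint.h>

#define MODE_PHYS 0u   /* first mode: address field holds a physical address */
#define MODE_VIRT 1u   /* second mode: address field holds a virtual address */

/* Hypothetical 32-bit encoding of instruction 120'; the field widths
 * are assumptions for illustration only. */
struct insn_120p {
    uint32_t addr_field  : 20;  /* first field: address for the memory access */
    uint32_t access_mode : 1;   /* second field: MODE_PHYS or MODE_VIRT       */
    uint32_t opcode      : 11;  /* third field: operation code (OpCode)       */
};

/* Stubs for the MMU/DTLB and the access path, so the sketch compiles. */
static uint64_t dtlb_translate(uint64_t va) { return va; }  /* identity stub */
static uint64_t do_access(uint64_t pa)      { (void)pa; return 0; }

uint64_t execute(struct insn_120p insn)
{
    uint64_t pa;
    if (insn.access_mode == MODE_PHYS)
        pa = insn.addr_field;                 /* bypass MMU 106 / DTLB 214 */
    else
        pa = dtlb_translate(insn.addr_field); /* translate, then access    */
    return do_access(pa);
}
```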
[0043] Further, those of skill in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. [0044] The methods, sequences and/or algorithms described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. [0045] Referring to FIG. 4, a block diagram of a particular illustrative embodiment of a wireless device that includes a multi-core processor configured according to exemplary embodiments is depicted and generally designated 400. The device 400 includes a digital signal processor (DSP) 464. Similar to processing system 100, DSP 464 may include MMU 106, processor 102 comprising register file 104, I-cache 108, and D-cache 110 of FIG. 1, which may be coupled to memory 432 as shown. The device 400 may be configured to execute instructions 120 and 120' without performing a virtual-to-physical address translation as described in previous embodiments. FIG. 4 also shows display controller 426 that is coupled to DSP 464 and to display 428. Coder/decoder (CODEC) 434 (e.g., an audio and/or voice CODEC) can be coupled to DSP 464. Other components, such as wireless controller 440 (which may include a modem), are also illustrated. Speaker 436 and microphone 438 can be coupled to CODEC 434. FIG. 4 also indicates that wireless controller 440 can be coupled to wireless antenna 442. In a particular embodiment, DSP 464, display controller 426, memory 432, CODEC 434, and wireless controller 440 are included in a system-in-package or system-on-chip device 422. [0046] In a particular embodiment, input device 430 and power supply 444 are coupled to the system-on-chip device 422. Moreover, in a particular embodiment, as illustrated in FIG. 4, display 428, input device 430, speaker 436, microphone 438, wireless antenna 442, and power supply 444 are external to the system-on-chip device 422. However, each of display 428, input device 430, speaker 436, microphone 438, wireless antenna 442, and power supply 444 can be coupled to a component of the system-on-chip device 422, such as an interface or a controller. [0047] It should be noted that although FIG.
4 depicts a wireless communications device, DSP 464 and memory 432 may also be integrated into a set-top box, a music player, a video player, an entertainment unit, a navigation device, a personal digital assistant (PDA), a fixed location data unit, or a computer. A processor (e.g., DSP 464) may also be integrated into such a device. [0048] Accordingly, an embodiment of the invention can include a computer-readable medium embodying a method for accessing memory using a physical address and bypassing an MMU configured for virtual-to-physical address translation. Accordingly, the invention is not limited to illustrated examples and any means for performing the functionality described herein are included in embodiments of the invention. [0049] While the foregoing disclosure shows illustrative embodiments of the invention, it should be noted that various changes and modifications could be made herein without departing from the scope of the invention as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the embodiments of the invention described herein need not be performed in any particular order. Furthermore, although elements of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated. |
A method of forming a small contact hole uses a bright field mask to form a small cylinder in a positive resist layer. A negative resist layer is formed around the small cylinder, and then etched or polished back to leave a top portion of the small cylinder exposed above the negative resist layer. The negative resist layer and the small cylinder (positive resist) are flood exposed to light, and then subjected to a developer. What remains is a small contact hole located where the small cylinder was previously located. |
We claim: 1. A method of forming a contact hole for a semiconductor device, comprising:forming an interlayer dielectric layer on a substrate, and then forming a positive resist on the interlayer dielectric layer; irradiating the positive resist to form an irradiated positive resist layer using a bright field mask, wherein the bright field mask has a pattern corresponding to a contact hole to be formed within the semiconductor device; developing the positive resist, so as to remove the irradiated positive resist layer, thereby leaving only a portion of the positive resist remaining above the interlayer dielectric layer; applying a negative resist to cover the interlayer dielectric layer and the portion of the positive resist; recessing the negative resist so that a top region of the portion of the positive resist extends above the recessed negative resist; exposing the recessed negative resist and the portion of the positive resist to a flood light exposure; and applying a developer to the semiconductor device so as to remove the portion of the positive resist, wherein a contact hole is formed in a location where the portion of the positive resist was previously formed. 2. The method according to claim 1, wherein the negative resist is recessed by plasma etching back the negative resist.3. The method according to claim 1, wherein the negative resist is recessed by wet chemical etching back the negative resist.4. The method according to claim 1, wherein the negative resist is recessed by polishing the negative resist.5. The method according to claim 1, wherein the negative resist is a negative photoresist.6. The method according to claim 5, wherein the positive resist is a positive photoresist.7. The method according to claim 1, wherein in the recessing step, a top surface of the negative resist is recessed to a level of between 2 and 20 angstroms beneath a top surface of the portion of the positive resist.8. A method of forming a contact hole for a semiconductor device, comprising:forming a first layer on a substrate, and then forming a positive resist on the first layer; irradiating the positive resist using a bright field mask, wherein the bright field mask has a dimension equal in size to a dimension of a contact hole to be formed within the semiconductor device; subjecting the irradiated positive resist to a developer, so as to remove the irradiated positive resist, thereby leaving only a portion of the positive resist remaining above the first layer; applying a negative resist to cover the first layer and the portion of the positive resist; recessing the negative resist so that the portion of the positive resist is not covered on its top surface by the negative resist; exposing the recessed negative resist and the portion of the positive resist to light; and applying a developer to the semiconductor device so as to dissolve the portion of the positive resist, wherein a contact hole is formed in a location where the portion of the positive resist was previously formed. 9. The method according to claim 8, wherein the negative resist is recessed by plasma etching back the negative resist.10. The method according to claim 8, wherein the negative resist is recessed by wet chemical etching back the negative resist.11. The method according to claim 8, wherein the negative resist is recessed by polishing the negative resist.12. The method according to claim 8, wherein the negative resist is a negative photoresist.13.
The method according to claim 8, wherein the positive resist is a positive photoresist. |
BACKGROUND OF THE INVENTION1. Field of the InventionThis invention relates generally to contact hole patterning. More particularly, this invention relates to bright field image reversal for contact hole patterning.2. Description of the Related ArtSemiconductor critical dimensions (CD) are becoming increasingly smaller to accommodate faster, smaller and more powerful semiconductor devices.Contact holes are an important requirement for forming semiconductor devices. Typically, contact holes are formed using a dark field mask and a positive photoresist. Positive photoresists are typically three-component materials, having a matrix component, a sensitizer component, and a solvent component, whose properties are changed by a photochemical transformation of the photosensitive component, from that of a dissolution inhibitor to that of a dissolution enhancer. See, for example, R. Wolf, Silicon Processing for the VLSI Era, Volume 1, page 418.For forming very small contact hole features, such as contact holes or vias less than 100 nanometers in dimension, dark field patterning causes some problems, since it provides for poor CD control in these very small size ranges. This is primarily because the resolution of small contact hole features formed using a dark field mask and positive photoresist is difficult to control, owing to resolution limits and high mask error factor (MEF) sensitivity.SUMMARY OF THE INVENTIONAccordingly, it is a general object of the present invention to provide an improved method of forming small contact holes for semiconductor devices.In accordance with a preferred embodiment of the present invention, the above object may be achieved by a method of forming a contact hole for a semiconductor device. The method includes a step of forming an interlayer dielectric layer on a substrate, and then forming a positive resist on the interlayer dielectric layer. The method then includes a step of irradiating the positive resist using a bright field mask, with reversed polarity (dark vs. bright) of a normal contact mask. The bright field mask has a dimension equal in size to a dimension of a contact hole to be formed within the semiconductor device. The method further includes a step of developing the irradiated positive resist, so as to remove the irradiated positive resist, thereby leaving only a portion of the positive resist remaining above the interlayer dielectric layer. The method still further includes a step of applying a negative resist to cover the interlayer dielectric layer and the portion of the positive resist, and then recessing the negative resist so that a top region of the portion of the positive resist extends above the recessed negative resist. The method also includes a step of exposing the recessed negative resist and the portion of the positive resist to a flood light exposure, and then applying a developer to the semiconductor device so as to remove the portion of the positive resist. As a result, a via or contact hole is formed in the location where the portion of the positive resist was previously formed.The above object may also be achieved by a method for forming a contact hole for a semiconductor device. The method includes forming a first layer on a substrate, and then forming a positive resist on the first layer. The method also includes irradiating the positive resist using a bright field mask, wherein the bright field mask has a dimension equal in size to a dimension of a contact hole to be formed within the semiconductor device.
The method further includes subjecting the irradiated positive resist to a developer, so as to remove the irradiated positive resist, thereby leaving only a portion of the positive resist remaining above the first layer. The method still further includes applying a negative resist to cover the first layer and the portion of the positive resist. The method also includes recessing the negative resist so that the portion of the positive resist is not covered on its top surface by the negative resist. The method further includes exposing the recessed negative resist and the portion of the positive resist to light. The method still further includes applying a developer to the semiconductor device so as to dissolve the portion of the positive resist. As a result, a contact hole is formed in a location where the portion of the positive resist was previously formed.BRIEF DESCRIPTION OF THE DRAWINGSThese and other objects and advantages of the present invention will become more fully apparent from the following detailed description when read in conjunction with the accompanying drawings with like reference numerals indicating corresponding parts throughout, wherein:FIG. 1 is a plot of the MEF for the formation of a post using a bright field mask;FIG. 2 is a plot of the MEF for the formation of a contact using a dark field mask; and FIGS. 3(a)-3(d) illustrate cross-sectional views of the lithographic process of the present invention.DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTA major concern in conventional lithographic processes is that the processes are reaching the size limit in contact hole formation, especially for forming contact holes of 100 nanometers (nm) or less in size. As explained earlier, conventional contact hole formation uses a dark field mask and a positive photoresist, which results in difficulty in control due to resolution limits and high MEF.FIG. 1 shows the MEF for a bright field mask, and FIG. 2 shows the MEF for a dark field mask. These two figures show the sensitivity of the wafer dimension to the mask dimension. The measure of MEF is the slope of the line, approximated by the first coefficient in the equation of the line printed on each graph; in other words, MEF is the ratio of the change in wafer (resist) dimension to the change in mask dimension. The dark field mask change leads to a wafer (resist) change 4.6 times greater. The bright field mask change is magnified only 1.7 times at the wafer. Thus, as seen from the plots shown in FIGS. 1 and 2, the amount of contact hole variation on a wafer will be much less using the bright field mask as compared with using a dark field mask. The data shown in FIGS. 1 and 2 are based on tests performed by the inventors.The present invention overcomes the problems in small contact hole formation by forming a small cylinder in a positive photoresist using a bright field (BF) mask, whereby resolution and MEF are improved as compared to conventional lithographic processes. Further processing to reverse the image forms a well-controlled contact hole pattern. Bright field masks are masks where most of the mask is transparent, with only a fraction of the mask opaque.The present invention will now be described in detail with reference to FIGS. 3(a) to 3(d). FIG. 3(a) shows a cross-sectional view of a semiconductor structure that includes a substrate 100, an interlayer dielectric layer 110, and a positive photoresist layer 120.
The interlayer dielectric layer 110 may be any conventional low-k dielectric, i.e., a dielectric with a low dielectric constant, such as SILK, that provides a nonconductive shield between conductive layers.As shown in FIG. 3(a), a bright field (BF) mask 130 is provided above the semiconductor structure, and the positive photoresist layer 120 is illuminated with light from a light source located above the BF mask 130. As a result, most of the positive photoresist layer 120 is exposed to the light, except for a portion 125 directly below the BF mask 130. A typical mask includes transparent regions and opaque regions. The light impinges on the entire mask, and passes through the transparent regions to regions directly below on the substrate, but does not pass through the opaque regions to regions directly below on the substrate.FIG. 3(b) shows the semiconductor structure after the positive photoresist layer 120 has been in contact with a developer solution. The developer solution acts to dissolve the light-exposed portion of the positive photoresist layer 120, leaving only a small cylinder 140 remaining above the interlayer dielectric layer 110. Any type of conventional developer solution for removing positive photoresist may be used in this step. In general, the cylinder 140 need not have a circular cross-section, but may alternatively have other cross-sectional shapes.FIG. 3(c) shows the semiconductor structure after a negative photoresist layer 150 has been formed around the small cylinder 140. A negative photoresist is a resist that acts oppositely to a positive photoresist, in that when a portion of a negative photoresist is exposed to light, it hardens and will not be dissolved when placed in a developer. All nonexposed portions of the negative photoresist will not harden and thus will be dissolved when placed in the developer.Negative photoresists have typically not been utilized in photolithography as VLSI structures have gotten smaller and smaller, since the swelling of negative photoresists during development makes them unsuitable for CDs less than 3 micrometers. See R. Wolf, Silicon Processing for the VLSI Era, Vol. 1, page 420. However, the present invention utilizes negative photoresists in a novel manner to provide a method for forming contact holes having very small dimensions.Referring back to FIG. 3(c), the negative photoresist layer 150 is formed so that its top surface is substantially coplanar with the top surface of the small cylinder 140 (which is a positive photoresist structure). In one configuration, the negative photoresist layer 150 is spin-coated onto the semiconductor structure so that its top surface is just below (e.g., by a few tens of angstroms) the top surface of the small cylinder 140, thereby leaving a stub of the small cylinder 140 extending out from the top surface of the negative photoresist layer 150. In a second configuration, the negative photoresist layer 150 is spin-coated onto the semiconductor structure so that its top surface is formed above and thus fully covers the small cylinder 140 by a small amount (e.g., a few angstroms up to a few tens of angstroms).
With this second configuration, the negative photoresist layer 150 is etched back or polished so that a small portion of the top of the small cylinder 140 (e.g., a few angstroms up to a few tens of angstroms) extends out from the top surface of the negative photoresist layer 150.Other ways of forming the negative photoresist layer 150 onto the semiconductor structure, besides spin-coating, may be contemplated, while remaining within the scope of the invention. For example, spray coating, meniscus coating, or doctor blade coating may be used for forming the negative photoresist layer 150.As shown in FIG. 3(c), a layer that includes a negative photoresist layer 150, and that also includes a small cylinder 140 that is formed out of positive photoresist, is provided as a combined layer above the interlayer dielectric layer 110.FIG. 3(d) shows the semiconductor structure after the structure has been subjected to flood light exposure, i.e., the entire top surface is exposed, and then subjected to a developer. The developer used to arrive at the structure shown in FIG. 3(d) may be the same developer used to remove most of the positive photoresist layer 120 to arrive at the structure shown in FIG. 3(b). Referring again to FIG. 3(d), when the developer contacts the negative photoresist layer 150, the negative photoresist layer is not dissolved, since it was previously exposed to light by way of the flood light exposure. The small cylinder 140, on the other hand, is dissolved when exposed to the developer, since the flood light exposure changes its characteristics to make it soluble in the developer (since the small cylinder 140 is formed out of positive photoresist).As a result, what remains is a cylindrical hole 160, formed in the place where the small cylinder 140 was previously located (i.e., before it was dissolved in the developer). The photoresist layer 150 surrounding the hole 160 can then be used as a resist pattern to etch a small contact hole in the dielectric layer 110, such as a contact hole to a gate of a transistor, to be formed within the semiconductor structure. Since a bright field mask and a positive photoresist are used in the formation of the resist pattern for a small-sized contact hole according to the present invention, excellent control can be achieved in forming a contact hole to a precise, small size (e.g., less than 200 nm). Such control is not readily accomplished by the conventional methods that use a dark field mask to form contact holes.While there has been illustrated and described what is at present considered to be a preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made, and equivalents may be substituted for elements thereof without departing from the true scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the central scope thereof. Therefore, it is intended that this invention not be limited to the particular embodiment disclosed as the best mode contemplated for carrying out the invention, but that the invention will include all embodiments falling within the scope of the appended claims. |
Embodiments of the disclosure are directed to controlling an endpoint device using a central control server. The central control server is configured to communicate with the endpoint device across a communications interface compliant with a remote direct memory access (RDMA) compliant protocol. The central control server includes an RDMA network interface controller and a control process. The control process can execute an endpoint device algorithm to identify read and write commands to be sent across the RDMA protocol-compliant interface to the endpoint device. The RDMA network interface controller can convert messages into RDMA compliant messages that include direct read or write commands and memory location information. The endpoint device can also include a network interface controller that can understand the RDMA message, identify the memory location from the message, and execute the direct read or write access command. |
1.A control server device comprising:a processor, implemented at least in hardware, to perform a control process on behalf of an endpoint device to identify a next action for the endpoint device;a network interface controller implemented at least in hardware to transmit a message to the endpoint device via a communication interface compliant with a Remote Direct Memory Access (RDMA) protocol, the message including a steering tag, a steering tag offset, and a direct memory access command for the endpoint device.2.The control server device according to claim 1, wherein said processor identifies a steering tag value for said direct memory access of said endpoint device based on a control process performed for said endpoint device, and wherein the memory location includes a steering tag offset value.3.The control server device of claim 1 further comprising an integrated switch that connects said network interface controller to said endpoint device.4.The control server device of claim 3, wherein the processor identifies a MAC address of the endpoint device based on performing the control process, and the integrated switch routes the message to the endpoint device based on the MAC address.5.The control server device of claim 1, wherein the network interface controller comprises an RDMA controller for configuring an RDMA message for transmission to the endpoint device, the RDMA message including a direct memory access command and memory location.6.The control server device of claim 1, further comprising a steering tag table, the steering tag table including a steering tag value corresponding to a memory location of the endpoint device, and wherein the processor executes a control process corresponding to the endpoint device to identify a steering tag corresponding to a memory location for direct memory access of the endpoint device.7.A computer program product tangibly embodied on a non-transitory computer readable medium, the computer program product comprising instructions operable, when executed, to:Performing a control process of the endpoint device at the central server;Identifying a memory location for direct memory access of the endpoint device based on a control process of the endpoint device;Constructing a remote direct memory access (RDMA) message including the memory location and direct memory access commands;Sending the RDMA message to the endpoint device via a communication interface compliant with the RDMA protocol.8.The computer program product of claim 7, the instructions being further operable to identify a steering tag value corresponding to the memory location of the endpoint device for the direct memory access command based on the control process.9.A computer program product according to claim 7 or 8, said instructions being further operable to identify a machine address of said endpoint device, and wherein constructing said RDMA message comprises adding said machine address of said endpoint device to the RDMA message.10.The computer program product of claim 7, the instructions further operable to receive a read response from the endpoint device via the communication interface conforming to the RDMA protocol.11.The computer program product of claim 7, wherein transmitting the RDMA message to the endpoint device over a communication interface compliant with the RDMA protocol comprises transmitting the RDMA message to an endpoint control interface associated with the endpoint device.12.An endpoint device that communicates with a central
control server via a communication interface compliant with a Remote Direct Memory Access (RDMA) protocol, the endpoint device comprising:Memory map register;A network interface controller that is implemented at least in hardware for:Receiving an RDMA message from the central control server through the communication interface;Identifying, in the RDMA message, a memory location in the memory map register for direct memory access;Identifying a command for the direct memory access from the RDMA message;Accessing the memory location directly to satisfy the command.13.The endpoint device of claim 12, wherein the RDMA message identifies a memory location in the memory map register, and wherein the network interface controller is configured to directly access said memory location in the memory map register.14.The endpoint device of claim 13, wherein the memory location of the message comprises a steering tag offset value corresponding to a memory location in the memory of the endpoint device.15.The endpoint device of claim 14, wherein the network interface controller comprises a hardwired memory register address and the network interface controller is configured to:Identify the memory register address in the memory based on comparing the memory register address to a steering tag offset value.16.The endpoint device of claim 12, wherein the network interface controller comprises an RDMA network interface controller.17.The endpoint device of claim 12, wherein the endpoint device lacks one or both of a microcontroller or a network processor.18.A computer program product tangibly embodied on a non-transitory computer readable medium, the computer program product comprising instructions operable, when executed, to:Receiving a message from a communication interface compliant with a Remote Direct Memory Access (RDMA) compatible protocol;Identifying a memory location for direct memory access from the message;Identifying a command from the message;Performing the direct memory access based on the command from the message.19.The computer program product of claim 18, wherein the message comprises a steering tag offset value that identifies a memory location of a memory of the endpoint device.20.The computer program product of claim 19, the instructions further operable to compare the steering tag value in the message with a steering tag value at the endpoint device, the steering tag value at the endpoint device corresponding to the memory location of the memory at the endpoint device.21.A system comprising:Central control server, including:a processor, implemented at least in hardware, for performing a control process on behalf of an endpoint device to identify a next action for the endpoint device, anda network interface controller, implemented at least in hardware, for transmitting a message to the endpoint device via a communication interface compliant with a Remote Direct Memory Access (RDMA) protocol, the message including a steering tag, a steering tag offset, and a direct memory access command for the endpoint device;An endpoint device that communicates with the central control server via the communication interface compliant with the Remote Direct Memory Access (RDMA) protocol, the endpoint device comprising:Memory map register;A network interface controller that is implemented at least in hardware for:Receiving an RDMA message from the central control server through the communication interface;Identifying, in the RDMA message, a memory location in the memory map register for direct memory
access;Identifying a command for the direct memory access from the RDMA message;Accessing the memory location directly to satisfy the command.22.The system of claim 21 wherein the endpoint device lacks one or both of a microcontroller or a network processor.23.The system of claim 21 wherein the network interface controller includes an RDMA controller for configuring an RDMA message for transmission to the endpoint device, the RDMA message including a direct memory access command and the memory location.24.The system of claim 21 wherein said network interface controller comprises a hardwired memory register address and said network interface controller is configured to:Identify the memory register address in the memory based on comparing the memory register address to a steering tag offset value.25.The system of claim 21 wherein the network interface controller comprises an RDMA network interface controller. |
Direct memory access for endpoint devicesCross-reference to related applicationsThis application claims the benefit of priority to an earlier-filed application, the disclosure of which is incorporated herein by reference in its entirety.Technical fieldThe present disclosure relates to direct memory access and, more particularly, to direct memory access for endpoint devices.BackgroundCommunication with a remote hardware application can include the use of network packet processing at the remote hardware. The use of additional processing of incoming and outgoing packets may result in increased resource requirements, extended delays, and increased costs.Complex allocations of resources for transmitting and/or receiving packets are used to communicate between the dispatch controller and the endpoint device. Scheduling transactions through pre-allocated transmission time windows can lead to complexity, such as increased latency and overhead, reduced usefulness of communication links, and the need for specialized hardware.DRAWINGSFIG. 1 is a schematic block diagram of a remote direct memory access control system in accordance with an embodiment of the present disclosure.FIG. 2 is a schematic block diagram of an apparatus for controlling an endpoint device, in accordance with an embodiment of the present disclosure.FIG. 3 is a process flow diagram for communicating with an endpoint device over a remote direct memory access compatible protocol, in accordance with an embodiment of the present disclosure.FIG. 4 is a process flow diagram for performing direct memory access based on commands received via a remote direct memory access compatible protocol, in accordance with an embodiment of the present disclosure.DETAILED DESCRIPTIONAutomation systems can include autonomously operating subsystems. The protocols used by some automation systems are designed for serial communication. Other automation systems use TCP/IP connectivity and Ethernet. To maintain serial protocol compatibility, Ethernet media uses time domain multiplexing to emulate traditional serial protocols. Using Ethernet reduces latency, but automation protocols often fail to take full advantage of the capabilities provided by Ethernet.The present disclosure describes a central control server that monitors and controls one or more endpoint devices, such as endpoint devices in a workflow for an automated system. The central control server uses the RDMA protocol to directly read and/or write endpoint device operational parameters, such as via a memory mapped control register of the endpoint device state machine. This allows for low latency real-time control of the managed system. Examples of endpoint devices include automata, robots, machines, process flows, industrial processes, mechanical devices, power systems, and the like.Rather than controlling each endpoint device with an endpoint-local microcontroller or network processor, the present disclosure describes moving control to a central control server. End-to-end workflow analysis and optimization can be achieved by moving control to a central control server. Endpoint devices are no longer limited to data within their direct subsystems.
The endpoint device subsystem can be repurposed as needed because the specific control functions for the application are moved to the central control server, while the endpoint device retains the features used to implement direct read or write access to execute commands received from the central control server. For example, an assembly robot can automatically change extensions to perform different tasks, which can reduce idle time.In addition, endpoint devices no longer require a microcontroller or network processor. Rather, the endpoint controller interface in communication with the endpoint device can include an RDMA Network Interface Controller (RNIC) that can be used to resolve commands received from the central control server over the RDMA interface. Moreover, a network interface controller with reduced complexity can further reduce the cost and latency for receiving and executing commands (e.g., introducing an rNIC with a lowercase letter "r"). This less complex rNIC can further reduce unit cost and points of failure while maintaining the functionality for parsing RDMA messages.An additional advantage of the present disclosure is end-to-end security. Subsystems that operate independently have limited capability to detect or adapt to other systems; the present disclosure improves the security of the overall system by centralizing control of the entire line. The interaction can be determined prior to issuing a command, for example, by confirming the machine address in the RDMA message received from the central control server. In addition, rather than each endpoint device saving its own state information, the central control server can save state information for each endpoint device and thus for the entire system. Changes in endpoint device status can compromise security; by having the central control server monitor status information for each endpoint device and respond to error states or changes to status, the central control server can resolve the issue, shut down the endpoint device, or shut down the entire workflow. The central control server can also alert emergency responders, flag upstream workflow issues, and track valuable metrics. In addition, status information can be updated quickly and often without increasing the communication or processing burden on the central control server.The present disclosure can utilize automatic configuration of endpoints and controllers, using XML files to exchange capabilities. These can include sensor type, number of axes, range of motion, extended limits, accessory type, supported security protocols, power levels, baud rates, and other parameters (e.g., endpoint device parameters, etc.).FIG. 1 is a schematic block diagram of a remote direct memory access (RDMA) control system 100 in accordance with an embodiment of the present disclosure. The RDMA control system 100 includes a central control server 102 that communicates with one or more endpoint control interfaces 122 via an RDMA protocol compatible communication interface 130 (abbreviated as RDMA interface 130). In some embodiments, each endpoint control interface 122 can be part of a process workflow 120 (e.g., an industrial workflow or a manufacturing plant). In some embodiments, each endpoint control interface 122 may be part of autonomous and/or different workflows.
Each endpoint control interface 122 can be connected to the endpoint device 124, integrated into the endpoint device 124, or otherwise in communication with the endpoint device 124. Endpoint device 124 can be an automaton, a robot, a machine, a process flow, an industrial process, a mechanical device, a power system, and the like. Endpoint device 124 can be implemented, at least in part, in hardware. Each endpoint device 124 can be the same or can be different.The central control server 102 can execute the control process 119 that models the process of the endpoint device 124 or can emulate the endpoint device 124. The processor 104 can use the state information 107 and the endpoint device model 116 to perform the control process 119. In some embodiments, the control process can include one or more endpoint device models 116 that model each process, program, or action associated with the endpoint device. The endpoint device model 116 can include a local or internal model of each endpoint device 124. The endpoint device model 116 can include each process or program that the endpoint device 124 will execute, to derive the next state of the endpoint device 124. The endpoint device model 116 may utilize state information 107 received from the endpoint control interface 122 via the RDMA interface 130 and/or state information 107 stored in the memory 106.The processor 104 can be implemented at least in part in hardware and can include software and firmware. Processor 104 may comprise any processor or processing device such as a microprocessor, an embedded processor, a digital signal processor (DSP), a network processor, a handheld processor, an application processor, a coprocessor, a system on a chip (SOC) or other device used to execute code.The control process 119 uses the endpoint device model 116 and status information 107 to identify the next action or next state for the endpoint device 124, essentially running the model at the central control server 102 to emulate the process or program of the endpoint device 124. In order to achieve the next desired state, it may be necessary to send a command to the endpoint device indicating what needs to be changed in order to implement it. The next state information may correspond to a read or write command, which may include a read length and memory location or a write length, value, and memory location. RNIC 108 can convert the command into a message that conforms to the RDMA protocol. The message can include the machine address of the endpoint device, the steering tag, and the like. The steering tag indicates the memory area at the endpoint device for read/write commands. The steering tag can include a steering tag offset value to specify a memory location for the read/write command. The steering tag offset can be associated with a control register in the endpoint device state machine. The message can also contain commands and values.In some embodiments, the central control server 102 includes a virtual machine 110. In some implementations, control process 119 can reside in or be implemented by virtual machine 110. Although only one virtual machine is shown, the central control server 102 can include more virtual machines than shown. Virtual machine 110 may utilize hardware resources, including processor 104 and memory 106.
Hardware resources can be virtualized, which means that a single physical hardware resource can be partitioned into multiple virtual hardware resources to enable system 100 to use a single physical hardware resource in multiple virtual machines 110. Virtualization can be achieved using a virtual machine monitor (VMM) 112. In an embodiment, VMM 112 includes software that applies a virtualization layer in central control server 102, where hardware resources can be virtualized into virtual machine 110. The virtual machine 110 can utilize the status information 107 and the endpoint device model 116 to determine, for example, the next state for the endpoint device 124.The virtual machine 110 can execute the endpoint device model 116 to perform operations associated with the endpoint device 124. The virtual machine 110 can execute the algorithm using state information received from the endpoint control interface 122 through the RDMA interface 130, thereby moving the processing of the algorithm from the endpoint control interface 122 to the central control server 102. The virtual machine 110 can execute commands to change the state of the endpoint control interface 122. The status information is communicated in the RDMA message via the RDMA interface 130, which includes a write command, a memory location indicator (e.g., a steering tag), and other information (e.g., machine address and connection address). In essence, virtual machine 110 can perform processing for endpoint device 124; RNIC 108 and RDMA interface 130 allow low latency communication between the central control server 102 and the endpoint control interface 122, such that virtual machine 110 can read the status of endpoint device 124, process the information, and transmit write information to endpoint control interface 122, all with low latency.Memory 106 may include memory location information 114 with respect to one or more endpoint devices 124 to which central control server 102 is connected. Memory 106 can include memory location information 114 for each of one or more endpoint devices 124. The memory information can include a steering tag value that is mapped to a memory location. For example, memory location information 114 may include a steering tag value that maps to a memory location in endpoint device 124. The memory location at endpoint device 124 can be associated with the functionality of endpoint device 124.The memory 106 can also store a lookup table. After the control process 119 determines the next state for the endpoint device 124, the RNIC 108 can use the lookup table to find out which memory map register (memory map register 206 in FIG. 2) at the endpoint device 124 needs to be accessed to execute the next state, along with the endpoint machine address, the steering tag, and the steering tag offset used to access the register.RDMA Network Interface Controller (RNIC) 108 can be used to encapsulate information from control process 119 or virtual machine 110 into RDMA messages that conform to the RDMA protocol. RDMA facilitates direct memory access to memory on remote systems (e.g., endpoint device 124) in a manner that bypasses the system CPU and operating system of the receiving device. Bypassing the CPU and operating system means that RDMA messaging can be low latency.
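A minimal C sketch of how the control message fields described above (machine address, steering tag, steering tag offset, command, value) and the lookup table stored in memory 106 might fit together is shown below. The struct layout, field widths, and helper names are assumptions for illustration; the disclosure does not define a wire format.

```c
#include <stdint.h>
#include <string.h>

enum cmd { CMD_READ, CMD_WRITE };

/* Hypothetical encoding of the message fields named above. */
struct rdma_ctrl_msg {
    uint8_t  endpoint_mac[6];  /* machine address of the target endpoint  */
    uint32_t stag;             /* steering tag: registered memory region  */
    uint32_t stag_offset;      /* offset of the memory-mapped register    */
    uint8_t  command;          /* CMD_READ or CMD_WRITE                   */
    uint32_t length;           /* read/write length                       */
    uint64_t value;            /* value to write (ignored for reads)      */
};

/* One entry of the lookup table stored in memory 106 (assumed layout):
 * it maps a desired next state to the register that realizes it. */
struct reg_lookup_entry {
    uint32_t next_state;
    uint32_t stag;
    uint32_t stag_offset;
};

/* Build the write message for the register that realizes next_state. */
struct rdma_ctrl_msg build_write(const struct reg_lookup_entry *table,
                                 size_t n, uint32_t next_state,
                                 uint64_t value, const uint8_t mac[6])
{
    struct rdma_ctrl_msg msg;
    memset(&msg, 0, sizeof msg);
    msg.command = CMD_WRITE;
    msg.length  = sizeof value;
    msg.value   = value;
    memcpy(msg.endpoint_mac, mac, 6);
    for (size_t i = 0; i < n; i++) {
        if (table[i].next_state == next_state) {
            msg.stag        = table[i].stag;
            msg.stag_offset = table[i].stag_offset;
            break;
        }
    }
    return msg;
}
```

In this sketch the RNIC would then wrap the struct in an RDMA-compliant write targeting the registered memory window named by the steering tag.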
RDMA supports zero-copy networking by enabling the RNIC to transfer data directly to or from application memory (i.e., memory space allocated to the application in system memory), which is maintained separately from the kernel memory used by the operating system, eliminating the need to copy data between application memory and the data buffers in kernel memory.

The central control server 102 can use a mechanism called memory registration to allocate memory. Memory registration facilitates access by the RNIC 108 to the memory area. A bound memory window allows the RNIC 108 to access the memory represented by the memory window. Memory registration provides a mechanism that allows the RNIC 108 to access the memory-mapped registers at the endpoint device 124 using a steering tag (STag) and tag offset. Memory registration provides the RNIC 108 with a mapping between the STag and the memory location at the endpoint device 124. Memory registration also provides the RNIC 108 with a description of the access control associated with memory location 114. A collection of memory locations that have been registered is referred to as a memory region. The resources associated with the memory region, and the memory region itself, may be registered with the RNIC 108 before the RNIC 108 can use the memory region. There is a local STag representing the registered memory on the local system, and there is a remote STag for memory that the system on the other side of the connection has registered. Remote storage is abstract in that the local side does not know its exact location.

As previously mentioned, the message sent by RNIC 108 to endpoint control interface 122 conforms to the RDMA protocol. The message includes memory information, such as a steering tag (STag), as well as the machine address of the target endpoint device and data representing the read or write operation. In some embodiments, the message may also include a connection address, so the endpoint device can verify that the source of the message is a known connection rather than an intruder.

The central control server 102 can also include a system manager 117, implemented at least in hardware, to supervise each of the endpoint devices 124. System manager 117 can monitor status information for each of the endpoint devices 124. Based on the status information, the system manager 117 can identify an error status for each endpoint device. The system manager 117 can shut down an endpoint device 124 if an error condition is detected. The system manager 117 can also shut down the entire workflow 120 if warranted (e.g., upon identifying an error condition of one or more endpoint devices 124).

System 100 also includes a switch 118. Switch 118 can be a switch integrated in the central control server. The integrated switch can include a multi-host Ethernet controller chip with integrated Ethernet switching resources. Examples of integrated switches include RED ROCK CANYON™. Since the traffic is mainly top-down, congestion is minimal and little flow control is needed. Standalone switches can also be used in some implementations.
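Returning to the memory-registration mechanism described above: in the standard verbs API, the registration call is what produces the local and remote keys that play the role of the STags. A sketch follows; pd and the register block are assumed to come from setup code, and this is an illustration rather than the disclosure's required implementation:

#include <stddef.h>
#include <infiniband/verbs.h>

/* Register a memory region so that the RNIC may access it. The rkey in
 * the returned ibv_mr acts as the remote STag that the peer places in
 * its RDMA messages; the lkey is the local STag. Illustrative only. */
static struct ibv_mr *register_region(struct ibv_pd *pd,
                                      void *regs, size_t len)
{
    return ibv_reg_mr(pd, regs, len,
                      IBV_ACCESS_LOCAL_WRITE |
                      IBV_ACCESS_REMOTE_READ |
                      IBV_ACCESS_REMOTE_WRITE);
}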
FIG. 2 is a schematic block diagram 200 of an endpoint control interface 202 for controlling an endpoint device 212, in accordance with an embodiment of the present disclosure. The endpoint control interface 202 can be implemented at least in hardware. Endpoint control interface 202 can be integrated into endpoint device 212 or otherwise communicate with endpoint device 212. Endpoint device 212 can be an automaton, robot, machine, or other component of an automated or remote control/monitoring process architecture. Endpoint control interface 202 can include logic that includes network interface controller 204 and memory-mapped registers 206. Memory-mapped register 206 includes register addresses corresponding to pins on endpoint device 212 and allows direct access to endpoint device 212 through memory-mapped register 206.

The network interface controller (NIC) in FIG. 2 can be a full RDMA NIC (RNIC) 204B or can be a modified version of the RNIC (referred to as rNIC 204A, marked with a lowercase "r" to indicate a simplified or limited implementation of the RNIC or RDMA protocol).

The rNIC 204A implements a subset of the full RDMA protocol. For example, the rNIC 204A can support a single connection or a few connections to a central control server instead of supporting thousands or millions of connections. Rather than building a large table or lookup table mechanism, the rNIC 204A can perform a direct comparison of the received addressing and memory location values in parallel (e.g., by hard coding the values within the rNIC 204A). In some examples, rNIC 204A may be customized specifically for endpoint device 212, and machine address values and memory locations may be hard coded into rNIC 204A.

The rNIC 204A can be configured to handle three types of messages: write commands, read commands, and read responses. Additionally, rNIC 204A may forgo retransmission operations, such as TCP/IP retransmission protocols. The central control server 102 can be configured to send an additional read request if a previous read request is not answered within a predetermined amount of time (several microseconds, milliseconds, seconds, minutes, etc.).

The rNIC 204A is configured to receive an RDMA message including a direct access command (such as a read or write) and a memory location identifier. The memory location identifier may be a steering tag value that maps to a memory location in memory-mapped register 206. For rNIC 204A, the RDMA message can include a reduced number of steering tag values as compared to an RDMA message for RNIC 204B. For example, rNIC 204A is configured to communicate with a single peer or up to several peers.

In some embodiments, endpoint control interface 202 can include memory location information 210, which can be a library of information or a hard-coded set of tables or information. In some embodiments, memory location information 210 may include particular memory locations that allow rNIC 204A to convert a steering tag offset value to a memory location in memory-mapped register 206.

The endpoint control interface 202 can also include a machine identifier 214. The endpoint control interface 202 can compare the machine identifier in a message received from the central control server 102 with the machine identifier 214 of the endpoint device to confirm that the message is intended for the endpoint control interface 202.

In some embodiments, RNIC 204B will include an interface to a table of values for connection addresses and memory addresses. Because there is less information in the received RDMA message, the rNIC 204A does not need to include an interface to the value table, and the rNIC 204A can compare the information in the RDMA message with one or two values.
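The hard-coded comparison logic of rNIC 204A would reside in silicon, but its behavior can be modeled in C roughly as follows. The constants, the control_message structure reused from the earlier sketch, and the send_read_response() hook are all illustrative assumptions:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define MY_STAG  0x0001u   /* hard-coded steering tag (illustrative)     */
#define NUM_REGS 16u       /* size of the register window (illustrative) */

static const uint8_t my_mac[6] = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 };
static volatile uint64_t memory_map_regs[NUM_REGS]; /* models registers 206 */

static void send_read_response(uint64_t value); /* assumed transmit hook */

/* Model of rNIC 204A message handling: direct comparisons against one
 * or two hard-coded values instead of table lookups. */
static bool handle_message(const struct control_message *m)
{
    if (memcmp(m->machine_addr, my_mac, sizeof(my_mac)) != 0)
        return false;  /* not addressed to this endpoint: drop */
    if (m->steering_tag != MY_STAG || m->stag_offset >= NUM_REGS)
        return false;  /* unknown region, or offset outside the window */

    if (m->command == CMD_WRITE)
        memory_map_regs[m->stag_offset] = m->value; /* drives device 212 */
    else if (m->command == CMD_READ)
        send_read_response(memory_map_regs[m->stag_offset]);
    return true;
}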
In addition, rNIC 204A can transmit read responses back to the RNIC in the central control server. The rNIC 204A need not retransmit messages (including lost messages); instead, the central control server can resend a read request after the predetermined time has expired. In addition, rNIC 204A typically does not initiate a connection with central control server 102; rather, rNIC 204A accepts messages from the central control server and can respond to read requests using an existing connection established by central control server 102.

Since the endpoint device functionality is emulated on the central control server, short RDMA messages (e.g., at most one maximum transmission unit) using only a few RDMA STags can be used to read or write the mapped register contents. This allows the endpoint control interface 202 to implement only a portion of the RDMA and TCP functions while maintaining low-latency read and write operations. The rNIC 204A can be connected to a fully implemented RNIC on the central control server to reduce the hardware requirements of the rNIC 204A.

Memory location information 210 can point to a memory location in memory-mapped register 206. The memory location may represent an access point to endpoint device 212 for direct access to command functions. A read from the memory location may indicate the current state of the endpoint control interface 202 (or, more specifically, the state of the functionality of the endpoint control interface 202 from the state machine 208). A write to the memory location can cause the endpoint device to change its state or perform a function in state machine 208.

The rNIC 204A can receive messages from a communication interface that conforms to the RDMA protocol. The message may include a steering tag (STag) representing a window or region of the memory-mapped register 206. The STag is accompanied by an STag offset value that represents the particular portion of the window or region in the memory-mapped register 206 to be accessed. For example, the STag may indicate a memory register window, such as registers 1-10, and the offset may select register 1+x, where x is the offset.

Memory-mapped registers 206 may include silicon logic that interfaces directly with endpoint device 212. The rNIC 204A reads from or writes to the memory-mapped register 206. Endpoint device 212 control is based on the values in each register of memory-mapped registers 206. For example, rNIC 204A can write to a memory-mapped register to cause endpoint device 212 to change its state. Similarly, status information of the endpoint device 212 can be read from a memory-mapped register location.

Endpoint device 212 may also include state machine 208. State machine 208 can include silicon logic. State machine 208 can interface with endpoint device 212. In state machine 208, each value has an address. The rNIC 204A can write to or read from the state machine 208 through the memory-mapped register 206.
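The window-plus-offset addressing described above reduces to simple pointer arithmetic. A sketch follows, with the window bounds as assumed parameters:

#include <stddef.h>
#include <stdint.h>

/* Resolve an STag offset within a register window (e.g., registers
 * 1-10) to a register pointer. window_base and window_len describe the
 * region named by the STag; both are illustrative parameters. */
static volatile uint64_t *resolve_register(volatile uint64_t *window_base,
                                           size_t window_len,
                                           size_t stag_offset)
{
    if (stag_offset >= window_len)
        return NULL;                   /* offset outside the STag window */
    return &window_base[stag_offset];  /* register (base + x) */
}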
FIG. 3 is a process flow diagram 300 for communicating with an endpoint device over a remote direct memory access (RDMA) compatible protocol, in accordance with an embodiment of the present disclosure. The central control server may receive, from a communication interface compliant with the RDMA protocol, an RDMA message containing status information from the endpoint control interface for the endpoint device (302). The central control server can receive RDMA messages via an RDMA network interface controller (RNIC). The central control server can use the status information received from the RDMA communication interface to run a control process or virtual machine that represents the endpoint device (304). The output of the control process or virtual machine may include an identification of the desired state of the endpoint device (306). For example, the result can include a command (e.g., a write command) for changing the state of the endpoint device or a command (e.g., a read command) for obtaining further state information.

The central control server can identify the memory location for the read or write command (308). The memory location may be identified based on a memory information library, which may include one or more steering tag values mapped to memory locations in a memory-mapped register at the endpoint control interface that controls the endpoint device. Additionally, the central control server can identify the machine address identifier and connection identifier for the endpoint device. The central control server can encapsulate the command and memory location information into a message, such as an RDMA message, via the RNIC (310). The RNIC may send the RDMA message to the endpoint device via a communication interface compliant with the RDMA protocol (312). In some embodiments, the central control server may receive a read response from the endpoint device via the RNIC; the read response is indicated by the dashed arrow returning to (302).

FIG. 4 is a process flow diagram 400 for performing direct memory access based on commands received via a remote direct memory access (RDMA) compatible protocol, in accordance with an embodiment of the present disclosure. The RDMA-compatible network interface controller (rNIC) on the endpoint device can receive an RDMA message from the central control server via the RDMA-compatible communication interface (402). The rNIC can identify the machine address from the RDMA message to confirm that the message is for the endpoint device (404). In some implementations, the rNIC includes a filter that filters out packets that do not carry the MAC address the rNIC is configured to receive. The rNIC can identify the command from the RDMA message (406). For example, the command can be a read command or a write command. The rNIC can identify the memory location for the command (408). The rNIC can be hard coded with memory locations mapped to memory-mapped registers. The memory location can be identified by a memory location identifier (e.g., a steering tag value). The rNIC can directly access the memory-mapped register based on the memory location from the message (410). The access can be a write operation, in which the rNIC writes directly to the location in the memory-mapped register, or a read operation, in which the rNIC reads from a location in the memory-mapped register. The rNIC can send a read response to the central control server via the RDMA-compliant communication interface (412).

This disclosure allows for multiple security options:

1. A key embedded in the device VNM is scanned via a QR code at the time of installation to load the other half of the public/private key pair onto the server.

2. An initial handshake is made with the endpoint that is directly connected to the server.

3. MACsec/LinkSec.

4. IPsec.

The present disclosure also includes the ability to stop, or return to a secure position (depending on the type of machine), if and when the network connection is lost. This can be done by simple periodic heartbeat packets, detection of link loss, or other mechanisms.
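A minimal form of this failsafe, assuming a periodic local tick and an enter_safe_state() hook whose behavior depends on the machine type, might look like the following sketch:

#include <stdint.h>

#define HEARTBEAT_TIMEOUT_MS 50u /* illustrative threshold */

static void enter_safe_state(void); /* assumed hook: stop or park the machine */

/* Called periodically by the endpoint control logic. last_heartbeat_ms
 * is refreshed whenever a heartbeat packet arrives from the central
 * control server; unsigned arithmetic tolerates tick wraparound. */
static void check_connection(uint32_t now_ms, uint32_t last_heartbeat_ms)
{
    if ((uint32_t)(now_ms - last_heartbeat_ms) > HEARTBEAT_TIMEOUT_MS)
        enter_safe_state();
}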
The systems and devices described herein can reduce the computational power required on the device side by an order of magnitude, making the device side very simple. This is important because, as one of the highest priorities, industrial components are designed to be durable; they need to withstand vibration, heat, and other harsh environments with minimal maintenance over their service life.

The present disclosure can also be applied to Internet of Things (IoT) devices. As shown in FIG. 1, central control server 102 can transmit an RDMA message (or a message that is at least partially compliant with the RDMA protocol). Messages can be transmitted over a wireless network, such as a cellular network, a Wi-Fi network, or another wireless technology. The network interface controller on the endpoint device can receive messages from the wireless network.

This disclosure describes the use of the RDMA protocol. Among the RDMA protocols contemplated in this disclosure are the Internet Wide Area RDMA Protocol (iWARP), RDMA over Converged Ethernet (RoCE), and INFINIBAND™.

It should be appreciated that the examples given above are non-limiting examples provided for the purpose of illustrating certain principles and features, and they do not necessarily limit or constrain the potential embodiments of the concepts described herein. For example, various combinations of the features and components described herein can be utilized to implement various embodiments, including combinations of various implementations of the components described herein. Other implementations, functions, and details should be apparent from the context of this specification.

In Example 1, an aspect of an embodiment is directed to a control server that includes a central processor, implemented at least in hardware, to perform a control process on behalf of an endpoint device to identify a memory location for a direct memory access, and a network interface controller, implemented at least in hardware, to transmit a message to an endpoint of the endpoint device via a communication interface compliant with a remote direct memory access (RDMA) protocol, the message including the memory location for the direct memory access to the endpoint.

In Example 2, the subject matter of Example 1 further includes the processor identifying a steering tag value for the direct memory access of the endpoint device based on executing the control process, and wherein the memory location includes the steering tag value.

In Example 3, the subject matter of Example 1 or Example 2 may include an integrated switch that connects the network interface controller to the endpoint.

In Example 4, the subject matter of Example 1 or Example 2 or Example 3 can include the central processor identifying a routing address of the endpoint based on executing the control process, and the integrated switch routing the message to the endpoint based on the routing address.

In Example 5, the subject matter of any of Example 1 or Example 2 or Example 3 or Example 4 may further comprise the network interface controller comprising an RDMA controller to configure an RDMA message for transmission to the endpoint, the RDMA message comprising a direct memory access command and a memory location.

In Example 6, the subject matter of any of Example 1 or Example 2 or Example 3 or Example 4 or Example 5 may further include a steering tag library including a steering tag value
corresponding to a memory location of the endpoint, and wherein the processor performs a control process corresponding to the endpoint device to identify a steering tag corresponding to a memory location for the direct memory access of the endpoint.

In Example 7, an aspect of an embodiment relates to receiving, at a central control server, status information for an endpoint device from a communication interface compliant with a remote direct memory access (RDMA) protocol; performing, at the central control server, a simulation of the endpoint device based on the status information; identifying a memory location for a direct memory access of the endpoint device based on the simulation of the endpoint device; constructing an RDMA message including the memory location and a direct memory access command; and sending the RDMA message to the endpoint device through a communication interface compliant with the RDMA protocol.

In Example 8, the subject matter of Example 7 can also include identifying, based on the simulation, a steering tag value corresponding to a memory location for the direct memory access command in the endpoint device.

In Example 9, the subject matter of any of Examples 7 or 8 may further comprise identifying a machine address of the endpoint device, and wherein constructing the RDMA message comprises adding the machine address directed to the endpoint device to the RDMA message.

In Example 10, the subject matter of Example 7 can also include receiving a read response from the endpoint device via a communication interface that is compatible with the RDMA protocol.

In Example 11, an aspect of an embodiment relates to a computer program product tangibly embodied on a non-transitory computer readable medium, the computer program product comprising instructions operable, when executed, for: performing, at a central control server, an emulation of an endpoint device; identifying a memory location for a direct memory access of the endpoint device based on the emulation of the endpoint device; constructing a remote direct memory access (RDMA) message including the memory location and a direct memory access command; and sending the RDMA message to the endpoint device over a communication interface compliant with the RDMA protocol.

In Example 12, the subject matter of Example 11 can also include instructions further operable to identify, based on the emulation, a steering tag value corresponding to a memory location for the direct memory access command in the endpoint device.

In Example 13, the subject matter of Example 11 or Example 12 can further include instructions further operable to identify a machine address of the endpoint device, and wherein constructing the RDMA message comprises adding the machine address directed to the endpoint device to the RDMA message.

In Example 14, the subject matter of Example 11 can further include instructions further operable to receive a read response from the endpoint device over a communication interface compliant with the RDMA protocol.

In Example 15, an aspect of an embodiment relates to an endpoint device in communication with a central control server over a communication interface compliant with a remote direct memory access (RDMA) protocol. The endpoint device can include a memory-mapped register and a network interface controller implemented at least in hardware.
The network interface controller can be configured to receive an RDMA message from the central control server over the communication interface; identify, from the RDMA message, a memory location in the memory-mapped register for a direct memory access; identify, from the RDMA message, a command for the direct memory access; and directly access the memory location to satisfy the command.

In Example 16, the subject matter of Example 15 can include the RDMA message identifying a memory location in the memory-mapped register, and wherein the network interface controller is configured to directly access the memory location in the memory-mapped register.

In Example 17, the subject matter of Example 15 or Example 16 can include that the memory location of the message includes a steering tag value corresponding to a memory location in a memory of the endpoint device.

In Example 18, the subject matter of Example 15 or Example 16 or Example 17 can include the network interface controller including a hardwired steering tag value, the network interface controller configured to compare the memory location in the message with the hardwired steering tag value to identify the memory location in the memory.

In Example 19, the subject matter of Example 15 or Example 16 or Example 17 or Example 18 can include the network interface controller including at least a portion of an RDMA controller.

In Example 20, aspects of the embodiments relate to methods performed in an endpoint device. The method can include receiving, by a network interface controller, a message from a communication interface compliant with a remote direct memory access (RDMA) compatible protocol; identifying, by the network interface controller, a memory location for a direct memory access from the message; identifying, by the network interface controller, a command from the message; and performing, by the network interface controller, the direct memory access based on the command from the message.

In Example 21, the subject matter of Example 20 can also include the message including a steering tag value identifying a memory location of a memory of the endpoint device.

In Example 22, the subject matter of Example 20 can further include comparing, by the network interface controller, the steering tag value in the message to a steering tag value at the endpoint device, the steering tag value at the endpoint device corresponding to a memory location of the memory at the endpoint device.

In Example 23, the subject matter of Example 20 can also include identifying a machine address from the message and confirming that the machine address from the message matches the machine address of the endpoint device.

In Example 24, aspects of the embodiments are directed to a computer program product tangibly embodied on a non-transitory computer readable medium, the computer program product comprising instructions operable, when executed, for: receiving a message through a communication interface compliant with a remote direct memory access (RDMA) compatible protocol; identifying a memory location for a direct memory access from the message; identifying a command from the message; and executing the command from the message to perform the direct memory access.

In Example 25, the subject matter of Example 24 can further include the message including a steering tag value identifying a memory location of a memory of the endpoint device.

In Example 26, the subject matter of Example 24 can further include instructions further operable to compare the steering tag value in the message to a
steering tag value at the endpoint device, the steering tag value at the endpoint device corresponding to a memory location of the memory at the endpoint device.

In Example 27, aspects of the embodiments relate to an endpoint device in communication with a central control server over a communication interface compliant with a remote direct memory access (RDMA) protocol. The endpoint device can include a memory-mapped register unit and a network interface controller unit implemented at least in hardware. The network interface controller unit can be configured to receive an RDMA message from the central control server over the communication interface; identify, from the RDMA message, a memory location in the memory-mapped register unit for a direct memory access; identify, from the RDMA message, a command for the direct memory access; and directly access the memory location to satisfy the command.

In Example 28, aspects of an embodiment relate to an endpoint device in communication with a central control server over a communication interface compliant with a remote direct memory access (RDMA) protocol. The endpoint device can include a memory-mapped register and a network interface controller implemented at least in hardware. The network interface controller can be configured to receive an RDMA message from the central control server over the communication interface; identify, from the RDMA message, a memory location in the memory-mapped register for a direct memory access; identify, from the RDMA message, a command for the direct memory access; and directly access the memory location to satisfy the command. In some embodiments, the endpoint device does not include a microcontroller or network processor, but instead includes an rNIC or RNIC for parsing messages sent by the central control server over the RDMA protocol.

In Example 29, aspects of the embodiments relate to a system comprising a central control server that includes a central processor, implemented at least in hardware, to perform a control process on behalf of an endpoint device to identify a memory location for a direct memory access to the endpoint device, and a network interface controller, implemented at least in hardware, that transmits a message to an endpoint of the endpoint device through a communication interface compliant with the remote direct memory access (RDMA) protocol, the message including the memory location for the direct memory access of the endpoint. The system also includes one or more endpoint devices. Each endpoint device can include a memory-mapped register and a network interface controller implemented at least in hardware. The network interface controller can be configured to receive an RDMA message from the central control server over the communication interface; identify, from the RDMA message, a memory location in the memory-mapped register for a direct memory access; identify, from the RDMA message, a command for the direct memory access; and directly access the memory location to satisfy the command.
The endpoint device does not include a microcontroller or network processor, but instead includes an rNIC or RNIC for parsing messages sent by the central control server over the RDMA protocol.

Example 30 can include the subject matter of Example 29, wherein the endpoint device lacks one or both of a microcontroller or a network processor.

Example 31 may include the subject matter of Example 29 or Example 30, wherein the network interface controller includes an RDMA controller to configure an RDMA message for transmission to the endpoint device, the RDMA message including a direct memory access command and a memory location.

Example 32 may include the subject matter of any one of Example 29 or Example 30 or Example 31, wherein the network interface controller comprises a hardwired memory register address, and the network interface controller is configured to compare the memory register address with the steering tag offset to identify the memory register address in the memory.

Example 33 may include the subject matter of any one of Example 29 or Example 30 or Example 31 or Example 32, wherein the network interface controller comprises an RDMA network interface controller.

Example 34 may include the subject matter of any one of Example 29 or Example 30 or Example 31 or Example 32 or Example 33, wherein the endpoint device does not include a microcontroller or network processor, but instead includes an rNIC or RNIC for parsing messages sent by the central control server via the RDMA protocol.

The present disclosure has been described in terms of certain embodiments and generally associated methods, and variations and permutations of these embodiments and methods will be apparent to those skilled in the art. For example, the actions described herein can be performed in a different order than described and still achieve desirable results. As one example, the processes depicted in the figures do not necessarily require the particular order shown, or sequential order, to achieve the desired results. In some implementations, multitasking and parallel processing may be advantageous. In addition, other user interface layouts and features can be supported. Other variations are within the scope of the claims.

This description contains many specific implementation details, which should not be construed as limiting the scope of any invention or of what may be claimed. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations, and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or a variant of a sub-combination.

Similarly, although operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous.
Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.

Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. |
Embodiments of the invention belong to the field of integrated circuit structure manufacturing. In an example, an integrated circuit structure includes: a first conductive interconnect line in a first interlayer dielectric (ILD) layer over a substrate; a second conductive interconnect line in a second ILD layer over the first ILD layer; and a conductive via coupling the first conductive interconnect line and the second conductive interconnect line, the conductive via having a single nitrogen-free tantalum (Ta) barrier layer. In another example, a method of fabricating an integrated circuit structure includes forming a partial trench in an interlayer dielectric (ILD) layer, the ILD layer on an etch stop layer; etching a hanging via landing on the etch stop layer; and performing a punch-through etch through the etch stop layer to form trenches and via openings in the ILD layer and the etch stop layer. |
1. An integrated circuit structure comprising: a first conductive interconnect in a first interlayer dielectric (ILD) layer over a substrate; a second conductive interconnect in a second ILD layer over the first ILD layer; and a conductive via coupling the first conductive interconnect and the second conductive interconnect, the conductive via having a single nitrogen-free tantalum (Ta) barrier layer.

2. The integrated circuit structure of claim 1, wherein the single nitrogen-free tantalum (Ta) barrier layer has a thickness in the range of 1-5 nanometers.

3. The integrated circuit structure of claim 1 or 2, wherein the single nitrogen-free tantalum (Ta) barrier layer extends from the conductive via to the second conductive interconnect.

4. The integrated circuit structure of claim 3, further comprising: a conductive fill within the single nitrogen-free tantalum (Ta) barrier layer of the conductive via and the second conductive interconnect, the conductive fill comprising copper directly on the single nitrogen-free tantalum (Ta) barrier layer.

5. The integrated circuit structure of claim 1 or 2, wherein the single nitrogen-free tantalum (Ta) barrier layer is directly on a conductive fill of the first conductive interconnect, the conductive fill comprising copper or cobalt.

6. A method of fabricating an integrated circuit structure, the method comprising: forming a partial trench in an interlayer dielectric (ILD) layer, the ILD layer on an etch stop layer; etching a hanging via landing on the etch stop layer; and performing a punch-through etch through the etch stop layer to form trenches and via openings in the ILD layer and the etch stop layer.

7. The method of claim 6, wherein performing the punch-through etch extends the partial trench deeper into the ILD layer.

8. The method of claim 6 or 7, further comprising: forming a single nitrogen-free tantalum (Ta) barrier layer along surfaces of the trench and the via opening.

9. The method of claim 8, further comprising: forming a conductive fill on the single nitrogen-free tantalum (Ta) barrier layer, the conductive fill including copper directly on the single nitrogen-free tantalum (Ta) barrier layer.

10. The method of claim 9, further comprising: reducing a thickness of the single nitrogen-free tantalum (Ta) barrier layer prior to forming the conductive fill.

11. A computing device comprising: a board; and a component coupled to the board, the component including an integrated circuit structure comprising: a first conductive interconnect in a first interlayer dielectric (ILD) layer over a substrate; a second conductive interconnect in a second ILD layer over the first ILD layer; and a conductive via coupling the first conductive interconnect and the second conductive interconnect, the conductive via having a single nitrogen-free tantalum (Ta) barrier layer.

12. The computing device of claim 11, further comprising: a memory coupled to the board.

13. The computing device of claim 11 or 12, further comprising: a communication chip coupled to the board.

14. The computing device of claim 11 or 12, further comprising: a camera coupled to the board.

15.
The computing device of claim 11 or 12, wherein the component is a packaged integrated circuit die.

16. A computing device comprising: a board; and a component coupled to the board, the component comprising an integrated circuit structure fabricated according to a method comprising: forming a partial trench in an interlayer dielectric (ILD) layer, the ILD layer on an etch stop layer; etching a hanging via landing on the etch stop layer; and performing a punch-through etch through the etch stop layer to form trenches and via openings in the ILD layer and the etch stop layer.

17. The computing device of claim 16, further comprising: a memory coupled to the board.

18. The computing device of claim 16 or 17, further comprising: a communication chip coupled to the board.

19. The computing device of claim 16 or 17, further comprising: a camera coupled to the board.

20. The computing device of claim 16 or 17, wherein the component is a packaged integrated circuit die. |
Metal line and via barrier layers, and via profiles, for advanced integrated circuit structure fabrication
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 63/072,811, filed August 31, 2020, entitled "METAL LINE AND VIA BARRIER LAYERS FOR ADVANCED INTEGRATED CIRCUIT STRUCTURE FABRICATION," and claims the benefit of U.S. Provisional Application No. 63/072,826, entitled "VIA PROFILES FOR ADVANCED INTEGRATED CIRCUIT STRUCTURE FABRICATION," the entire contents of which are hereby incorporated by reference herein.

Technical field

Embodiments of the present disclosure are in the field of advanced integrated circuit structure fabrication and, in particular, the fabrication of integrated circuit structures at the 10 nanometer node and smaller, and the resulting structures.

Background

Scaling of features in integrated circuits has been a driving force behind an ever-growing semiconductor industry over the past few decades. Scaling to smaller and smaller features enables an increase in the density of functional units on the limited chip area of a semiconductor chip. For example, shrinking transistor size allows a greater number of memory or logic devices to be incorporated on a chip, facilitating the manufacture of products with increased capacity. The drive for ever-larger capacity, however, is not without problems. The need to optimize the performance of each device becomes increasingly important.

Variability in conventional and currently known fabrication processes may limit the possibility of extending them further into the 10 nm node or sub-10 nm node range. Therefore, the fabrication of the functional components needed for future technology nodes may require the introduction of new methods, the integration of new technologies, or the replacement of current manufacturing processes.

Description of the drawings

Figure 1A shows a cross-sectional view of a typical interconnect with barrier layers and a copper layer.

Figure 1B shows a cross-sectional view of typical copper and TaN/Ta barriers in a dual damascene interconnect.

Figure 2 shows a cross-sectional view of a structure formed using TaN/Ta deposited by PVD (left) and subsequent sputter etching to reduce the bottom barrier (right).

Figure 3 includes cross-sectional images of a structure formed using PVD-deposited Ta (left) and subsequent sputter etching to reduce the Ta thickness (right), in accordance with an embodiment of the present disclosure.

Figure 4 is a graph illustrating the reduction in Kelvin via resistance by approximately 30% with thinner barriers, in accordance with an embodiment of the present disclosure.

Figure 5A shows cross-sectional views representing various operations in a full trench plus full via process scheme.

Figure 5B shows cross-sectional views representing various operations in a partial trench plus hanging via plus punch-through (BT) etch process scheme, in accordance with an embodiment of the present disclosure.

Figure 6 is a schematic diagram of a pitch quartering method for fabricating trenches for interconnect structures, in accordance with an embodiment of the present disclosure.

Figure 7A shows a cross-sectional view of a metallization layer fabricated using a pitch quartering scheme, in accordance with an embodiment of the present disclosure.

Figure 7B illustrates a cross-sectional view of a metallization layer fabricated using a pitch halving scheme over a metallization layer fabricated using a pitch quartering scheme, in accordance with an embodiment of the present disclosure.

Figure 8A illustrates a cross-sectional view of an integrated circuit structure having a metallization layer with one metal line composition
over a metallization layer with a different metal line composition, in accordance with an embodiment of the present disclosure.

Figure 8B illustrates a cross-sectional view of an integrated circuit structure having a metallization layer with one metal line composition coupled to a metallization layer with a different metal line composition, in accordance with an embodiment of the present disclosure.

Figures 9A-9C illustrate cross-sectional views of individual interconnect lines having various arrangements of liners and conductive cap structures, in accordance with embodiments of the present disclosure.

Figure 10 illustrates a cross-sectional view of an integrated circuit structure having four metallization layers with one metal line composition and pitch over two metallization layers with a different metal line composition and a smaller pitch, in accordance with an embodiment of the present disclosure.

Figure 11A shows a plan view of a metallization layer and a corresponding cross-sectional view taken along the a-a' axis of the plan view, in accordance with an embodiment of the present disclosure.

Figure 11B shows a cross-sectional view of a line end or plug, in accordance with an embodiment of the present disclosure.

Figure 11C shows another cross-sectional view of a line end or plug, in accordance with an embodiment of the present disclosure.

Figures 12A-12F illustrate plan views and corresponding cross-sectional views representing various operations in a plug finishing scheme, in accordance with embodiments of the present disclosure.

Figure 13A shows a cross-sectional view of a conductive line plug having a seam therein, in accordance with an embodiment of the present disclosure.

Figure 13B shows a cross-sectional view of a stack of metallization layers including a conductive line plug at a lower metal line location, in accordance with an embodiment of the present disclosure.

Figure 14 illustrates a computing device, in accordance with one embodiment of the present disclosure.

Figure 15 illustrates an interposer that includes one or more embodiments of the present disclosure.

Figure 16 is an isometric view of a mobile computing platform employing an IC fabricated according to one or more processes described herein, or including one or more features described herein, in accordance with an embodiment of the present disclosure.

Figure 17 shows a cross-sectional view of a flip-chip mounted die, in accordance with an embodiment of the present disclosure.

Detailed description

Fabrication of advanced integrated circuit structures is described. In the following description, numerous specific details are set forth, such as specific integration and material regimes, in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known features, such as integrated circuit design layouts, are not described in detail so as not to unnecessarily obscure embodiments of the present disclosure. Furthermore, it is to be understood that the various embodiments shown in the figures are illustrative representations and are not necessarily drawn to scale.

The following detailed description is merely illustrative in nature and is not intended to limit the embodiments of the subject matter or the application and uses of such embodiments. As used herein, the word "exemplary" means "serving as an example, instance, or illustration."
Any implementation described herein as exemplary is not necessarily to be construed as preferred or advantageous over other implementations. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary, or the following detailed description.

This specification includes references to "one embodiment" or "an embodiment." The appearances of the phrases "in one embodiment" or "in an embodiment" are not necessarily referring to the same embodiment. Particular features, structures, or characteristics may be combined in any suitable manner consistent with this disclosure.

Terminology. The following paragraphs provide definitions or context for terms appearing in this disclosure, including the appended claims:

"Comprising." This term is open-ended. As used in the appended claims, this term does not exclude additional structures or acts.

"Configured to." Various units or components may be described or claimed as "configured to" perform one or more tasks. In this context, "configured to" is used to connote structure by indicating that the unit or component includes structure that performs these one or more tasks during operation. As such, a given unit or component can be said to be configured to perform a task even when the unit or component is not currently operational (e.g., is not turned on or active). Reciting that a unit, circuit, or component is "configured to" perform one or more tasks is expressly not intended to invoke paragraph six of 35 U.S.C. §112 for that unit or component.

"First," "second," etc. As used herein, these terms are used as labels for the nouns that follow them and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.).

"Coupled." The following description refers to elements or nodes or features being "coupled" together. As used herein, unless expressly stated otherwise, "coupled" means that one element or node or feature is directly or indirectly joined to (or directly or indirectly communicates with) another element or node or feature, and not necessarily mechanically.

In addition, certain terminology may also be used in the following description for the purpose of reference only and is not intended to be limiting. For example, terms such as "upper," "lower," "above," and "below" refer to directions in the drawings to which reference is made. Terms such as "front," "back," "rear," "side," "outboard," and "inboard" describe the orientation or location or both of portions of a component within a consistent but arbitrary frame of reference, which is made clear by reference to the text and the associated drawings describing the component under discussion. Such terminology may include the words specifically mentioned above, derivatives thereof, and words of similar import.

"Inhibit." As used herein, "inhibit" is used to describe reducing or minimizing an effect. When a component or feature is described as inhibiting an action, motion, or condition, it may completely prevent the result or outcome or future state. In addition, "inhibit" can also refer to a reduction or lessening of an outcome, property, or effect that might otherwise occur. Accordingly, when a component, element, or feature is referred to as inhibiting a result or condition, it need not completely prevent or eliminate that result or condition.

Embodiments described herein may be directed to front-end-of-line (FEOL) semiconductor processing and structures.
FEOL is the first portion of integrated circuit (IC) fabrication, in which individual devices (e.g., transistors, capacitors, resistors, etc.) are patterned in a semiconductor substrate or layer. FEOL generally covers everything up to, but not including, the deposition of metal interconnect layers. Following the final FEOL operation, the result is typically a wafer with isolated transistors (e.g., without any wires).

Embodiments described herein may be directed to back-end-of-line (BEOL) semiconductor processing and structures. BEOL is the second portion of IC fabrication, in which the individual devices (e.g., transistors, capacitors, resistors, etc.) are interconnected with wiring on the wafer (e.g., one or more metallization layers). BEOL includes contacts, insulating layers (dielectrics), metal levels, and bonding sites for chip-to-package connections. In the BEOL portion of the fabrication stage, contacts (pads), interconnect wires, vias, and dielectric structures are formed. For modern IC processes, more than 10 metal layers may be added in the BEOL.

Embodiments described below may be applicable to FEOL processing and structures, BEOL processing and structures, or both FEOL and BEOL processing and structures. In particular, although an exemplary processing scheme may be illustrated using a FEOL processing scenario, such approaches may also be applicable to BEOL processing. Likewise, although an exemplary processing scheme may be illustrated using a BEOL processing scenario, such approaches may also be applicable to FEOL processing.

It is to be understood that FEOL is the technology driver for a given process. In other embodiments, FEOL considerations are driven by BEOL 10 nanometer or sub-10 nanometer processing requirements. For example, material selection and layouts for FEOL layers and devices may need to accommodate BEOL processing. In one such embodiment, material selection and gate stack architectures are selected to accommodate high-density metallization of the BEOL layers, e.g., to reduce fringe capacitance in transistor structures formed in the FEOL layers but coupled together by the high-density metallization of the BEOL layers.

Back-end-of-line (BEOL) layers of integrated circuits commonly include conductive microelectronic structures, known in the art as vias, to electrically connect metal lines or other interconnects above the vias to metal lines or other interconnects below the vias. Vias can be formed by a lithographic process. Typically, a photoresist layer may be spin coated over a dielectric layer, the photoresist layer may be exposed to patterned actinic radiation through a patterned mask, and the exposed layer may then be developed to form an opening in the photoresist layer. Next, an opening for the via may be etched in the dielectric layer by using the opening in the photoresist layer as an etch mask. This opening is referred to as a via opening. Finally, the via opening may be filled with one or more metals or other conductive materials to form the via.

For at least some types of integrated circuits (e.g., advanced microprocessors, chipset components, graphics chips, etc.), the sizes and spacing of vias have progressively decreased, and it is expected that the sizes and spacing of vias will continue to decrease in the future. Patterning very small vias at very small pitches by such lithographic processes presents several challenges.
One such challenge is that the overlay between the vias and the overlying interconnects, and the overlay between the vias and the underlying landing interconnects, generally need to be controlled to high tolerances on the order of a quarter of the via pitch. As via pitches scale ever smaller over time, the overlay tolerances tend to scale with them at an even greater rate than lithographic equipment is able to keep up.

Another such challenge is that the critical dimensions of via openings generally tend to scale faster than the resolution capabilities of lithographic scanners. Shrink technologies exist to reduce the critical dimensions of via openings. However, the shrink amount tends to be limited by the minimum via pitch, as well as by the ability of the shrink process to be sufficiently optical proximity correction (OPC) neutral without significantly compromising line width roughness (LWR) or critical dimension uniformity (CDU), or both. Yet another such challenge is that the LWR or CDU characteristics of photoresists, or both, generally need to improve as the critical dimensions of via openings decrease, in order to maintain the same overall fraction of the critical dimension budget.

The above factors are also relevant to the placement and scaling of the non-conductive spaces or interruptions (referred to as "plugs," "dielectric plugs," or "metal line ends") among the metal lines of back-end-of-line (BEOL) metal interconnect structures. Accordingly, improvements are needed in the area of back-end metallization manufacturing technologies for fabricating metal lines, metal vias, and dielectric plugs.

In a first aspect, a process for implementing a thin and nitrogen-free tantalum (Ta) barrier for via resistance reduction is described.

To provide context, with interconnect scaling in the back end of line for higher density and better performance, RC and via resistance come into focus, since they affect signal delays and cause performance penalties. Reducing via resistance while maintaining shorting margin can help performance without forcing design rule changes.

One or more embodiments described herein address via resistance reduction by scaling the barrier thickness, and also by removing the nitride component (TaN: approximately 200 microohm-cm resistivity) of the bilayer barrier while integrating with the process stack, providing a solution without adding reliability or yield risk.

To provide further context, standard process solutions include a bilayer (TaN plus Ta) barrier to prevent copper (Cu) from diffusing into the interlayer dielectric and to provide reliability for microprocessors. To reduce the thickness, an etch operation is in some cases added after deposition of the barrier film. However, to prevent Cu and TaN interaction, a very thin final Ta step is usually added. This bilayer barrier process has limitations for scaling, because two films need to be deposited as the barrier and extra care is needed to prevent TaN and Cu from interacting, since Cu coalesces on TaN.

For reference, Figure 1A shows a cross-sectional view of a typical interconnect with barrier layers and a copper layer. Referring to Figure 1A, an integrated circuit structure 100 includes a lower metallization layer 102 and an upper metallization layer 106, where the latter may include an etch stop layer 104. Lower metallization layer 102 includes interconnect lines or trenches 108 including a copper fill 112 on a Ta layer 114 on a TaN layer 116.
The upper metallization layer 106 includes interconnect lines or trenches 120 and interconnect lines or trenches with corresponding vias (collectively 122). Both 120 and 122 include a copper fill 112 on a Ta layer 114 on a TaN layer 116. It is to be appreciated that the line direction of the upper metallization layer 106 may be orthogonal to the line direction of the lower metallization layer 102, as depicted.

In accordance with embodiments of the present disclosure, a thin Ta-only barrier layer is fabricated for the Cu layers of the performance-critical interconnects closer to the transistors. Thinner Ta, and elimination of TaN, can reduce the via resistance of these critical interconnect layers.

Advantages of implementing the embodiments described herein may include, but are not limited to: (1) a single barrier layer with an Ar etch for controlling the via bottom thickness: switching from a bilayer (TaN+Ta) to a single layer (Ta) enables the barrier film to become thinner, and an additional argon etch can be used to target a minimum bottom thickness that meets reliability goals; (2) lower via resistance: a thinner barrier reduces via resistance by up to 30% and reduces chain resistance by up to about 10%. Detection may include the absence of nitrogen in the barrier layer as detected by TEM: cross-sections of interconnect features with compositional analysis can indicate the absence of nitrogen in the features.

To provide further context, in BEOL interconnects, PVD TaN/Ta barriers are often used, and these can be thicker at the bottom of the via. For example, Figure 1B shows a cross-sectional view of typical copper and TaN/Ta barriers in a dual damascene interconnect. Referring to Figure 1B, the integrated circuit structure 150 includes a lower metallization layer 152 and an upper metallization layer 156, where the latter may include an etch stop layer 154. Lower metallization layer 152 includes interconnect lines or trenches 158 including a copper fill 162 on a Ta layer 164 on a TaN layer 166. The upper metallization layer 156 includes interconnect lines or trenches 172A with corresponding vias 172B (collectively 172). Interconnect lines or trenches 172 with corresponding vias include a copper fill 162 on a Ta layer 164 on a TaN layer 166. As depicted, the barrier at the bottom of the via portion of 172 may be relatively thick compared to other locations of the film and may result in increased via resistance. It is to be appreciated that the line direction of the upper metallization layer 156 may be orthogonal to the line direction of the lower metallization layer 152, as depicted.

The via resistance of the interconnect is the sum of the resistances of the Cu and the corresponding TaN/Ta barrier films. Since the resistivity of the barrier films can be several orders of magnitude higher than that of copper, the via resistance is usually determined by the barrier film thickness, where via resistance = (TaN/Ta resistivity * barrier thickness) / area at the bottom of the via. According to one or more embodiments of the present disclosure, in order to obtain an improvement in via resistance, a combination of the following changes may be implemented: (1) reducing the barrier thickness; (2) eliminating TaN; and/or (3) increasing the incident energy used to deposit the Ta so as to form a stable bond with the ILD.
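To make the expression concrete, consider illustrative numbers (assumed for this example only, not measured values): a barrier resistivity of roughly 200 microohm-cm, the figure quoted above for TaN, a via bottom area of (20 nm)^2, and a bottom barrier thickness of 4 nm versus 2 nm:

R_{\text{barrier}} = \frac{\rho\, t}{A} = \frac{(2\times10^{-6}\,\Omega\cdot\text{m})(4\times10^{-9}\,\text{m})}{(20\times10^{-9}\,\text{m})^{2}} = 20\,\Omega

R_{\text{barrier}}' = \frac{(2\times10^{-6}\,\Omega\cdot\text{m})(2\times10^{-9}\,\text{m})}{(20\times10^{-9}\,\text{m})^{2}} = 10\,\Omega

Halving the bottom thickness thus halves the barrier term; since the Cu contribution is comparatively small but nonzero, a roughly 2x thinner barrier is consistent with the approximately 30% total via resistance reduction reported herein.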
Such Cu coalescence may prevent further thinning of the barrier, or may require repeated Ta deposition after the etch is performed. For example, FIG. 2 shows cross-sectional views of a structure formed using TaN/Ta deposited by PVD (left) and subsequent sputter etching to reduce the bottom barrier (right).

Referring to the left side of FIG. 2, a conventional starting structure 200 includes trenches 204 in an interlayer dielectric (ILD) layer 202. A TaN layer 206 lines the trenches 204. Ta layer 208 is on TaN layer 206. Referring to the right side of FIG. 2, the structure 200 is subjected to an etching process, such as an Ar process, to form a modified structure 250 having an etched Ta layer 208A. Etching can reduce the thickness of the Ta layer 208 at the bottom of the via to, for example, a thickness 208D that can reduce the via resistance at that location. However, such an etching process may result in thickening (e.g., accumulation from local re-sputtering), e.g., at location 208B, or may result in complete removal, e.g., at location 208C. In some examples, TaN barrier layer 206 is also modified by the etching to form TaN layer 206A, which may include an eroded region 206B. These consequences of etching may hinder scaling and may limit the degree of resistance reduction achievable by the etch.

According to one or more embodiments, if only a single layer of Ta is used as the barrier instead of the bilayer TaN/Ta combination, the thickness of the barrier can be further reduced. In one embodiment, a process involves depositing Ta at higher kinetic energy to deposit directly on the ILD while still meeting reliability and yield criteria. This enables further thinning of the Ta-only barrier, resulting in via resistance benefits.

As an example, FIG. 3 includes cross-sectional images of a structure formed using Ta deposited by PVD (left) and subsequent sputter etching to reduce the Ta thickness (right), in accordance with an embodiment of the present disclosure.

Referring to the left side of FIG. 3, a bright-field image A and a dark-field image B are provided for an integrated circuit structure 300 including interconnects/vias having copper fill 304 on a Ta-only barrier layer 302 in an ILD layer 306. In one embodiment, the Ta-only barrier layer 302 is deposited by physical vapor deposition (PVD). It should be understood that the Ta-only barrier layer 302 of the integrated circuit structure 300 may be used as deposited. However, in another embodiment, the Ta-only barrier layer 302 may be thinned. For example, referring to the right side of FIG. 3, a bright-field image A and a dark-field image B are provided for an integrated circuit structure 350 including interconnects/vias having copper fill 354 on a thinned Ta-only barrier layer 352 in an ILD layer 356. In one embodiment, the Ta-only barrier layer 352 is deposited by physical vapor deposition (PVD) and then thinned using a sputter etch such as an argon sputter etch.

Referring again to FIG. 3, in accordance with an embodiment of the present disclosure, an integrated circuit structure 300 or 350 includes a first conductive interconnect 310 or 360 in a first interlayer dielectric (ILD) layer 312 or 362 over a substrate. A second conductive interconnect line 308 or 358 is in a second ILD layer 306 or 356 over the first ILD layer 312 or 362. A conductive via 309 or 359 couples the first conductive interconnect 310 or 360 and the second conductive interconnect 308 or 358.
In an embodiment, the conductive via 309 or 359 has a single nitrogen-free tantalum (Ta) barrier layer 302 or 352. In an embodiment, the single nitrogen-free tantalum (Ta) barrier layer 302 or 352 has a thickness in the range of 1-5 nanometers. In an embodiment, the single nitrogen-free tantalum (Ta) barrier layer 302 or 352 extends from the conductive via 309 or 359 to the second conductive interconnect 308 or 358, as depicted.

In an embodiment, the integrated circuit structure 300 or 350 further includes a conductive fill 304 or 354 on the single nitrogen-free tantalum (Ta) barrier layer 302 or 352 within the conductive via 309 or 359 and the second conductive interconnect line 308 or 358. In one such embodiment, the conductive fill 304 or 354 includes copper directly on the single nitrogen-free tantalum (Ta) barrier layer 302 or 352.

In an embodiment, the single nitrogen-free tantalum (Ta) barrier layer 302 or 352 is directly on the conductive fill of the first conductive interconnect 310 or 360. In one embodiment, the conductive fill of the first conductive interconnect 310 or 360 is a copper fill or a cobalt fill.

By eliminating TaN and thereby successfully reducing the thickness of the Ta-only barrier at the bottom of the via by a factor of about two, the improved process can yield about a 30% reduction in via resistance compared to the standard process. FIG. 4 is a graph 400 showing an approximately 30% reduction in Kelvin via resistance with a thinner Ta-only barrier layer (Sample B, relative to Samples A, C, and D), in accordance with an embodiment of the present disclosure.

In a second aspect, a partial-trench, hanging-via, final-trench process flow for a pitch division flow is described.

To provide context, with aggressive scaling, copper (Cu) gap fill is becoming increasingly challenging in dual damascene flows. While a full-trench, full-via process is simpler in terms of patterning, it presents a significant challenge for gap fill, since Cu gap fill must occur around nearly 90-degree corners. In an embodiment, the use of a partial-trench, hanging-via, final-trench process yields low-defect to defect-free gap fill. Previous solutions have used either a full-trench, full-via flow, or very shallow first trenches followed by hanging vias and most of the remaining trench. Such approaches lead to defects in patterning or gap filling.

Embodiments disclosed herein can be implemented to provide a low-cost and low-risk method for robust patterning and gap fill. Detectability can include the presence of tapered vias, which can enable robust gap fill and which can be observed using reverse engineering (e.g., SEM, TEM).

In an embodiment, the first trench is patterned to more than 75% of its target depth. This minimizes additional defects from the subsequent via cycle. Subsequently, vias are etched to stop selectively on the etch stop (ES) layer. Finally, a final operation, referred to as a punch-through etch, selectively etches more of the etch stop layer relative to the trench ILD material to provide additional process window as well as a robust profile for Cu gap fill. Different ES schemes can be used: a dielectric etch stop scheme or a metal oxide etch stop scheme.

As a comparative example, FIG. 5A shows cross-sectional views representing various operations in a full-trench plus full-via process scheme. Referring to part (a) of FIG. 5A, an interlayer dielectric (ILD) layer 504 is formed over an etch stop (ES) layer 502. A hard mask (HM) layer 506 is formed over the ILD layer 504.
Etching is performed to form full trenches 508 through the hard mask layer 506 and the ILD layer 504. Referring to part (b) of FIG. 5A, full vias 510 are etched, forming a patterned ILD layer 504A and a patterned etch stop layer 502A.

In contrast to FIG. 5A, FIG. 5B shows cross-sectional views representing various operations in a partial-trench plus hanging-via plus breakthrough (BT) etch process scheme, in accordance with an embodiment of the present disclosure. Referring to part (a) of FIG. 5B, an interlayer dielectric (ILD) layer 554 is formed over an etch stop (ES) layer 552. A first hard mask (HM1) layer 556 is formed over the ILD layer 554. A second hard mask (HM2) layer 557 is formed over the first hard mask layer 556. Etching is performed to form partial trenches 558 through the second hard mask layer 557, the first hard mask layer 556, and the ILD layer 554. The target trench depth 558A is shown in dashed lines. Referring to part (b) of FIG. 5B, a hanging-via etch is performed to land on the etch stop layer 552, forming a patterned ILD layer 554A with vias 560. Referring to part (c) of FIG. 5B, an etch is performed to extend the vias 560 into the etch stop layer 552, forming a patterned etch stop layer 552A, and to form trench 558B and via 560B in the twice-patterned ILD layer 554B.

Referring again to FIG. 5B, in accordance with an embodiment of the present disclosure, a method of fabricating an integrated circuit structure includes forming a partial trench 558 in an interlayer dielectric (ILD) layer 554 over an etch stop layer 552. The method also includes etching hanging vias 560 that land on the etch stop layer 552. The method also includes performing a breakthrough etch through the etch stop layer 552 to form trench 558B and via 560B openings in the ILD layer 554B and the etch stop layer 552A. In one embodiment, the breakthrough etch also extends the partial trench 558A deeper into the ILD layer 554B to form the trench 558B.

In an embodiment, the method further includes forming a single nitrogen-free tantalum (Ta) barrier layer along the surfaces of the trench 558B and via 560B openings. In one such embodiment, the method further includes forming a conductive fill on the single nitrogen-free tantalum (Ta) barrier layer. In certain such embodiments, the conductive fill includes copper directly on the single nitrogen-free tantalum (Ta) barrier layer. In an embodiment, the method further includes reducing the thickness of the single nitrogen-free tantalum (Ta) barrier layer prior to forming the conductive fill, such as described above.

In another aspect, a pitch quartering approach is implemented to pattern trenches in a dielectric layer for forming BEOL interconnect structures. In accordance with embodiments of the present disclosure, pitch division is applied to fabricate metal lines in a BEOL fabrication scheme. Embodiments may enable continued scaling of metal layer pitch beyond the resolution capability of state-of-the-art lithography equipment.

FIG. 6 is a schematic diagram of a pitch quartering approach 600 for fabricating trenches for interconnect structures, in accordance with an embodiment of the present disclosure.

Referring to FIG. 6, at operation (a), backbone features 602 are formed using direct lithography. For example, a photoresist layer or stack can be patterned and its pattern transferred into a hardmask material to ultimately form the backbone features 602.
The photoresist layer or stack used to form the backbone features 602 can be patterned using standard photolithographic processing techniques, such as 193 nm immersion lithography. First spacer features 604 are then formed adjacent to the sidewalls of the backbone features 602.

At operation (b), the backbone features 602 are removed to leave only the first spacer features 604. At this stage, the first spacer features 604 are effectively a half-pitch mask, e.g., representing a pitch halving process. The first spacer features 604 can be used directly in the pitch quartering process, or the pattern of the first spacer features 604 can first be transferred into a new hardmask material; the latter approach is described here.

At operation (c), the pattern of the first spacer features 604 is transferred into a new hardmask material to form first spacer features 604'. Second spacer features 606 are then formed adjacent to the sidewalls of the first spacer features 604'.

At operation (d), the first spacer features 604' are removed to leave only the second spacer features 606. At this stage, the second spacer features 606 are effectively a quarter-pitch mask, e.g., representing a pitch quartering process.

At operation (e), the second spacer features 606 are used as a mask to pattern a plurality of trenches 608 in a dielectric or hardmask layer. The trenches may ultimately be filled with conductive material to form conductive interconnects in the metallization layers of an integrated circuit. Trenches 608 marked "B" correspond to the backbone features 602. Trenches 608 marked "S" correspond to the first spacer features 604 or 604'. Trenches 608 marked "C" correspond to the complementary regions 607 between the backbone features 602.

It will be appreciated that, since each of the trenches 608 of FIG. 6 has a patterning origin corresponding to one of the backbone features 602, the first spacer features 604 or 604', or the complementary regions 607 of FIG. 6, differences in the width and/or pitch of these features may appear as artifacts of the pitch quartering process in the resulting conductive interconnects in the metallization layers of an integrated circuit. As an example, FIG. 7A shows a cross-sectional view of a metallization layer fabricated using a pitch quartering scheme, in accordance with an embodiment of the present disclosure.

Referring to FIG. 7A, an integrated circuit structure 700 includes an interlayer dielectric (ILD) layer 704 over a substrate 702. A plurality of conductive interconnect lines 706 is in the ILD layer 704, and respective ones of the plurality of conductive interconnect lines 706 are spaced apart from one another by portions of the ILD layer 704. Each of the plurality of conductive interconnect lines 706 includes a conductive barrier layer 708 and a conductive fill material 710.

Referring to FIGS. 6 and 7A, conductive interconnect lines 706B are formed in trenches having a pattern derived from the backbone features 602. Conductive interconnect lines 706S are formed in trenches having a pattern derived from the first spacer features 604 or 604'. Conductive interconnect lines 706C are formed in trenches having a pattern derived from the complementary regions 607 between the backbone features 602.

Referring again to FIG. 7A, in an embodiment, the plurality of conductive interconnect lines 706 includes a first interconnect line 706B having a width (W1).
A second interconnect line 706S is directly adjacent to the first interconnect line 706B, the second interconnect line 706S having a width (W2) different from the width (W1) of the first interconnect line 706B. A third interconnect line 706C is directly adjacent to the second interconnect line 706S, the third interconnect line 706C having a width (W3). A fourth interconnect line (a second 706S) is directly adjacent to the third interconnect line 706C, the fourth interconnect line having the same width (W2) as the second interconnect line 706S. A fifth interconnect line (a second 706B) is directly adjacent to the fourth interconnect line (the second 706S), the fifth interconnect line having the same width (W1) as the first interconnect line 706B.

In an embodiment, the width (W3) of the third interconnect line 706C is different from the width (W1) of the first interconnect line 706B. In one such embodiment, the width (W3) of the third interconnect line 706C is different from the width (W2) of the second interconnect line 706S. In another such embodiment, the width (W3) of the third interconnect line 706C is the same as the width (W2) of the second interconnect line 706S. In another embodiment, the width (W3) of the third interconnect line 706C is the same as the width (W1) of the first interconnect line 706B.

In an embodiment, the pitch (P1) between the first interconnect line 706B and the third interconnect line 706C is the same as the pitch (P2) between the second interconnect line 706S and the fourth interconnect line (the second 706S). In another embodiment, the pitch (P1) between the first interconnect line 706B and the third interconnect line 706C is different from the pitch (P2) between the second interconnect line 706S and the fourth interconnect line (the second 706S).

Referring again to FIG. 7A, in another embodiment, the plurality of conductive interconnect lines 706 includes a first interconnect line 706B having a width (W1). A second interconnect line 706S is directly adjacent to the first interconnect line 706B, the second interconnect line 706S having a width (W2). A third interconnect line 706C is directly adjacent to the second interconnect line 706S, the third interconnect line 706C having a width (W3) different from the width (W1) of the first interconnect line 706B. A fourth interconnect line (a second 706S) is directly adjacent to the third interconnect line 706C, the fourth interconnect line having the same width (W2) as the second interconnect line 706S. A fifth interconnect line (a second 706B) is directly adjacent to the fourth interconnect line (the second 706S), the fifth interconnect line having the same width (W1) as the first interconnect line 706B.

In an embodiment, the width (W2) of the second interconnect line 706S is different from the width (W1) of the first interconnect line 706B. In one such embodiment, the width (W3) of the third interconnect line 706C is different from the width (W2) of the second interconnect line 706S.
In another such embodiment, the width (W3) of the third interconnect line 706C is the same as the width (W2) of the second interconnect line 706S.

In an embodiment, the width (W2) of the second interconnect line 706S is the same as the width (W1) of the first interconnect line 706B. In an embodiment, the pitch (P1) between the first interconnect line 706B and the third interconnect line 706C is the same as the pitch (P2) between the second interconnect line 706S and the fourth interconnect line (the second 706S). In another embodiment, the pitch (P1) between the first interconnect line 706B and the third interconnect line 706C is different from the pitch (P2) between the second interconnect line 706S and the fourth interconnect line (the second 706S).

FIG. 7B illustrates a cross-sectional view of a metallization layer fabricated using a pitch halving scheme above a metallization layer fabricated using a pitch quartering scheme, in accordance with an embodiment of the present disclosure.

Referring to FIG. 7B, an integrated circuit structure 750 includes a first interlayer dielectric (ILD) layer 754 over a substrate 752. A first plurality of conductive interconnect lines 756 is in the first ILD layer 754, and respective ones of the first plurality of conductive interconnect lines 756 are spaced apart from one another by portions of the first ILD layer 754. Each of the plurality of conductive interconnect lines 756 includes a conductive barrier layer 758 and a conductive fill material 760. The integrated circuit structure 750 also includes a second interlayer dielectric (ILD) layer 774 over the substrate 752. A second plurality of conductive interconnect lines 776 is in the second ILD layer 774, and respective ones of the second plurality of conductive interconnect lines 776 are spaced apart from one another by portions of the second ILD layer 774. Each of the plurality of conductive interconnect lines 776 includes a conductive barrier layer 778 and a conductive fill material 780.

Referring again to FIG. 7B, in accordance with an embodiment of the present disclosure, a method of fabricating an integrated circuit structure includes forming a first plurality of conductive interconnect lines 756 in a first interlayer dielectric (ILD) layer 754 over a substrate 752, the lines being spaced apart by the first ILD layer 754. The first plurality of conductive interconnect lines 756 is formed using a spacer-based pitch quartering process, such as the approach described in connection with operations (a)-(e) of FIG. 6. A second plurality of conductive interconnect lines 776 is then formed in a second ILD layer 774 over the first ILD layer 754, spaced apart by the second ILD layer 774. The second plurality of conductive interconnect lines 776 is formed using a spacer-based pitch halving process, such as the approach described in connection with operations (a) and (b) of FIG. 6.

In an embodiment, the first plurality of conductive interconnect lines 756 has a pitch (P1) between immediately adjacent lines of less than 40 nanometers. The second plurality of conductive interconnect lines 776 has a pitch (P2) between immediately adjacent lines of 44 nanometers or greater.
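As a brief numeric sketch of how such pitches can arise from spacer-based pitch division (the starting single-exposure pitches below are hypothetical assumptions for illustration, not values from this disclosure):

```latex
% Spacer-based pitch division halves the lithographically printed
% backbone pitch P_0 once per division step:
\[
  P_{\mathrm{half}} = \frac{P_0}{2}, \qquad
  P_{\mathrm{quarter}} = \frac{P_0}{4}
\]
% Hypothetical single-exposure pitches:
%   P_0 = 128 nm  ->  P_quarter = 32 nm  (< 40 nm, cf. lines 756)
%   P_0 =  88 nm  ->  P_half    = 44 nm  (>= 44 nm, cf. lines 776)
```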
In an embodiment, the spacer-based pitch quartering process and the spacer-based pitch halving process are based on a 193 nm immersion lithography process.

In an embodiment, each conductive interconnect line of the first plurality of conductive interconnect lines 756 includes a first conductive barrier liner 758 and a first conductive fill material 760. Each conductive interconnect line of the second plurality of conductive interconnect lines 776 includes a second conductive barrier liner 778 and a second conductive fill material 780. In one such embodiment, the composition of the first conductive fill material 760 is different from the composition of the second conductive fill material 780. In another embodiment, the composition of the first conductive fill material 760 is the same as the composition of the second conductive fill material 780. In an embodiment, the first conductive barrier liner 758 and/or the second conductive barrier liner 778 is a single nitrogen-free tantalum (Ta) barrier layer.

Although not depicted, in an embodiment, the method further includes forming a third plurality of conductive interconnect lines in a third ILD layer over the second ILD layer 774, spaced apart by the third ILD layer. The third plurality of conductive interconnect lines is formed without pitch division.

Although not depicted, in another embodiment, the method further includes, prior to forming the second plurality of conductive interconnect lines 776, forming a third plurality of conductive interconnect lines in a third ILD layer over the first ILD layer 754, spaced apart by the third ILD layer. The third plurality of conductive interconnect lines is formed using a spacer-based pitch quartering process. In one such embodiment, after forming the second plurality of conductive interconnect lines 776, a fourth plurality of conductive interconnect lines is formed in a fourth ILD layer over the second ILD layer 774, spaced apart by the fourth ILD layer. The fourth plurality of conductive interconnect lines is formed using a spacer-based pitch halving process. In an embodiment, the method further includes forming a fifth plurality of conductive interconnect lines in a fifth ILD layer over the fourth ILD layer, spaced apart by the fifth ILD layer, the fifth plurality of conductive interconnect lines being formed using a spacer-based pitch halving process. A sixth plurality of conductive interconnect lines is then formed in a sixth ILD layer over the fifth ILD layer, spaced apart by the sixth ILD layer, using a spacer-based pitch halving process. A seventh plurality of conductive interconnect lines, spaced apart by a seventh ILD layer, is then formed in the seventh ILD layer over the sixth ILD layer. The seventh plurality of conductive interconnect lines is formed without pitch division.

In another aspect, the metal line composition varies between metallization layers. Such an arrangement may be referred to as heterogeneous metallization layers. In an embodiment, copper is used as the conductive fill material for relatively large interconnect lines, while cobalt is used as the conductive fill material for relatively small interconnect lines. Smaller lines with cobalt as the fill material can provide reduced electromigration while maintaining low resistivity.
For smaller interconnects, the use of cobalt instead of copper can address the copper scaling problem, in which the conductive barrier consumes an increasingly large fraction of the interconnect volume and reduces the amount of copper, substantially eroding the advantages normally associated with copper interconnects (a rough numeric sketch of this effect is provided following the description of FIG. 8A below).

In a first example, FIG. 8A shows a cross-sectional view of an integrated circuit structure having a metallization layer of one metal line composition above a metallization layer of a different metal line composition, in accordance with an embodiment of the present disclosure.

Referring to FIG. 8A, an integrated circuit structure 800 includes a first plurality of conductive interconnect lines 806 in a first interlayer dielectric (ILD) layer 804 over a substrate 802, the lines being spaced apart by the first ILD layer 804. One of the conductive interconnect lines 806A is shown with an underlying via 807. Each conductive interconnect line of the first plurality of conductive interconnect lines 806 includes a first conductive barrier material 808 along the sidewalls and bottom of a first conductive fill material 810.

A second plurality of conductive interconnect lines 816 is in a second ILD layer 814 over the first ILD layer 804, spaced apart by the second ILD layer 814. One of the conductive interconnect lines 816A is shown with an underlying via 817. Each conductive interconnect line of the second plurality of conductive interconnect lines 816 includes a second conductive barrier material 818 along the sidewalls and bottom of a second conductive fill material 820. The composition of the second conductive fill material 820 is different from the composition of the first conductive fill material 810. In an embodiment, the second conductive barrier material 818 is a single nitrogen-free tantalum (Ta) barrier layer. In an embodiment, interconnect line 816A and underlying via 817 are formed using a partial-trench, hanging-via, final-trench process flow.

In an embodiment, the second conductive fill material 820 consists essentially of copper, and the first conductive fill material 810 consists essentially of cobalt. In one such embodiment, the composition of the first conductive barrier material 808 is different from the composition of the second conductive barrier material 818. In another such embodiment, the composition of the first conductive barrier material 808 is the same as the composition of the second conductive barrier material 818.

In an embodiment, the first conductive fill material 810 includes copper having a first concentration of dopant impurity atoms, and the second conductive fill material 820 includes copper having a second concentration of the dopant impurity atoms, the second concentration being less than the first concentration. In one such embodiment, the dopant impurity atoms are selected from the group consisting of aluminum (Al) and manganese (Mn). In an embodiment, the first conductive barrier material 808 and the second conductive barrier material 818 have the same composition. In another embodiment, the first conductive barrier material 808 and the second conductive barrier material 818 have different compositions.

Referring again to FIG. 8A, the second ILD layer 814 is on an etch stop layer 822. Conductive via 817 is in the second ILD layer 814 and in an opening of the etch stop layer 822. In an embodiment, the first and second ILD layers 804 and 814 include silicon, carbon, and oxygen, and the etch stop layer 822 includes silicon and nitrogen.
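The copper scaling problem referenced above can be illustrated with a minimal sketch; the line widths, aspect ratio, and 2 nanometer barrier thickness below are hypothetical assumptions, not values from this disclosure:

```python
# Minimal sketch: fraction of a line's cross-section left for copper
# when a fixed-thickness barrier lines the sidewalls and bottom.
# All dimensions are illustrative assumptions (nanometers).

def copper_area_fraction(width_nm: float, height_nm: float, barrier_nm: float) -> float:
    """Return the fraction of the trench cross-section occupied by copper,
    assuming the barrier covers both sidewalls and the bottom."""
    cu_width = width_nm - 2.0 * barrier_nm
    cu_height = height_nm - barrier_nm
    if cu_width <= 0 or cu_height <= 0:
        return 0.0  # the barrier alone fills the trench
    return (cu_width * cu_height) / (width_nm * height_nm)

if __name__ == "__main__":
    barrier = 2.0  # nm, assumed fixed liner thickness
    for width in (40.0, 20.0, 12.0):
        # Assume a 2:1 aspect ratio for the trench.
        frac = copper_area_fraction(width, height_nm=2.0 * width, barrier_nm=barrier)
        print(f"{width:5.1f} nm line: {frac:.0%} of cross-section is copper")
    # The copper fraction shrinks as the line narrows (roughly 88%, 76%,
    # and 61% for these assumed dimensions), which is one motivation for
    # the barrier-light cobalt fill at small line widths described above.
```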
In an embodiment, each conductive interconnect line of the first plurality of conductive interconnect lines 806 has a first width (W1), and each conductive interconnect line of the second plurality of conductive interconnect lines 816 has a second width (W2) greater than the first width (W1).

In a second example, FIG. 8B shows a cross-sectional view of an integrated circuit structure having a metallization layer of one metal line composition coupled to a metallization layer of a different metal line composition, in accordance with an embodiment of the present disclosure.

Referring to FIG. 8B, the integrated circuit structure 850 includes a first plurality of conductive interconnect lines 856 in a first interlayer dielectric (ILD) layer 854 over a substrate 852, the lines being spaced apart by the first ILD layer 854. One of the conductive interconnect lines 856A is shown with an underlying via 857. Each conductive interconnect line of the first plurality of conductive interconnect lines 856 includes a first conductive barrier material 858 along the sidewalls and bottom of a first conductive fill material 860.

A second plurality of conductive interconnect lines 866 is in a second ILD layer 864 over the first ILD layer 854, spaced apart by the second ILD layer 864. One of the conductive interconnect lines 866A is shown with an underlying via 867. Each conductive interconnect line of the second plurality of conductive interconnect lines 866 includes a second conductive barrier material 868 along the sidewalls and bottom of a second conductive fill material 870. The composition of the second conductive fill material 870 is different from the composition of the first conductive fill material 860. In an embodiment, the second conductive barrier material 868 is a single nitrogen-free tantalum (Ta) barrier layer. In an embodiment, interconnect line 866A and underlying via 867 are formed using a partial-trench, hanging-via, final-trench process flow.

In an embodiment, conductive via 867 is on, and electrically coupled to, an individual conductive interconnect line 856B of the first plurality of conductive interconnect lines 856, thereby electrically coupling an individual conductive interconnect line 866A of the second plurality of conductive interconnect lines 866 to the individual conductive interconnect line 856B of the first plurality of conductive interconnect lines 856. In an embodiment, the individual conductive interconnect lines of the first plurality of conductive interconnect lines 856 run along a first direction 898 (e.g., into and out of the page), and the individual conductive interconnect lines of the second plurality of conductive interconnect lines 866 run along a second direction 899 orthogonal to the first direction 898, as depicted. In an embodiment, conductive via 867 includes the second conductive barrier material 868 along the sidewalls and bottom of the second conductive fill material 870, as depicted.

In an embodiment, the second ILD layer 864 is on an etch stop layer 872 on the first ILD layer 854. Conductive via 867 is in the second ILD layer 864 and in an opening of the etch stop layer 872. In an embodiment, the first and second ILD layers 854 and 864 include silicon, carbon, and oxygen, and the etch stop layer 872 includes silicon and nitrogen.
In an embodiment, each conductive interconnect line of the first plurality of conductive interconnect lines 856 has a first width, and each conductive interconnect line of the second plurality of conductive interconnect lines 866 has a second width greater than the first width.

In an embodiment, the second conductive fill material 870 consists essentially of copper, and the first conductive fill material 860 consists essentially of cobalt. In one such embodiment, the composition of the first conductive barrier material 858 is different from the composition of the second conductive barrier material 868. In another such embodiment, the composition of the first conductive barrier material 858 is the same as the composition of the second conductive barrier material 868.

In an embodiment, the first conductive fill material 860 includes copper having a first concentration of dopant impurity atoms, and the second conductive fill material 870 includes copper having a second concentration of the dopant impurity atoms, the second concentration being less than the first concentration. In one such embodiment, the dopant impurity atoms are selected from the group consisting of aluminum (Al) and manganese (Mn). In an embodiment, the first conductive barrier material 858 and the second conductive barrier material 868 have the same composition. In another embodiment, the first conductive barrier material 858 and the second conductive barrier material 868 have different compositions.

FIGS. 9A-9C illustrate cross-sectional views of individual interconnect lines having various barrier liner and conductive capping structure arrangements suitable for the structures described in connection with FIGS. 8A and 8B, in accordance with embodiments of the present disclosure. In an embodiment, a via including a single nitrogen-free tantalum (Ta) barrier layer lands on the interconnects of FIGS. 9A-9C.

Referring to FIG. 9A, an interconnect line 900 in a dielectric layer 901 includes a conductive barrier material 902 and a conductive fill material 904. The conductive barrier material 902 includes an outer layer 906 remote from the conductive fill material 904 and an inner layer 908 adjacent to the conductive fill material 904. In an embodiment, the conductive fill material 904 includes cobalt, the outer layer 906 includes titanium and nitrogen, and the inner layer 908 includes tungsten, nitrogen, and carbon. In one such embodiment, the outer layer 906 has a thickness of about 2 nanometers and the inner layer 908 has a thickness of about 0.5 nanometers. In another embodiment, the conductive fill material 904 includes cobalt, the outer layer 906 includes tantalum, and the inner layer 908 includes ruthenium. In one such embodiment, the outer layer 906 also includes nitrogen.

Referring to FIG. 9B, an interconnect line 920 in a dielectric layer 921 includes a conductive barrier material 922 and a conductive fill material 924. A conductive cap layer 930 is on top of the conductive fill material 924. In one such embodiment, the conductive cap layer 930 is also on top of the conductive barrier material 922, as depicted. In another embodiment, the conductive cap layer 930 is not on top of the conductive barrier material 922. In an embodiment, the conductive cap layer 930 consists essentially of cobalt, and the conductive fill material 924 consists essentially of copper.

Referring to FIG. 9C, an interconnect line 940 in a dielectric layer 941 includes a conductive barrier material 942 and a conductive fill material 944.
The conductive barrier material 942 includes an outer layer 946 remote from the conductive fill material 944 and an inner layer 948 adjacent to the conductive fill material 944. A conductive cap layer 950 is on top of the conductive fill material 944. In one embodiment, the conductive cap layer 950 is only on top of the conductive fill material 944. In another embodiment, however, the conductive cap layer 950 is also on top of the inner layer 948 of the conductive barrier material 942, i.e., at location 952. In one such embodiment, the conductive cap layer 950 is also on top of the outer layer 946 of the conductive barrier material 942, i.e., at location 954.

In an embodiment, referring to FIGS. 9B and 9C, a method of fabricating an integrated circuit structure includes forming an interlayer dielectric (ILD) layer 921 or 941 over a substrate. A plurality of conductive interconnect lines 920 or 940 is formed in trenches in the ILD layer, spaced apart by the ILD layer, with respective ones of the plurality of conductive interconnect lines 920 or 940 in respective ones of the trenches. The plurality of conductive interconnect lines is formed by first forming a conductive barrier material 922 or 942 on the bottoms and sidewalls of the trenches, and then forming a conductive fill material 924 or 944 on the conductive barrier material 922 or 942, respectively, the conductive fill material 924 or 944 filling the trenches, with the conductive barrier material 922 or 942 along the bottom and sidewalls of the conductive fill material 924 or 944, respectively. The top of the conductive fill material 924 or 944 is then treated with a gas including oxygen and carbon. After treating the top of the conductive fill material 924 or 944 with the gas including oxygen and carbon, a conductive cap layer 930 or 950 is formed on top of the conductive fill material 924 or 944, respectively.

In one embodiment, treating the top of the conductive fill material 924 or 944 with a gas including oxygen and carbon includes treating the top of the conductive fill material 924 or 944 with carbon monoxide (CO). In one embodiment, the conductive fill material 924 or 944 includes copper, and forming the conductive cap layer 930 or 950 on top of the conductive fill material 924 or 944 includes forming a layer including cobalt using chemical vapor deposition (CVD). In one embodiment, the conductive cap layer 930 or 950 is formed on top of the conductive fill material 924 or 944, but not on top of the conductive barrier material 922 or 942.

In one embodiment, forming the conductive barrier material 922 or 942 includes forming a first conductive layer on the bottoms and sidewalls of the trenches, the first conductive layer including tantalum. A first portion of the first conductive layer is formed using atomic layer deposition (ALD), and a second portion of the first conductive layer is then formed using physical vapor deposition (PVD). In one such embodiment, forming the conductive barrier material further includes forming a second conductive layer on the first conductive layer on the bottoms and sidewalls of the trenches, the second conductive layer including ruthenium, and the conductive fill material including copper.
In one embodiment, the first conductive layer further includes nitrogen.

FIG. 10 illustrates a cross-sectional view of an integrated circuit structure having four metallization layers of one metal line composition and pitch above two metallization layers of a different metal line composition and smaller pitch, in accordance with an embodiment of the present disclosure.

Referring to FIG. 10, an integrated circuit structure 1000 includes a first plurality of conductive interconnect lines 1004 in a first interlayer dielectric (ILD) layer 1002 over a substrate 1001, the lines being spaced apart by the first ILD layer 1002. Each conductive interconnect line of the first plurality of conductive interconnect lines 1004 includes a first conductive barrier material 1006 along the sidewalls and bottom of a first conductive fill material 1008. Each conductive interconnect line of the first plurality of conductive interconnect lines 1004 runs along a first direction 1098 (e.g., into and out of the page).

A second plurality of conductive interconnect lines 1014 is in a second ILD layer 1012 over the first ILD layer 1002, spaced apart by the second ILD layer 1012. Each conductive interconnect line of the second plurality of conductive interconnect lines 1014 includes the first conductive barrier material 1006 along the sidewalls and bottom of the first conductive fill material 1008. Each conductive interconnect line of the second plurality of conductive interconnect lines 1014 runs along a second direction 1099 orthogonal to the first direction 1098.

A third plurality of conductive interconnect lines 1024 is in a third ILD layer 1022 over the second ILD layer 1012, spaced apart by the third ILD layer 1022. Each conductive interconnect line of the third plurality of conductive interconnect lines 1024 includes a second conductive barrier material 1026 along the sidewalls and bottom of a second conductive fill material 1028. The composition of the second conductive fill material 1028 is different from the composition of the first conductive fill material 1008. Each conductive interconnect line of the third plurality of conductive interconnect lines 1024 runs along the first direction 1098. In an embodiment, the second conductive barrier material 1026 is a single nitrogen-free tantalum (Ta) barrier layer.

A fourth plurality of conductive interconnect lines 1034 is in a fourth ILD layer 1032 over the third ILD layer 1022, spaced apart by the fourth ILD layer 1032. Each conductive interconnect line of the fourth plurality of conductive interconnect lines 1034 includes the second conductive barrier material 1026 along the sidewalls and bottom of the second conductive fill material 1028. Each conductive interconnect line of the fourth plurality of conductive interconnect lines 1034 runs along the second direction 1099.

A fifth plurality of conductive interconnect lines 1044 is in a fifth ILD layer 1042 over the fourth ILD layer 1032, spaced apart by the fifth ILD layer 1042. Each conductive interconnect line of the fifth plurality of conductive interconnect lines 1044 includes the second conductive barrier material 1026 along the sidewalls and bottom of the second conductive fill material 1028. Each conductive interconnect line of the fifth plurality of conductive interconnect lines 1044 runs along the first direction 1098.

A sixth plurality of conductive interconnect lines 1054 is in a sixth ILD layer 1052 over the fifth ILD layer, spaced apart by the sixth ILD layer 1052. Each conductive interconnect line of the sixth plurality of conductive interconnect lines 1054 includes the second conductive barrier material 1026 along the sidewalls and bottom of the second conductive fill material 1028.
Each conductive interconnect line of the sixth plurality of conductive interconnect lines 1054 runs along the second direction 1099.

In an embodiment, the second conductive fill material 1028 consists essentially of copper, and the first conductive fill material 1008 consists essentially of cobalt. In an embodiment, the first conductive fill material 1008 includes copper having a first concentration of dopant impurity atoms, and the second conductive fill material 1028 includes copper having a second concentration of the dopant impurity atoms, the second concentration being less than the first concentration.

In an embodiment, the composition of the first conductive barrier material 1006 is different from the composition of the second conductive barrier material 1026. In another embodiment, the first conductive barrier material 1006 and the second conductive barrier material 1026 have the same composition.

In an embodiment, a first conductive via 1019 is on, and electrically coupled to, an individual conductive interconnect line 1004A of the first plurality of conductive interconnect lines 1004. An individual conductive interconnect line 1014A of the second plurality of conductive interconnect lines 1014 is on, and electrically coupled to, the first conductive via 1019.

A second conductive via 1029 is on, and electrically coupled to, an individual conductive interconnect line 1014B of the second plurality of conductive interconnect lines 1014. An individual conductive interconnect line 1024A of the third plurality of conductive interconnect lines 1024 is on, and electrically coupled to, the second conductive via 1029.

A third conductive via 1039 is on, and electrically coupled to, an individual conductive interconnect line 1024B of the third plurality of conductive interconnect lines 1024. An individual conductive interconnect line 1034A of the fourth plurality of conductive interconnect lines 1034 is on, and electrically coupled to, the third conductive via 1039.

A fourth conductive via 1049 is on, and electrically coupled to, an individual conductive interconnect line 1034B of the fourth plurality of conductive interconnect lines 1034. An individual conductive interconnect line 1044A of the fifth plurality of conductive interconnect lines 1044 is on, and electrically coupled to, the fourth conductive via 1049.

A fifth conductive via 1059 is on, and electrically coupled to, an individual conductive interconnect line 1044B of the fifth plurality of conductive interconnect lines 1044. An individual conductive interconnect line 1054A of the sixth plurality of conductive interconnect lines 1054 is on, and electrically coupled to, the fifth conductive via 1059.

In one embodiment, the first conductive via 1019 includes the first conductive barrier material 1006 along the sidewalls and bottom of the first conductive fill material 1008. The second conductive via 1029, the third conductive via 1039, the fourth conductive via 1049, and the fifth conductive via 1059 include the second conductive barrier material 1026 along the sidewalls and bottom of the second conductive fill material 1028.

In an embodiment, the first ILD layer 1002, the second ILD layer 1012, the third ILD layer 1022, the fourth ILD layer 1032, the fifth ILD layer 1042, and the sixth ILD layer 1052 are separated from one another by corresponding etch stop layers 1090 between adjacent ILD layers.
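The layer-by-layer arrangement just described can be summarized compactly as a simple data structure; the following is a minimal illustrative sketch in which the layer names, placeholder widths, and sanity checks are assumptions for illustration, not disclosed values:

```python
# Illustrative model of the FIG. 10 metallization stack: two lower
# layers (cobalt fill, narrower lines) under four upper layers
# (copper fill, wider lines), with the line direction alternating
# 90 degrees per layer. Widths are placeholders, not disclosed values.

from dataclasses import dataclass

@dataclass
class MetalLayer:
    name: str
    direction: str   # "x" or "y", alternating per layer
    fill: str        # "cobalt" for lower layers, "copper" for upper
    width_nm: float  # W1 for lower layers, W2 > W1 for upper

W1, W2 = 10.0, 20.0  # placeholder widths with W2 > W1
stack = [
    MetalLayer("M1", "x", "cobalt", W1),
    MetalLayer("M2", "y", "cobalt", W1),
    MetalLayer("M3", "x", "copper", W2),
    MetalLayer("M4", "y", "copper", W2),
    MetalLayer("M5", "x", "copper", W2),
    MetalLayer("M6", "y", "copper", W2),
]

# Sanity checks mirroring the description: orthogonal neighboring
# layers, and a single cobalt-to-copper transition between M2 and M3.
assert all(a.direction != b.direction for a, b in zip(stack, stack[1:]))
assert [m.fill for m in stack] == ["cobalt"] * 2 + ["copper"] * 4
assert all(m.width_nm < stack[-1].width_nm for m in stack[:2])
```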
In an embodiment, the first ILD layer 1002, the second ILD layer 1012, the third ILD layer 1022, the fourth ILD layer 1032, the fifth ILD layer 1042, and the sixth ILD layer 1052 include silicon, carbon, and oxygen.

In an embodiment, each conductive interconnect line of the first plurality of conductive interconnect lines 1004 and the second plurality of conductive interconnect lines 1014 has a first width (W1). Each conductive interconnect line of the third plurality of conductive interconnect lines 1024, the fourth plurality of conductive interconnect lines 1034, the fifth plurality of conductive interconnect lines 1044, and the sixth plurality of conductive interconnect lines 1054 has a second width (W2) greater than the first width (W1).

In another aspect, techniques for patterning metal line ends are described. To provide context, in advanced nodes of semiconductor fabrication, lower-level interconnects may be created through separate patterning processes for line gratings, line ends, and vias. However, the fidelity of the composite pattern may degrade when vias encroach on line ends, and vice versa. Embodiments described herein provide a line-end process, also known as a plug process, that eliminates the associated proximity rules. Embodiments may allow vias to be placed at line ends and allow large vias to extend across line ends.

To provide further context, FIG. 11A shows a plan view of a metallization layer and a corresponding cross-sectional view taken along the a-a' axis of the plan view, in accordance with an embodiment of the present disclosure. FIG. 11B shows a cross-sectional view of a line end or plug, in accordance with an embodiment of the present disclosure. FIG. 11C shows another cross-sectional view of a line end or plug, in accordance with an embodiment of the present disclosure.

Referring to FIG. 11A, a metallization layer 1100 includes metal lines 1102 formed in a dielectric layer 1104. The metal lines 1102 may be coupled to underlying vias 1103. The dielectric layer 1104 may include line end or plug regions 1105. Referring to FIG. 11B, the line end or plug regions 1105 of the dielectric layer 1104 may be fabricated by patterning a hard mask layer 1110 on the dielectric layer 1104 and then etching the exposed portions of the dielectric layer 1104. The exposed portions of the dielectric layer 1104 may be etched to a depth suitable for forming line trenches 1106, or further etched to a depth suitable for forming via trenches 1108. Referring to FIG. 11C, two vias adjacent to opposite sidewalls of a line end or plug 1105 may be fabricated in a single large exposed area 1116 to ultimately form line trenches 1112 and via trenches 1114.

However, referring again to FIGS. 11A-11C, fidelity issues and/or hardmask erosion issues may result in imperfect patterning regimes. In contrast, one or more embodiments described herein include implementations of a process flow that constructs the line-end dielectrics (plugs) after the trench and via patterning processes.

In one aspect, then, one or more embodiments described herein relate to creating non-conductive spaces or interruptions between metal lines (referred to as "line ends," "plugs," or "cuts") and, in some embodiments, associated conductive vias. By definition, a conductive via lands on a previous-layer metal pattern.
In this regard, the embodiments described herein enable a more robust interconnect fabrication scheme, since the scheme relies less on the alignment capability of the lithography equipment. Such an interconnect fabrication scheme can be used to relax alignment/exposure constraints, can be used to improve electrical contact (e.g., by reducing via resistance), and can be used to reduce the total number of process operations and the processing time otherwise required to pattern such features.

FIGS. 12A-12F illustrate plan views and corresponding cross-sectional views representing various operations in a plug finishing scheme, in accordance with embodiments of the present disclosure.

Referring to FIG. 12A, a method of fabricating an integrated circuit structure includes forming line trenches 1206 in an upper portion 1204 of an interlayer dielectric (ILD) material layer 1202 formed over an underlying metallization layer 1200. Via trenches 1208 are formed in a lower portion 1210 of the ILD material layer 1202. The via trenches 1208 expose metal lines 1212 of the underlying metallization layer 1200.

Referring to FIG. 12B, a sacrificial material 1214 is formed over the ILD material layer 1202 and in the line trenches 1206 and via trenches 1208. The sacrificial material 1214 may have a hard mask 1215 formed thereon, as shown in FIG. 12B. In one embodiment, the sacrificial material 1214 includes carbon.

Referring to FIG. 12C, the sacrificial material 1214 is patterned to break the continuity of the sacrificial material 1214 in the line trenches 1206, e.g., to provide openings 1216 in the sacrificial material 1214.

Referring to FIG. 12D, the openings 1216 in the sacrificial material 1214 are filled with a dielectric material to form dielectric plugs 1218. In an embodiment, after filling the openings 1216 in the sacrificial material 1214 with the dielectric material, the hard mask 1215 is removed to provide the dielectric plugs 1218 having an upper surface 1220 above the upper surface 1222 of the ILD material 1202, as shown in FIG. 12D. The sacrificial material 1214 is then removed to leave the dielectric plugs 1218.

In an embodiment, filling the openings 1216 in the sacrificial material 1214 with a dielectric material includes filling with a metal oxide material. In one such embodiment, the metal oxide material is aluminum oxide. In an embodiment, filling the openings 1216 in the sacrificial material 1214 with a dielectric material includes filling using atomic layer deposition (ALD).

Referring to FIG. 12E, the line trenches 1206 and via trenches 1208 are filled with a conductive material 1224. In an embodiment, the conductive material 1224 is formed over the dielectric plugs 1218 and the ILD layer 1202, as depicted.

Referring to FIG. 12F, the conductive material 1224 and the dielectric plugs 1218 are planarized to provide a planarized dielectric plug 1218' that breaks the continuity of the conductive material 1224 in the line trench 1206.

Referring again to FIG. 12F, in accordance with an embodiment of the present disclosure, an integrated circuit structure 1250 includes an interlayer dielectric (ILD) layer 1202 over a substrate. A conductive interconnect line 1224 is in a trench 1206 in the ILD layer 1202. The conductive interconnect line 1224 has a first portion 1224A laterally adjacent to a second portion 1224B. The dielectric plug 1218' is between, and laterally adjacent to, the first portion 1224A and the second portion 1224B of the conductive interconnect line 1224.
Although not depicted, in an embodiment, the conductive interconnect line 1224 includes a conductive barrier liner and a conductive fill material, exemplary materials for which are described above. In one such embodiment, the conductive fill material includes cobalt.

In an embodiment, the dielectric plug 1218' includes a metal oxide material. In one such embodiment, the metal oxide material is aluminum oxide. In an embodiment, the dielectric plug 1218' is in direct contact with the first portion 1224A and the second portion 1224B of the conductive interconnect line 1224.

In an embodiment, the dielectric plug 1218' has a bottom 1218A substantially coplanar with a bottom 1224C of the conductive interconnect line 1224. In an embodiment, a first conductive via 1226 is in a trench 1208 in the ILD layer 1202. In one such embodiment, the first conductive via 1226 is below the bottom 1224C of the interconnect line 1224, and the first conductive via 1226 is electrically coupled to the first portion 1224A of the conductive interconnect line 1224.

In an embodiment, a second conductive via 1228 is in a third trench 1230 in the ILD layer 1202. The second conductive via 1228 is below the bottom 1224C of the interconnect line 1224, and the second conductive via 1228 is electrically coupled to the second portion 1224B of the conductive interconnect line 1224.

The dielectric plugs may be formed using a fill process such as a chemical vapor deposition process. Artifacts of the fill process may remain in the fabricated dielectric plugs. As an example, FIG. 13A shows a cross-sectional view of a dielectric line plug having a seam therein, in accordance with an embodiment of the present disclosure.

Referring to FIG. 13A, a dielectric plug 1318 has a substantially vertical seam 1300 that is substantially equally spaced from the first portion 1224A of the conductive interconnect line 1224 and the second portion 1224B of the conductive interconnect line 1224.

It should be understood that dielectric plugs differing in composition from the ILD material in which they are housed may be included only in selected metallization layers, such as lower metallization layers. As an example, FIG. 13B shows a cross-sectional view of a stack of metallization layers including dielectric line plugs at lower metal line locations, in accordance with an embodiment of the present disclosure.

Referring to FIG. 13B, an integrated circuit structure 1350 includes a first plurality of conductive interconnect lines 1356 in a first interlayer dielectric (ILD) layer 1354 over a substrate 1352, the lines being spaced apart by the first ILD layer 1354. Each conductive interconnect line of the first plurality of conductive interconnect lines 1356 has its continuity broken by one or more dielectric plugs 1358. In an embodiment, the one or more dielectric plugs 1358 comprise a material different from that of the first ILD layer 1354. A second plurality of conductive interconnect lines 1366 is in a second ILD layer 1364 over the first ILD layer 1354, spaced apart by the second ILD layer 1364. In an embodiment, each conductive interconnect line of the second plurality of conductive interconnect lines 1366 has its continuity broken by one or more portions 1368 of the second ILD layer 1364. It should be understood that other metallization layers may be included in the integrated circuit structure 1350, as depicted.

In one embodiment, the one or more dielectric plugs 1358 include a metal oxide material. In one such embodiment, the metal oxide material is aluminum oxide.
In one embodiment, the first ILD layer 1354 and the second ILD layer 1364 (and hence the one or more portions 1368 of the second ILD layer 1364) comprise a carbon-doped silicon oxide material.

In one embodiment, each conductive interconnect line of the first plurality of conductive interconnect lines 1356 includes a first conductive barrier liner 1356A and a first conductive fill material 1356B. Each conductive interconnect line of the second plurality of conductive interconnect lines 1366 includes a second conductive barrier liner 1366A and a second conductive fill material 1366B. In one such embodiment, the composition of the first conductive fill material 1356B is different from the composition of the second conductive fill material 1366B. In certain such embodiments, the first conductive fill material 1356B includes cobalt and the second conductive fill material 1366B includes copper.

In one embodiment, the first plurality of conductive interconnect lines 1356 has a first pitch (P1, as shown in the analogous layer 1370). The second plurality of conductive interconnect lines 1366 has a second pitch (P2, as shown in the analogous layer 1380). The second pitch (P2) is greater than the first pitch (P1). In one embodiment, each conductive interconnect line of the first plurality of conductive interconnect lines 1356 has a first width (W1, as shown in the analogous layer 1370). Each conductive interconnect line of the second plurality of conductive interconnect lines 1366 has a second width (W2, as shown in the analogous layer 1380). The second width (W2) is greater than the first width (W1).

It should be understood that the layers and materials described above in connection with back-end-of-line (BEOL) structures and processing may be formed on or above an underlying semiconductor substrate or structure, such as the underlying device layers of an integrated circuit. In an embodiment, the underlying semiconductor substrate represents the general workpiece object used to fabricate integrated circuits. The semiconductor substrate often includes a wafer or other piece of silicon or another semiconductor material. Suitable semiconductor substrates include, but are not limited to, single crystal silicon, polycrystalline silicon, and silicon-on-insulator (SOI), as well as similar substrates formed of other semiconductor materials, such as substrates including germanium, carbon, or group III-V materials. The semiconductor substrate, depending on the stage of manufacture, often includes transistors, integrated circuits, and the like. The substrate may also include semiconductor materials, metals, dielectrics, dopants, and other materials commonly found in semiconductor substrates. Furthermore, the depicted structures may be fabricated on underlying lower-level interconnect layers.

Although the foregoing methods of fabricating a metallization layer, or portions of a metallization layer, of a BEOL metallization structure are described in detail with respect to select operations, it is to be understood that additional or intermediate operations for fabrication may include standard microelectronic fabrication processes such as photolithography, etching, thin film deposition, planarization (e.g., chemical mechanical polishing (CMP)), diffusion, metrology, the use of sacrificial layers, the use of etch stop layers, the use of planarization stop layers, or any other action associated with microelectronic component fabrication.
Furthermore, it is to be understood that the process operations described with respect to the foregoing process flows may be practiced in alternate sequences, that not every operation need be performed, or that additional process operations may be performed, or both.

In an embodiment, as used throughout the present specification, an interlayer dielectric (ILD) material is composed of or includes a layer of a dielectric or insulating material. Examples of suitable dielectric materials include, but are not limited to, oxides of silicon (e.g., silicon dioxide (SiO2)), doped oxides of silicon, fluorinated oxides of silicon, carbon-doped oxides of silicon, various low-k dielectric materials known in the art, and combinations thereof. The interlayer dielectric material may be formed by techniques such as chemical vapor deposition (CVD), physical vapor deposition (PVD), or by other deposition methods.

In an embodiment, as is also used throughout the present specification, metal line or interconnect line material (and via material) is composed of one or more metals or other conductive structures. A common example is the use of copper lines and structures that may or may not include barrier layers between the copper and the surrounding ILD material. As used herein, the term metal includes alloys, stacks, and other combinations of multiple metals. For example, metal interconnect lines may include barrier layers (e.g., layers including one or more of Ta, TaN, Ti, or TiN), stacks of different metals or alloys, and the like. Thus, an interconnect may be a single layer of material, or may be formed from several layers, including conductive liner layers and fill layers. Any suitable deposition process, such as electroplating, chemical vapor deposition, or physical vapor deposition, may be used to form interconnect lines. In an embodiment, the interconnect is composed of a conductive material such as, but not limited to, Cu, Al, Ti, Zr, Hf, V, Ru, Co, Ni, Pd, Pt, W, Ag, Au, or alloys thereof. Interconnects are also sometimes referred to in the art as traces, wires, lines, metal, or simply interconnects.

In an embodiment, as is also used throughout the present specification, hardmask materials are composed of dielectric materials different from the interlayer dielectric material. In one embodiment, different hardmask materials may be used in different regions so as to provide different growth or etch selectivity to each other and to the underlying dielectric and metal layers. In some embodiments, a hardmask layer includes a layer of a nitride of silicon (e.g., silicon nitride) or a layer of an oxide of silicon, or both, or a combination thereof. Other suitable materials may include carbon-based materials. In another embodiment, a hardmask material includes a metal species. For example, a hardmask or other overlying material may include a layer of a nitride of titanium or another metal (e.g., titanium nitride). Potentially lesser amounts of other materials, such as oxygen, may be included in one or more of these layers. Alternatively, other hardmask layers known in the art may be used depending upon the particular implementation. The hardmask layers may be formed by CVD, PVD, or by other deposition methods.

In an embodiment, as is also used throughout the present specification, lithographic operations are performed using 193 nm immersion lithography (i193), extreme ultraviolet (EUV) lithography, or electron beam direct write (EBDW) lithography, or the like. A positive tone or a negative tone resist may be used.
In one embodiment, the lithographic mask is a trilayer mask composed of a topographic masking portion, an anti-reflective coating (ARC) layer, and a photoresist layer. In a particular such embodiment, the topographic masking portion is a carbon hardmask (CHM) layer and the anti-reflective coating layer is a silicon ARC layer.

Embodiments disclosed herein may be used to manufacture a wide variety of different types of integrated circuits or microelectronic devices. Examples of such integrated circuits include, but are not limited to, processors, chipset components, graphics processors, digital signal processors, microcontrollers, and the like. In other embodiments, semiconductor memory may be manufactured. Moreover, the integrated circuits or other microelectronic devices may be used in a wide variety of electronic devices known in the art. For example, they may be used in computer systems (e.g., desktops, laptops, servers), cellular telephones, personal electronic devices, and the like. The integrated circuits may be coupled with a bus and other components in the systems. For example, a processor may be coupled by one or more buses to a memory, a chipset, and so on. Each of the processor, memory, and chipset may potentially be manufactured using the approaches disclosed herein.

FIG. 14 illustrates a computing device 1400 in accordance with one embodiment of the present disclosure. The computing device 1400 houses a board 1402. The board 1402 may include a number of components, including but not limited to a processor 1404 and at least one communication chip 1406. The processor 1404 is physically and electrically coupled to the board 1402. In some implementations, the at least one communication chip 1406 is also physically and electrically coupled to the board 1402. In further implementations, the communication chip 1406 is part of the processor 1404.

Depending on its applications, the computing device 1400 may include other components that may or may not be physically and electrically coupled to the board 1402. These other components include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as a hard disk drive, compact disc (CD), digital versatile disc (DVD), and so forth).

The communication chip 1406 enables wireless communications for the transfer of data to and from the computing device 1400. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 1406 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 1400 may include a plurality of communication chips 1406.
For instance, a first communication chip 1406 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip 1406 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.

The processor 1404 of the computing device 1400 includes an integrated circuit die packaged within the processor 1404. In some implementations of embodiments of the present disclosure, the integrated circuit die of the processor includes one or more structures, such as integrated circuit structures built in accordance with embodiments of the present disclosure. The term "processor" may refer to any device or portion of a device that processes electronic data from registers or memory, or both, to transform that electronic data into other electronic data that may be stored in registers or memory, or both.

The communication chip 1406 also includes an integrated circuit die packaged within the communication chip 1406. In accordance with another embodiment of the present disclosure, the integrated circuit die of the communication chip is built in accordance with an embodiment of the present disclosure.

In further implementations, another component housed within the computing device 1400 may contain an integrated circuit die built in accordance with implementations of embodiments of the present disclosure.

In various implementations, the computing device 1400 may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further implementations, the computing device 1400 may be any other electronic device that processes data.

FIG. 15 illustrates an interposer 1500 that includes one or more embodiments of the present disclosure. The interposer 1500 is an intervening substrate used to bridge a first substrate 1502 to a second substrate 1504. The first substrate 1502 may be, for instance, an integrated circuit die. The second substrate 1504 may be, for instance, a memory module, a computer motherboard, or another integrated circuit die. Generally, the purpose of the interposer 1500 is to spread a connection to a wider pitch or to reroute a connection to a different connection. For example, the interposer 1500 may couple an integrated circuit die to a ball grid array (BGA) 1506 that can subsequently be coupled to the second substrate 1504. In some embodiments, the first and second substrates 1502/1504 are attached to opposing sides of the interposer 1500. In other embodiments, the first and second substrates 1502/1504 are attached to the same side of the interposer 1500. And in further embodiments, three or more substrates are interconnected by way of the interposer 1500.

The interposer 1500 may be formed of an epoxy resin, a fiberglass-reinforced epoxy resin, a ceramic material, or a polymer material such as polyimide. In further implementations, the interposer 1500 may be formed of alternate rigid or flexible materials that may include the same materials described above for use in a semiconductor substrate, such as silicon, germanium, and other group III-V and group IV materials.

The interposer 1500 may include metal interconnects 1508 and vias 1510, including but not limited to through-silicon vias (TSVs) 1512. The interposer 1500 may further include embedded devices 1514, including both passive and active devices.
Such devices include, but are not limited to, capacitors, decoupling capacitors, resistors, inductors, fuses, diodes, transformers, sensors, and electrostatic discharge (ESD) devices. More complex devices such as radio frequency (RF) devices, power amplifiers, power management devices, antennas, arrays, sensors, and MEMS devices may also be formed on the interposer 1500. In accordance with embodiments of the present disclosure, apparatuses or processes disclosed herein may be used in the fabrication of the interposer 1500 or in the fabrication of components included in the interposer 1500.

FIG. 16 is an isometric view of a mobile computing platform 1600 employing an integrated circuit (IC) fabricated according to one or more processes described herein or including one or more features described herein, in accordance with an embodiment of the present disclosure.

The mobile computing platform 1600 may be any portable device configured for each of electronic data display, electronic data processing, and wireless electronic data transmission. For example, the mobile computing platform 1600 may be any of a tablet, a smartphone, a laptop computer, etc., and includes a display screen 1605 which in the exemplary embodiment is a touchscreen (capacitive, inductive, resistive, etc.), a chip-level (SoC) or package-level integrated system 1610, and a battery 1613. As illustrated, the greater the level of integration in the system 1610 enabled by higher transistor packing density, the greater the portion of the mobile computing platform 1600 that may be occupied by the battery 1613 or non-volatile storage (e.g., a solid state drive), or the greater the transistor gate count that may be employed for improved platform functionality. Similarly, the greater the carrier mobility of each transistor in the system 1610, the greater the functionality. As such, the techniques described herein may enable performance and form factor improvements in the mobile computing platform 1600.

The integrated system 1610 is further illustrated in the expanded view 1620. In the exemplary embodiment, the packaged device 1677 includes at least one memory chip (e.g., RAM) or at least one processor chip (e.g., a multi-core microprocessor and/or graphics processor) fabricated according to one or more processes described herein or including one or more features described herein. The packaged device 1677 is further coupled to the board 1660 along with one or more of a power management integrated circuit (PMIC) 1615, an RF (wireless) integrated circuit (RFIC) 1625 including a wideband RF (wireless) transmitter and/or receiver (e.g., including a digital baseband, and an analog front-end module further including a power amplifier on a transmit path and a low noise amplifier on a receive path), and a controller 1611 thereof. Functionally, the PMIC 1615 performs battery power regulation, DC-to-DC conversion, and the like, and so has an input coupled to the battery 1613 and an output providing a current supply to all the other functional modules. As further illustrated, in the exemplary embodiment, the RFIC 1625 has an output coupled to an antenna to provide for implementation of any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond.
In alternative implementations, each of these board-level modules may be integrated onto separate ICs coupled to the package substrate of the packaged device 1677 or within a single IC (SoC) coupled to the package substrate of the packaged device 1677.

In another aspect, semiconductor packages are used for protecting an integrated circuit (IC) chip or die, and also to provide the die with an electrical interface to external circuitry. With the increasing demand for smaller electronic devices, semiconductor packages are designed to be even more compact and must support larger circuit density. Additionally, the demand for higher performance devices results in a need for improved semiconductor packages that enable a thin packaging profile and low overall warpage compatible with subsequent assembly processing.

In an embodiment, wire bonding to a ceramic or organic package substrate is used. In another embodiment, a C4 process is used to mount a die to a ceramic or organic package substrate. In particular, C4 solder ball connections may be implemented to provide flip chip interconnections between semiconductor devices and substrates. A flip chip or controlled collapse chip connection (C4) is a type of mounting used for semiconductor devices, such as integrated circuit (IC) chips, MEMS, or components, which utilizes solder bumps instead of wire bonds. The solder bumps are deposited on the C4 pads, located on the top side of the substrate package. In order to mount the semiconductor device to the substrate, it is flipped over so that its active side faces down toward the mounting area. The solder bumps are used to connect the semiconductor device directly to the substrate.

FIG. 17 illustrates a cross-sectional view of a flip-chip mounted die, in accordance with an embodiment of the present disclosure.

Referring to FIG. 17, an apparatus 1700 includes a die 1702, e.g., an integrated circuit (IC) fabricated according to one or more processes described herein or including one or more features described herein, in accordance with an embodiment of the present disclosure. The die 1702 includes metallization pads 1704 thereon. A package substrate 1706, such as a ceramic or organic substrate, includes connections 1708 thereon. The die 1702 and the package substrate 1706 are electrically connected by solder balls 1710 coupled to the metallization pads 1704 and the connections 1708. An underfill material 1712 surrounds the solder balls 1710.

Processing a flip chip may be similar to conventional IC fabrication, with a few additional operations. Near the end of the manufacturing process, the attachment pads are metallized to make them more receptive to solder. This typically consists of several treatments. A small dot of solder is then deposited on each metallized pad. The chips are then cut out of the wafer as normal. To attach the flip chip into a circuit, the chip is inverted to bring the solder dots down onto connectors on the underlying electronics or circuit board. The solder is then re-melted to produce an electrical connection, typically using an ultrasonic or alternatively a reflow solder process. This also leaves a small space between the chip's circuitry and the underlying mounting.
In most cases, an electrically insulating adhesive is then "underfilled" to provide a stronger mechanical connection, provide a heat bridge, and to ensure the solder joints are not stressed due to differential heating of the chip and the rest of the system.

In other embodiments, newer packaging and die-to-die interconnect approaches, such as through-silicon vias (TSVs) and silicon interposers, are implemented to fabricate high performance multi-chip modules (MCMs) and systems-in-package (SiP) incorporating an integrated circuit (IC) fabricated according to one or more processes described herein or including one or more features described herein, in accordance with embodiments of the present disclosure.

Thus, embodiments of the present disclosure include advanced integrated circuit structure fabrication.

Although specific embodiments have been described above, these embodiments are not intended to limit the scope of the present disclosure, even where only a single embodiment is described with respect to a particular feature. Unless stated otherwise, examples of features provided in the disclosure are intended to be illustrative rather than restrictive. The above description is intended to cover such alternatives, modifications, and equivalents as would be apparent to a person skilled in the art having the benefit of the present disclosure.

The scope of the present disclosure includes any feature or combination of features disclosed herein (either explicitly or implicitly), or any generalization thereof, whether or not it mitigates any or all of the problems addressed herein. Accordingly, new claims may be formulated during prosecution of the present application (or an application claiming priority thereto) to any such combination of features. In particular, with reference to the appended claims, features from dependent claims may be combined with those of the independent claims, and features from respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the appended claims.

The following examples pertain to further embodiments.
The various features of the different embodiments may be combined in various ways, with some features included and others excluded, to suit a variety of different applications.

Exemplary Embodiment 1: An integrated circuit structure comprising: a first conductive interconnect in a first interlayer dielectric (ILD) layer over a substrate; a second conductive interconnect in a second ILD layer over the first ILD layer; and a conductive via coupling the first conductive interconnect and the second conductive interconnect, the conductive via having a single nitrogen-free tantalum (Ta) barrier layer.

Exemplary Embodiment 2: The integrated circuit structure of Exemplary Embodiment 1, wherein the single nitrogen-free tantalum (Ta) barrier layer has a thickness in the range of 1-5 nanometers.

Exemplary Embodiment 3: The integrated circuit structure of Exemplary Embodiment 1 or 2, wherein the single nitrogen-free tantalum (Ta) barrier layer extends from the conductive via to the second conductive interconnect.

Exemplary Embodiment 4: The integrated circuit structure of Exemplary Embodiment 3, further comprising a conductive fill material within the single nitrogen-free tantalum (Ta) barrier layer of the conductive via and the second conductive interconnect, the conductive fill material including copper directly on the single nitrogen-free tantalum (Ta) barrier layer.

Exemplary Embodiment 5: The integrated circuit structure of Exemplary Embodiments 1, 2, 3, or 4, wherein the single nitrogen-free tantalum (Ta) barrier layer is directly on a conductive fill material of the first conductive interconnect, the conductive fill material including copper or cobalt.

Exemplary Embodiment 6: A method of fabricating an integrated circuit structure comprising: forming a portion of a trench in an interlayer dielectric (ILD) layer, the ILD layer on an etch stop layer; etching a via opening that lands on the etch stop layer; and performing a through etch through the etch stop layer to form the trench and a via opening in the ILD layer and the etch stop layer.

Exemplary Embodiment 7: The method of Exemplary Embodiment 6, wherein performing the through etch extends the portion of the trench deeper into the ILD layer.

Exemplary Embodiment 8: The method of Exemplary Embodiment 6 or 7, further comprising forming a single nitrogen-free tantalum (Ta) barrier layer along surfaces of the trench and via opening.

Exemplary Embodiment 9: The method of Exemplary Embodiment 8, further comprising forming a conductive fill material on the single nitrogen-free tantalum (Ta) barrier layer, the conductive fill material including copper directly on the single nitrogen-free tantalum (Ta) barrier layer.

Exemplary Embodiment 10: The method of Exemplary Embodiment 9, further comprising reducing a thickness of the single nitrogen-free tantalum (Ta) barrier layer prior to forming the conductive fill material.

Exemplary Embodiment 11: A computing device includes a board and a component coupled to the board.
The component includes an integrated circuit structure including a first conductive interconnect in a first interlayer dielectric (ILD) layer over a substrate, a second conductive interconnect in a second ILD layer over the first ILD layer, and a conductive via coupling the first and second conductive interconnects, the conductive via having a single nitrogen-free tantalum (Ta) barrier layer.

Exemplary Embodiment 12: The computing device of Exemplary Embodiment 11, further comprising a memory coupled to the board.

Exemplary Embodiment 13: The computing device of Exemplary Embodiment 11 or 12, further comprising a communication chip coupled to the board.

Exemplary Embodiment 14: The computing device of Exemplary Embodiments 11, 12, or 13, further comprising a camera coupled to the board.

Exemplary Embodiment 15: The computing device of Exemplary Embodiments 11, 12, 13, or 14, wherein the component is a packaged integrated circuit die.

Exemplary Embodiment 16: A computing device includes a board and a component coupled to the board. The component includes an integrated circuit structure fabricated according to a method comprising the steps of: forming a portion of a trench in an interlayer dielectric (ILD) layer, the ILD layer on an etch stop layer; etching a via opening that lands on the etch stop layer; and performing a through etch through the etch stop layer to form the trench and a via opening in the ILD layer and the etch stop layer.

Exemplary Embodiment 17: The computing device of Exemplary Embodiment 16, further comprising a memory coupled to the board.

Exemplary Embodiment 18: The computing device of Exemplary Embodiment 16 or 17, further comprising a communication chip coupled to the board.

Exemplary Embodiment 19: The computing device of Exemplary Embodiments 16, 17, or 18, further comprising a camera coupled to the board.

Exemplary Embodiment 20: The computing device of Exemplary Embodiments 16, 17, 18, or 19, wherein the component is a packaged integrated circuit die. |
A method used during the formation of a semiconductor device comprises the steps of forming a polycrystalline silicon layer over a semiconductor substrate assembly and forming a silicon nitride layer over the polycrystalline silicon layer. A silicon dioxide layer is formed over the silicon nitride layer and the silicon dioxide and silicon nitride layers are patterned using a patterned mask having a width, thereby forming sidewalls in the two layers. The nitride and oxide layers are subjected to an oxygen plasma which treats the sidewalls and leaves a portion of the silicon nitride layer between the sidewalls untreated. The silicon dioxide and the untreated portion of the silicon nitride layer are removed thereby resulting in pillars of treated silicon nitride. Finally, the polycrystalline silicon is etched using the pillars as a mask. The patterned polycrystalline silicon layer thereby comprises features having widths narrower than the width of the original mask. |
What is claimed is:

1. An in-process semiconductor device comprising: a semiconductor wafer; a blanket polycrystalline silicon layer overlying said wafer; a patterned silicon nitride layer having first and second sidewalls and a middle portion interposed between said first and second sidewalls, wherein said sidewalls are treated with an oxygen plasma and said middle portion remains untreated; and a dielectric layer overlying and coextensive with said silicon nitride layer, wherein said dielectric layer overlying said silicon nitride layer is treated with an oxygen plasma.

2. The in-process semiconductor device of claim 1 wherein said polycrystalline silicon layer is a transistor gate layer.

3. The in-process semiconductor device of claim 1 wherein said polycrystalline silicon layer is a transistor floating gate layer.

4. The in-process semiconductor device of claim 1 wherein said dielectric layer overlying said silicon nitride layer is a treated silicon dioxide layer.

5. An in-process semiconductor device comprising: a conductive layer; a patterned masking layer overlying said conductive layer comprising first and second sidewalls treated with an oxygen plasma and further comprising an untreated middle portion interposed between said first and second sidewalls; and an oxygen plasma treated protective layer overlying and coextensive with said masking layer.

6. The in-process semiconductor device of claim 5 wherein said conductive layer comprises a portion of a transistor gate layer.

7. The in-process semiconductor device of claim 5 wherein said conductive layer comprises a portion of a transistor floating gate layer.

8. The in-process semiconductor device of claim 5 wherein said protective layer comprises a silicon dioxide layer. |
This is a division of U.S. Ser. No. 09/370,064, filed Aug. 6, 1999, and issued Sep. 26, 2000 as U.S. Pat. No. 6,124,167.

FIELD OF THE INVENTION

This invention relates to the field of semiconductor processing, and more particularly to a method for forming an etch mask and exemplary uses therefor.

BACKGROUND OF THE INVENTION

During the manufacture of a semiconductor device, a large number of transistors and other structures are formed over a semiconductor substrate assembly such as a semiconductor wafer. As manufacturing techniques improve, transistor density increases and feature size decreases, and one manufacturing step which can create difficulties is photolithography, as there is a limit to the minimum feature size which can be formed with conventional equipment.

Various attempts have been made to overcome the limitations of conventional photolithography. For example, U.S. Pat. No. 5,750,441 by Figura et al., assigned to Micron Technology, Inc. and incorporated herein by reference in its entirety, describes various patterning techniques which have been developed in an attempt to decrease the allowable feature size using conventional lithographic equipment.

Using an oxygen plasma treatment to alter the etch characteristics of a material has been demonstrated. Hicks, et al. (S. E. Hicks, S. K. Murad, I. Sturrock, and C. D. W. Wilkinson, "Improving the Resistance of PECVD Silicon Nitride to Dry Etching Using an Oxygen Plasma," Microelectronic Engineering, 35, pp. 41-44, 1997) teaches the treatment of a silicon nitride layer to increase its resistance to an etch. After the silicon nitride layer is formed using plasma enhanced chemical vapor deposition, it is subjected to a treatment in a reactive ion etch chamber comprising a radio frequency power of 50 watts, an oxygen flow rate of 25 standard cubic centimeters per minute, and a gas pressure of 100 millitorr, thereby resulting in a direct current bias of 110 volts. Hicks teaches a layer which is homogeneously densified. The etch rate of an untreated silicon nitride layer using SF6 reactive ion etching was demonstrated to be up to 1,000 times greater than that of a treated silicon nitride layer subjected to the same etch conditions.

A patterning technique which can form device features smaller than those allowable by conventional photolithography equipment would be desirable.

SUMMARY OF THE INVENTION

The present invention provides a new method which decreases the minimum device feature size that can be formed with conventional photolithography equipment. In accordance with one embodiment of the invention, a semiconductor substrate assembly is provided and a layer of polycrystalline silicon (poly) is formed thereover. A silicon nitride layer is formed over the poly layer, and a silicon dioxide layer is formed over the silicon nitride. The nitride and oxide layers are patterned with an etch mask having a width, thereby resulting in cross-sectional sidewalls in the silicon nitride. The sidewalls of the silicon nitride are treated with an oxygen plasma which alters the etch characteristics of the silicon nitride sidewalls.

The silicon dioxide is removed, as is the untreated portion of the silicon nitride layer, thereby resulting in pillars of treated silicon nitride having a width less than the width of the mask.
The poly layer is etched using the silicon nitride pillars as an etch mask, thereby resulting in poly features, each of which has a width less than the width of the original etch mask.

Objects and advantages will become apparent to those skilled in the art from the following detailed description read in conjunction with the appended claims and the drawings attached hereto.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a cross section depicting a semiconductor substrate assembly having a polycrystalline silicon (poly) layer, a nitride layer, an oxide layer, and a resist layer formed thereover;

FIG. 2 is a cross section depicting the FIG. 1 structure after patterning the nitride and oxide layers;

FIG. 3 depicts the FIG. 2 structure after removal of the resist layer and the treatment of the nitride with an oxygen plasma;

FIG. 4 depicts the FIG. 3 structure after removal of the oxide and the untreated nitride which results in pillars of treated nitride;

FIG. 5 depicts the FIG. 4 structure after etching of the poly layer using the pillars as a mask;

FIG. 6 depicts the FIG. 5 structure after removal of the nitride pillars;

FIG. 7 is a cross section depicting a first step in the formation of a floating gate device comprising a semiconductor substrate assembly having a gate oxide, a floating gate poly, a nitride, an oxide, and a patterned resist thereover;

FIG. 8 depicts the FIG. 7 structure after removal of the resist and treatment of the nitride layer;

FIG. 9 depicts the FIG. 8 structure after etching of the oxide and removal of the untreated nitride;

FIG. 10 depicts the FIG. 9 structure after etching of the poly layer using the pillars as an etch mask;

FIG. 11 depicts the FIG. 10 structure after the removal of nitride pillars, and the formation of an intergate oxide, control gate poly, and patterned resist;

FIG. 12 depicts the FIG. 11 structure after etching the control gate poly and intergate dielectric; and

FIG. 13 depicts an alternate floating gate device.

It should be emphasized that the drawings herein may not be to exact scale and are schematic representations. The drawings are not intended to portray the specific parameters, materials, particular uses, or the structural details of the invention, which can be determined by one of skill in the art by examination of the information herein.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

A first embodiment of the invention which decreases the device feature size that can be produced with conventional semiconductor photolithography equipment is depicted in FIGS. 1-6. FIG. 1 depicts a semiconductor wafer substrate assembly 10 which can comprise a semiconductor wafer and one or more layers, such as an overlying dielectric, depending on the use of the invention. FIG. 1 further depicts a layer 12 such as polycrystalline silicon (poly) which is to be patterned. A 2,000 angstrom (Å) poly layer can be formed in a low pressure chemical vapor deposition (LPCVD) furnace at about 620° C. in about 30 minutes using an atmosphere of silane gas (SiH4), or by using another workable method.

Next, a layer of plasma enhanced (PE) silicon nitride (Si3N4) 14 is deposited by chemical vapor deposition (CVD). The deposition can be performed in an Oxford Plasma Technology µP80 Plus by placing the wafer substrate assembly on the grounded electrode and heating the assembly to between about 200° C. and about 600° C., preferably about 400° C.
The other electrode is driven at a frequency of about 13.56 MHz with a power of between about 100 watts (W) and about 800 W, preferably about 500 W. Silane gas, for example having a purity of 99.999%, ammonia (NH3) having a purity of 99.995%, and nitrogen (N2) having a purity of 99.995% are each pumped into the chamber. Silane can be pumped at a flow rate of between about 5 standard cubic centimeters per minute (sccm) and about 300 sccm, more preferably between about 10 sccm and about 100 sccm, and most preferably about 20 sccm; ammonia can be pumped at between about 5 sccm and about 300 sccm, more preferably between about 10 sccm and about 150 sccm, and most preferably about 20 sccm; and nitrogen can be pumped at a flow rate of between about 1,500 sccm and about 5,000 sccm, preferably at about 4,000 sccm. Silicon nitride forms at a rate of between about 500 Å/min and about 5,000 Å/min, and thus the process can be continued for between about 1 minute and about 10 minutes, depending on the flow rates, power, etc., to form a 5,000 Å thick layer.

Next, a layer of silicon dioxide 16 is formed using a PE tetraethyl orthosilicate (TEOS) source. For example, an SiO2 layer between about 1,000 Å and about 3,000 Å thick, for example about 2,000 Å thick, can be formed by providing a supply of liquid TEOS at a temperature of between about 200° C. and about 550° C., preferably about 400° C., for about 20 seconds.

Finally, a patterned photoresist layer 18 is formed over the TEOS layer 16 using means known in the art. For purposes of illustration, the photoresist layer provided has a width which is the minimum allowable by current photolithography.

The structure of FIG. 1 is etched with a dry anisotropic etch to result in the structure of FIG. 2. An etch which would adequately remove the SiO2 layer 16 and the Si3N4 layer 14 having the thicknesses described above includes C4F8 at a flow rate of between about 5 sccm and 50 sccm, preferably between about 5 sccm and about 20 sccm, and more preferably about 10 sccm, and argon at a flow rate of between about 50 sccm and about 500 sccm, preferably about 300 sccm, for between about 20 seconds and 200 seconds, preferably about 100 seconds. This etch has a selectivity to poly, which facilitates stopping the etch on the underlying poly layer 12. In other uses of the invention where layer 12 is not poly, the etch of the FIG. 1 structure may be performed with another suitable etch which facilitates stopping on the selected layer. After etching, the resist layer 18 is removed according to means known in the art.

The structure of FIG. 2 with resist 18 removed is treated with an oxygen plasma. The structure to be treated is placed in a downflow plasma chamber. The operating conditions include a radio frequency (RF) power of between about 300 W and about 3,000 W, more preferably between about 700 W and about 2,000 W, and most preferably about 1,000 W; an oxygen flow rate of between about 50 sccm and about 5,000 sccm, preferably between about 50 sccm and about 1,000 sccm, and most preferably about 200 sccm; and a gas pressure of between about 10 millitorr (mTorr) and about 1,000 mTorr, preferably about 100 mTorr, for a duration of between about 1 minute and about 10 minutes for the structure described above. After the oxygen plasma treatment, the structure of FIG. 3 results, wherein the silicon nitride layer comprises treated areas 30 along the sidewalls and untreated areas 32 in the center where the treatment does not penetrate.
The width of the treated material is self-limiting as it forms and protects the center portion of the nitride so that the center portion remains untreated. Treating the silicon dioxide does not substantially alter the etch characteristics of the material relative to an untreated portion.

Contrary to the process and results of Hicks et al. previously described, which uses a layer treated throughout, the instant embodiment of the invention uses an untreated or minimally treated center portion. Thus a shorter duration and a lighter treatment may be used to prevent or reduce treatment of the center portion of the silicon nitride layer. In further contrast to Hicks, an overlying layer (in the instant embodiment, TEOS layer 16) reduces treatment of the center portion. It is believed that an untreated to treated etch ratio of at least 30:1 will be obtained with the process and structure as described above.

Subsequently, the structure of FIG. 3 is etched, for example with an anisotropic RIE etch using SF6 at a flow rate of from about 5 sccm to about 50 sccm, more preferably between about 5 sccm and about 30 sccm, and most preferably about 20 sccm; an RF power of between about 50 W and about 300 W, more preferably between about 50 W and 200 W, and most preferably about 100 W; and a pressure in the range of from about 10 mTorr to about 200 mTorr, more preferably from about 20 mTorr to about 200 mTorr, and most preferably about 100 mTorr. Generally, increasing the pressure will decrease the etching of the treated Si3N4 and increase etching of the untreated Si3N4. Both the treated and untreated portions of the silicon dioxide layer 16 are readily etched, as is the untreated silicon nitride 32 after the oxide has been removed. Etching the oxide 16 and the untreated silicon nitride 32 results in the structure of FIG. 4, including pillars of treated silicon nitride 40.

The FIG. 4 structure is subsequently etched to pattern the poly layer using the Si3N4 pillars 40 as an etch mask, resulting in the poly structures 50 of FIG. 5. An anisotropic etch using Cl2 at a flow rate of between about 10 sccm and about 200 sccm, preferably about 80 sccm, and a pressure of between about 2 mTorr and about 50 mTorr, preferably about 10 mTorr, will clear most of the exposed poly. The etch characteristic can be altered to produce a more isotropic etch to ensure removal of the residual material in the narrow areas between pillars 40.

Finally, the Si3N4 pillars 40 are removed to result in the structure of FIG. 6. A hydrofluoric acid (HF) wet etch in a 10% HF solution for between about 5 minutes and about 20 minutes will sufficiently remove the treated silicon nitride without excessively removing the poly.

It can be seen that the above process yields poly features having a width less than the width of the photoresist 18 of FIG. 1. The width of the pillars 40 depends in part on the length of time the silicon nitride is treated. For example, treating 40% of the nitride at each sidewall results in 20% untreated nitride in the center, and thus the width of each pillar will be 40% of the width of the original mask 18 (see the illustrative sketch below). Treating 25% of the nitride at the sidewalls will similarly result in pillars having a width which is 25% of the width of the original resist mask. Further, the pillars can be narrowed after their formation using an isotropic etch which etches but does not completely remove them.

Various other uses and embodiments can be determined for this process by one of ordinary skill in the art.
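For illustration only, the pillar-width arithmetic described above can be captured in a few lines of Python; this sketch assumes both sidewalls are treated to the same depth, and the function names are illustrative rather than part of the disclosure.

def untreated_center_fraction(treated_fraction_per_sidewall):
    # Fraction of the original mask width left untreated in the center,
    # assuming both sidewalls are treated to the same depth (an assumption).
    return 1.0 - 2.0 * treated_fraction_per_sidewall

def pillar_width_fraction(treated_fraction_per_sidewall):
    # Each treated-nitride pillar is as wide as one treated sidewall.
    return treated_fraction_per_sidewall

# Example from the text: treating 40% of the nitride at each sidewall leaves a
# 20% untreated center, so each of the two pillars is 40% of the mask width.
assert round(untreated_center_fraction(0.40), 9) == 0.20
assert round(pillar_width_fraction(0.40), 9) == 0.40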
For example, another embodiment of the invention which is used to form a floating gate memory structure is depicted in FIGS. 7-12. Floating gate memory devices such as erasable programmable read-only memories (EPROMs), electrically-erasable PROMs (EEPROMs), and flash EEPROMs are well known in the art. For example, the following U.S. Patents by Roger R. Lee, assigned to Micron Technology, Inc. and incorporated herein by reference in their entirety, describe various read-only memory cells and their methods of manufacture: U.S. Pat. No. 5,089,867 issued Feb. 18, 1992; U.S. Pat. No. 5,149,665 issued Sep. 22, 1992; U.S. Pat. No. 5,192,872 issued Mar. 9, 1993; U.S. Pat. No. 5,241,202 issued Aug. 31, 1993; U.S. Pat. No. 5,260,593 issued Nov. 9, 1993; U.S. Pat. No. 5,444,279 issued Aug. 22, 1995; and U.S. Pat. No. 5,658,814 issued Aug. 19, 1997.

FIG. 7 depicts a semiconductor wafer assembly 70, a gate oxide layer 72 such as a tunnel oxide, a floating gate poly layer 74 between about 2,000 Å and about 8,000 Å thick, a silicon nitride layer 76 between about 2,000 Å and about 8,000 Å thick, a TEOS layer 78 between about 500 Å and about 2,000 Å thick, and a patterned resist layer 80. The TEOS oxide layer 78 and Si3N4 layer 76 are etched using the resist layer 80 as a pattern to result in a structure similar to that of FIG. 2. The resist layer is removed and the structure is treated with an oxygen plasma as described above for between about 3 minutes and about 30 minutes to result in the structure of FIG. 8, which comprises a Si3N4 layer having oxygen plasma treated portions 82 and an untreated portion 84.

An anisotropic etch is performed on the FIG. 8 structure to remove the TEOS layer and the untreated Si3N4 layer and to result in the structure of FIG. 9 having treated Si3N4 pillars 90. An RIE etch as described above using SF6 at a flow rate of from about 10 sccm to about 30 sccm, an RF power of between about 50 W and about 500 W, and a pressure of between about 20 mTorr and about 200 mTorr would be sufficient.

Next, the FIG. 9 structure is etched using the Si3N4 pillars 90 as an etch mask to result in the poly structure 100 of FIG. 10. In this embodiment the floating gate comprises a generally U-shaped vertical cross section. An anisotropic etch using Cl2 at a flow rate of between about 10 sccm and about 200 sccm, preferably 80 sccm, and a pressure of between about 2 mTorr and about 50 mTorr for about 2 minutes will clear most of the exposed portion of an 8,000 Å poly layer and will leave a horizontally-oriented poly "stringer" bridging or spanning the two vertically-oriented poly pillars as depicted. Subsequently, the Si3N4 pillars 90 are removed, for example with a wet HF etch as described above.

As depicted in FIG. 11, an intergate dielectric layer 110, for example an oxide-nitride-oxide stack, is formed, and a control gate layer 112 is formed according to means known in the art. A layer of tungsten silicide (not depicted) can be formed according to means known in the art to decrease the sheet resistance of the control gate. A patterned photoresist layer 114 is provided which will define the control gate and further define the floating gate. The FIG. 11 structure is etched to form the structure of FIG. 12, then the resist is removed. As depicted in FIG. 12, the control gate has a lower surface which is generally conformal with the floating gate.
Next, wafer processing continues according to means known in the art which will result in a quasi split gate memory device.

In an alternate embodiment, the poly layer 112 of FIG. 11 is formed thick enough to impinge on itself within the concave surface formed by the floating gate. This effectively forms a thicker poly over the floating gate than between adjacent transistors. The mask 114 is omitted, thereby eliminating a masking step. An anisotropic etch of the poly is performed to stop on oxide 110 and to clear the poly from between adjacent transistors. The etch is timed such that a portion of the poly 112 remains in the recess defined by the floating gate as depicted in FIG. 13. In this embodiment the floating gate and control gate each have an upper surface, and the upper surfaces are generally coplanar as depicted. Further, the floating gate has a generally concave profile and the lower surface of the control gate is conformal with the concave profile. A process using chemical mechanical polishing (CMP) can also be used by one of ordinary skill in the art, from the description herein, to form the structure of FIG. 13.

While this invention has been described with reference to illustrative embodiments, this description is not meant to be construed in a limiting sense. Various modifications of the illustrative embodiments, as well as additional embodiments of the invention, will be apparent to persons skilled in the art upon reference to this description. It is therefore contemplated that the appended claims will cover any such modifications or embodiments as fall within the true scope of the invention. |
In a particular embodiment, a method is disclosed that includes receiving an interrupt at a first thread, the first thread including a lowest priority thread of a plurality of executing threads at a processor at a first time. The method also includes identifying a second thread, the second thread including a lowest priority thread of a plurality of executing threads at a processor at a second time. The method further includes directing a subsequent interrupt to the second thread. |
WHAT IS CLAIMED IS:

1. A method comprising: receiving an interrupt at a first thread, the first thread comprising a lowest priority thread of a plurality of executing threads at a processor at a first time; identifying a second thread, the second thread comprising a lowest priority thread of the plurality of executing threads at a processor at a second time; and directing a subsequent interrupt to the second thread.

2. The method of claim 1, wherein the interrupt indicates at least one task is ready to be executed.

3. The method of claim 1, wherein the first thread is executing a lowest priority task of a plurality of executing tasks.

4. The method of claim 1, wherein the first thread is an idle thread.

5. The method of claim 1, wherein the first thread receives the interrupt and initiates an interrupt routine to select a highest priority task from a ready task list and wherein a first priority of the highest priority task from the ready task list is compared to a second priority of a lowest priority task from an executing task list and tasks are swapped only when the first priority is higher than the second priority.

6. The method of claim 5, further comprising: after swapping tasks to execute the highest priority task at the first thread and to return an executing task to the ready task list, determining if any ready tasks of the ready task list have a higher priority than a lowest priority executing task.

7. The method of claim 6, further comprising checking a schedule to determine if any ready task has a higher priority than the lowest priority executing task by comparing the lowest priority executing task to a highest priority ready task.

8. The method of claim 6, further comprising: determining the lowest priority executing task; and using the interrupt routine to direct the subsequent interrupt to the second thread, wherein the second thread is executing the lowest priority executing task.

9. The method of claim 8, further comprising: selectively setting the subsequent interrupt based on a result of checking a schedule to determine if any ready task has a higher priority than the lowest priority executing task.

10. The method of claim 1, wherein an interrupt controller directs threads of the plurality of executing threads other than the first thread to be masked from receiving the interrupt.

11. A method comprising: receiving an interrupt at a first thread of a set of threads, wherein each thread of the set of threads executes a respective task of a set of executing tasks, and wherein each task of the set of executing tasks and each task of a set of ready tasks has a respective priority; and iteratively swapping a lowest priority task of the set of executing tasks with a highest priority task of the set of ready tasks until each task of the set of executing tasks has a priority that is greater than or equal to a priority of every task of the set of ready tasks.

12. The method of claim 11, wherein each iteration of the iterative swapping includes, after swapping each executing task with a ready task to form a next set of executing tasks and a next set of ready tasks: determining a lowest priority task of the next set of executing tasks; and sending an interrupt to the lowest priority thread to perform a next iteration of the iterative swapping when a priority of the lowest priority task of the next set of executing tasks is less than a priority of a highest priority task of the next set of ready tasks.

13.
A system comprising: a multithreaded processor configured to execute a plurality of threads such that a plurality of executing threads are running highest priority tasks, wherein the multithreaded processor is configured to schedule tasks such that all executing tasks have a priority at least as high as a highest priority of all ready tasks.

14. The system of claim 13, wherein the multithreaded processor is configured such that an interrupt directed to a lowest priority thread of the plurality of executing threads does not impact performance of highest priority threads of the plurality of executing threads.

15. The system of claim 13, wherein the multithreaded processor is configured such that the lowest priority thread of the plurality of executing threads receives an interrupt and launches an interrupt routine to select a highest priority task from a ready task list.

16. The system of claim 13, wherein the multithreaded processor is configured such that, after each swapping of tasks to execute a highest priority task of a ready task list and to return a prior executing task to the ready task list, a schedule is checked to determine if any ready task has a higher priority than any executing task.

17. The system of claim 13, wherein the multithreaded processor is configured such that a schedule is updated using a minimum possible number of swaps of executing tasks and ready tasks so that the updated schedule has every executing task having a priority at least as high as a highest priority of the ready tasks.

18. The system of claim 13, further comprising: a first data structure including a prioritized executing task list of tasks executing on the plurality of executing threads; and a second data structure including a prioritized ready task list of tasks ready to execute on the plurality of executing threads.

19. The system of claim 18, further comprising: an interrupt controller configured to direct an interrupt to a lowest priority thread of the plurality of executing threads.

20. The system of claim 18, wherein an interrupt mask is configured to direct an interrupt to a lowest priority thread of the plurality of executing threads.

21. The system of claim 19, further comprising: a scheduler configured to move a highest priority task from the prioritized ready task list of the second data structure to the prioritized executing task list of the first data structure to execute the highest priority task on the interrupted lowest priority thread and to check a schedule to determine whether any ready tasks in the second data structure have a higher priority than a lowest priority executing task in the first data structure.

22. The system of claim 21, wherein the lowest priority thread of the plurality of executing threads is executing a lowest priority task from the prioritized executing task list of the first data structure.

23. The system of claim 21, wherein the lowest priority thread of the plurality of executing threads is an idle thread executing an idle task having the lowest priority on the prioritized executing task list of the first data structure.

24.
A computer-readable medium containing computer executable instructions that are executable to cause a computer to: direct an interrupt to a lowest priority thread of a plurality of executing threads, wherein the interrupt indicates at least one task is ready to be executed, wherein the lowest priority thread of the plurality of executing threads is either executing a lowest priority task or is an idle thread, and wherein the lowest priority thread of the plurality of executing threads receives the interrupt and initiates an interrupt routine to select a highest priority task from a ready task list.

25. The computer-readable medium of claim 24, wherein the computer executable instructions are further executable to cause the computer to: after swapping tasks to execute the highest priority task from the ready task list and to return an executing task to the ready task list, check a schedule to determine whether any ready tasks have higher priority than any executing task, wherein checking the schedule to determine if any ready task has a higher priority than any executing task includes comparing a lowest priority executing task to a highest priority ready task; use an interrupt routine to direct a subsequent interrupt to a particular thread of the plurality of executing threads that is executing the lowest priority task; and selectively raise the subsequent interrupt based on a result of checking the schedule to determine if any ready task has a higher priority than the lowest priority executing task. |
REAL-TIME MULTITHREADED SCHEDULER AND SCHEDULING METHOD

I. Field of the Disclosure

[0001] The present disclosure is generally directed to a real-time multithreaded scheduler and scheduling method.

II. Background

[0002] Advances in technology have resulted in smaller and more powerful computing devices. For example, there currently exist a variety of portable personal computing devices, including wireless computing devices, such as portable wireless telephones, personal digital assistants (PDAs), and paging devices that are small, lightweight, and easily carried by users. More specifically, portable wireless telephones, such as cellular telephones and Internet Protocol (IP) telephones, can communicate voice and data packets over wireless networks. Further, many such wireless telephones include other types of devices that are incorporated therein. For example, wireless telephones can also include a digital still camera, a digital video camera, a digital recorder, and an audio file player. Also, such wireless telephones can process executable instructions, including software applications, such as a web browser application, that can be used to access the Internet. As such, these wireless telephones can include significant computing capabilities.

[0003] Digital signal processors (DSPs), image processors, and other processing devices are frequently used in portable personal computing devices and operate in conjunction with an operating system. One requirement of a real-time operating system (RTOS) is strict priority scheduling. On a single processor, the requirement is that the highest priority executable task should be scheduled. Typically, in a multithreaded or multiprocessor system with multiple central processing units (CPUs), a specific task is bound to a specific hardware thread or CPU and a single-processor scheduler algorithm is run on each hardware thread or CPU independently. This approach does not satisfy the RTOS constraint that the highest priority executable tasks overall should be scheduled, and it requires knowledge of what hardware thread or CPU to schedule the task on ahead of time, knowledge that may not be available.

III. Summary

[0004] In a particular embodiment, a method is disclosed that includes receiving an interrupt at a first thread, the first thread including a lowest priority thread of a plurality of executing threads at a processor at a first time. The method also includes identifying a second thread, the second thread including a lowest priority thread of a plurality of executing threads at a processor at a second time. The method further includes directing a subsequent interrupt to the second thread.

[0005] In another embodiment, a method is disclosed that includes receiving an interrupt at a first thread of a set of threads. Each thread of the set of threads executes a respective task of a set of executing tasks. Each task of the set of executing tasks and each task of a set of ready tasks has a respective priority. The method also includes iteratively swapping a lowest priority task of the set of executing tasks with a highest priority task of the set of ready tasks until each task of the set of executing tasks has a priority that is greater than or equal to a priority of every task of the set of ready tasks.

[0006] In another embodiment, a system is disclosed that includes a multithreaded processor configured to execute a plurality of threads such that a plurality of executing threads are running the highest priority tasks.
The multithreaded processor is configured to schedule tasks such that executing tasks have a priority at least as high as a highest priority of all ready tasks. [0007] In another embodiment, a computer-readable medium is disclosed. The computer-readable medium contains computer executable instructions that are executable to cause a computer to direct an interrupt to a lowest priority thread of a plurality of executing threads. The interrupt indicates at least one task is ready to be executed. The lowest priority thread of the plurality of executing threads is either executing a lowest priority task or is an idle thread. The lowest priority thread of the plurality of executing threads receives the interrupt and initiates an interrupt routine to select a highest priority task from a ready task list. [0008] One particular advantage provided by disclosed embodiments is that tasks are scheduled so that the highest priority threads are executed with reduced disturbance from interrupts. Because a low priority thread receives interrupts, raising a reschedule interrupt automatically reschedules the low priority thread without disturbing the highest priority threads. Additionally, an external interrupt will interrupt the lowest priority thread rather than the highest priority threads. [0009] Another particular advantage provided by disclosed embodiments is that in the scenario where multiple tasks are made ready to execute and are higher priority than more than one running task, the low priority running tasks are swapped with the new higher priority tasks with the minimum number of swaps, and without overhead to the other running high priority tasks. [0010] Another advantage provided by disclosed embodiments is that a priority of the interrupt handler thread may be compared directly to a priority of the currently-running thread. If the currently-running thread has a lower priority than the interrupt handler thread, then the interrupt handler thread may be scheduled immediately. [0011] Other aspects, advantages, and features of the present disclosure will become apparent after review of the entire application, including the following sections: Brief Description of the Drawings, Detailed Description, and the Claims. IV. Brief Description of the Drawings [0012] FIG. 1 is a block diagram of a particular illustrative embodiment of a processing system; [0013] FIG. 2 is a block diagram of another particular illustrative embodiment of a processing system showing an interrupt directed to a lowest priority executing thread; [0014] FIG. 3 is a block diagram of the particular illustrative embodiment of the processing system of FIG. 2, showing task swapping; [0015] FIG. 4 is a block diagram of the particular illustrative embodiment of the processing system of FIG. 2, showing another interrupt directed to a lowest priority executing thread; [0016] FIG. 5 is a block diagram of the particular illustrative embodiment of the processing system of FIG. 2, showing a task moving from a ready task list to an executing task list; [0017] FIG. 6 is a block diagram of the particular illustrative embodiment of the processing system of FIG. 2, showing another interrupt directed to a lowest priority executing thread; [0018] FIG. 7 is a block diagram of the particular illustrative embodiment of the processing system of FIG. 2, showing task swapping; [0019] FIG. 8 is a block diagram of the particular illustrative embodiment of the processing system of FIG. 
2, showing another interrupt directed to a lowest priority executing thread; [0020] FIG. 9 is a block diagram of the particular illustrative embodiment of the processing system of FIG. 2, showing task swapping; [0021] FIGS. 10A-10C are flow diagrams of a first illustrative embodiment of a method to schedule tasks in real-time on a multithreaded processor; [0022] FIG. 11 is a flow diagram of a second illustrative embodiment of a method to schedule tasks in real-time on a multithreaded processor; [0023] FIG. 12 is a flow diagram of a third illustrative embodiment of a method to schedule tasks in real-time on a multithreaded processor; [0024] FIG. 13 is a flow diagram of a fourth illustrative embodiment of a method to schedule tasks in real-time on a multithreaded processor; and [0025] FIG. 14 is a block diagram of a particular embodiment of a portable communication device including a real-time multithreaded scheduler module. V. Detailed Description [0026] Referring to FIG. 1, a particular illustrative embodiment of a processing system is depicted and generally designated 100. The multithreaded processor 100 includes a memory 102 that is coupled to an instruction cache 110 via a bus interface 108. The multithreaded processor 100 also includes a data cache 112 that is coupled to the memory 102 via the bus interface 108. The instruction cache 110 is coupled to a sequencer 114 via a bus 111. In a particular example, the sequencer 114 can also receive general interrupts 116, which may be retrieved from an interrupt register (not shown). In a particular embodiment, the instruction cache 110 may be coupled to the sequencer 114 via a plurality of current instruction registers, which may be coupled to the bus 111 and associated with particular threads of the multithreaded processor 100. In a particular embodiment, the multithreaded processor 100 is an interleaved multithreaded processor including six threads. [0027] In a particular embodiment, the bus 111 is a sixty-four (64)-bit bus and the sequencer 114 is configured to retrieve instructions from the memory 102 via instruction packets that include multiple instructions having a length of thirty-two (32) bits each. The bus 111 is coupled to a first instruction execution unit 118, a second instruction execution unit 120, a third instruction execution unit 122, and a fourth instruction execution unit 124. Each instruction execution unit 118, 120, 122, 124 can be coupled to a general register file 126 via a second bus 128. The general register file 126 can also be coupled to the sequencer 114 and to the data cache 112 and to the memory 102 via a third bus 130. [0028] The multithreaded processor 100 may also include supervisor control registers 132 to store one or more priority settings that may be accessed by a control unit 150 that includes a real-time priority scheduler 158 and an interrupt controller 156 to determine what tasks to execute on each of the processing threads. The real-time priority scheduler 158 may be implemented as a software routine. Each processing thread may have one or more associated priority settings, such as one or more bit values stored at a supervisor status register that is dedicated to the particular thread. 
[0029] During operation, the multithreaded processor 100 executes a plurality of threads such that a plurality of executing threads are running highest priority tasks, where the multithreaded processor 100 schedules tasks such that all executing tasks have a priority at least as high as a highest priority of all ready tasks. In a particular embodiment, the real-time priority scheduler 158 schedules tasks such that all executing tasks have a priority at least as high as a highest priority of all ready tasks. In a particular embodiment, the multithreaded processor 100 is configured such that an interrupt directed to a lowest priority thread of the plurality of executing threads does not impact performance of highest priority threads of the plurality of executing threads. For example, the interrupt controller 156 may be configured such that an interrupt directed to the lowest priority thread of the plurality of executing threads does not impact performance of the highest priority threads of the plurality of executing threads. As used herein, an interrupt may be anything that stops normal execution and begins execution of a special handler. An interrupt may mean any breaking of the normal program flow. [0030] In a particular embodiment, the multithreaded processor 100 is configured such that the lowest priority thread of the plurality of executing threads receives an interrupt and runs an interrupt routine to select a highest priority task from a ready task list. For example, the interrupt controller 156 may be configured such that the lowest priority thread of the plurality of executing threads receives an interrupt and the real-time priority scheduler 158 may run an interrupt routine to select a highest priority task from a ready task list. The interrupt logic may be configured such that only the lowest priority thread is able to take the interrupt. In a particular embodiment, the multithreaded processor 100 is configured such that, after each swapping of tasks to execute a highest priority task of a ready task list and to return a prior executing task to the ready task list, a schedule is checked to determine if any ready task has a higher priority than any executing task. For example, the real-time priority scheduler 158 may check a schedule to determine if any ready task has a higher priority than any executing task, after each swapping of tasks to execute the highest priority task of the ready task list and to return the prior executing task to the ready task list. [0031] In a particular embodiment, the multithreaded processor 100 is configured such that a schedule is updated using a minimum possible number of swaps of executing tasks and ready tasks so that the updated schedule has every executing task having a priority at least as high as a highest priority of the ready tasks. For example, the real-time priority scheduler 158 may be configured such that the schedule is updated using the minimum possible number of swaps of executing tasks and ready tasks so that the updated schedule has every executing task having a priority at least as high as a highest priority of the ready tasks. [0032] Referring to FIG. 2, a particular illustrative embodiment of a processing system is depicted and generally designated 200. In a particular embodiment, the multithreaded processor 200 is substantially similar to the multithreaded processor 100 of FIG. 1. 
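Before turning to FIG. 2 in detail, the validity test implied by paragraphs [0029] through [0031] can be sketched in C. The array layout, the helper name, and the assumption that priority values are nonnegative integers are conveniences of this sketch rather than features of the disclosure; recall that a numerically lower value denotes a higher priority (0 is highest).

    /* A minimal sketch, assuming tasks are tracked as arrays of nonnegative
     * priority values where a numerically lower value means a higher
     * priority (0 is highest). */
    #include <limits.h>

    /* Returns nonzero when the schedule is valid, i.e. no ready task
     * outranks the lowest priority executing task. */
    static int schedule_is_valid(const int *exec_prio, int n_exec,
                                 const int *ready_prio, int n_ready)
    {
        int lowest_exec = 0;           /* numerically largest executing value */
        int highest_ready = INT_MAX;   /* numerically smallest ready value */

        for (int i = 0; i < n_exec; i++)
            if (exec_prio[i] > lowest_exec)
                lowest_exec = exec_prio[i];
        for (int i = 0; i < n_ready; i++)
            if (ready_prio[i] < highest_ready)
                highest_ready = ready_prio[i];

        return lowest_exec <= highest_ready;
    }

When this test fails, restoring validity by swapping the lowest priority runner for the highest priority ready task, one outranked runner at a time, moves each task at most once, consistent with the minimum number of swaps noted in paragraph [0031].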
The multithreaded processor 200 includes a real-time priority scheduler 202 coupled to an interrupt controller 204, an interrupt mask 220, a first data structure 206, and a second data structure 210. The first data structure 206 includes a prioritized executing task list 208 of tasks executing on a plurality of executing threads 214. The tasks are labeled by their respective priorities, with 0 labeling the highest priority tasks, 1 labeling the next highest priority tasks, and so forth. In a particular embodiment, idle tasks have the lowest priority. The second data structure 210 includes a prioritized ready task list 212 of tasks ready to execute on the plurality of executing threads 214. The interrupt controller 204 may be configured to direct an interrupt to a lowest priority thread of the plurality of executing threads 214, as shown by the arrow 218. The interrupt mask 220 may be a bit mask to indicate an availability of each thread to receive an interrupt. In a particular embodiment, the interrupt mask 220 is configured to direct an interrupt to a lowest priority thread of the plurality of executing threads 214, as shown by the arrow 218. [0033] The real-time priority scheduler 202 may be configured to move a highest priority task from the prioritized ready task list 212 of the second data structure 210 to the prioritized executing task list 208 of the first data structure 206 to execute the highest priority task on an interrupted lowest priority thread. The real-time priority scheduler 202 may also be configured to check a schedule 222 to determine whether any ready tasks in the second data structure 210 have a higher priority than a lowest priority executing task in the first data structure 206. Checking the schedule 222 may include inspecting the first data structure 206 and the second data structure 210 to determine whether any ready tasks in the second data structure 210 have a higher priority than a lowest priority executing task in the first data structure 206. In a particular embodiment, the lowest priority thread of the plurality of executing threads 214 is executing a lowest priority task from the prioritized executing task list 208 of the first data structure 206. For example, the thread executing the task with a priority of 6 in FIG. 2 may be the lowest priority thread. [0034] In operation, one of the two tasks with a priority of 4 executing on the plurality of executing threads 214 may launch or "wake up" a task with a priority of 2, as shown by the arrow 216. This priority 2 task appears on the prioritized ready task list 212 of the second data structure 210. The interrupt mask 220 masks off all the tasks executing on the plurality of executing threads 214 from the interrupt controller 204, except for the lowest priority thread, which is executing the priority 6 task. The interrupt controller 204 may direct an interrupt, as shown by the arrow 218, to the lowest priority thread of the plurality of executing threads 214, the thread that is executing the priority 6 task, in response to the launching or waking up of the priority 2 task. [0035] Referring to FIG. 3, the real-time priority scheduler 202 may move the highest priority task, the priority 2 task, from the prioritized ready task list 212 of the second data structure 210 to the prioritized executing task list 208 of the first data structure 206 to execute the highest priority task, the priority 2 task, on the interrupted lowest priority thread, as shown by the arrow 300. 
After swapping tasks to execute the highest priority task, the priority 2 task, from the prioritized ready task list 212 and to return a prior executing task, the priority 6 task, to the prioritized ready task list 212, as shown by the arrow 300, the real-time priority scheduler 202 may check the schedule 222 to determine whether any ready tasks in the second data structure 210 have a higher priority than a lowest priority executing task in the first data structure 206. In FIG. 3, after the swapping of the tasks shown by the arrow 300, the ready tasks in the second data structure 210 have either priority 6 or priority 8, where the lowest priority executing tasks in the first data structure 206 are the two priority 4 tasks, so none of the ready tasks in the second data structure 210 have a higher priority than the lowest priority executing tasks in the first data structure 206. The interrupt mask 220 masks off all the tasks executing on the plurality of executing threads 214 from the interrupt controller 204, except for the lowest priority threads, which are executing the priority 4 tasks. [0036] Referring to FIG. 4, as another example of operation, a task with a priority of 6 executing on the plurality of executing threads 214 may launch or "wake up" a task with a priority of 2 and two tasks with a priority of 4, as shown by the arrows 400. The priority 2 task and the two priority 4 tasks appear on the prioritized ready task list 212 of the second data structure 210. The interrupt mask 220 masks off all the tasks executing on the plurality of executing threads 214 from the interrupt controller 204, except for the lowest priority thread, which is idle. In response to the launching or waking up of the tasks, the interrupt controller 204 may direct an interrupt, as shown by the arrow 402, to the lowest priority thread of the plurality of executing threads 214. In a particular embodiment, the lowest priority thread of the plurality of executing threads 214 is an idle thread executing an idle task, which is defined as having the lowest priority on the prioritized executing task list 208 of the first data structure 206 due to its idle status. [0037] Referring to FIG. 5, the real-time priority scheduler 202 of FIG. 4 may move the highest priority task, the priority 2 task, from the prioritized ready task list 212 of the second data structure 210 to the prioritized executing task list 208 of the first data structure 206 to execute the highest priority task, the priority 2 task, on the interrupted lowest priority thread, as shown by the arrow 500. After moving the highest priority task, the priority 2 task, from the prioritized ready task list 212, as shown by the arrow 500, the real-time priority scheduler 202 may check the schedule 222 to determine whether any ready tasks in the second data structure 210 have a higher priority than a lowest priority executing task in the first data structure 206. In FIG. 5, after the moving of the task shown by the arrow 500, the ready tasks in the second data structure 210 both have priority 4, where the lowest priority executing task in the first data structure 206 is the priority 8 task, so both of the ready tasks in the second data structure 210 have a higher priority than the lowest priority executing task in the first data structure 206. The interrupt mask 220 masks off all the tasks executing on the plurality of executing threads 214 from the interrupt controller 204, except for the lowest priority thread, which is executing the priority 8 task. 
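A recurring element of FIGS. 2 through 9 is the rebuilding of the interrupt mask 220 so that only the lowest priority thread, or threads when priorities tie, can take the next interrupt. One possible rendering is sketched below; the single-word mask, the per-thread priority array, and the bit convention are assumptions of this sketch, not structures defined by the disclosure.

    /* A hedged sketch of rebuilding a per-thread interrupt mask so that
     * only the lowest priority thread(s) remain able to take interrupts.
     * A set bit masks a thread off; numerically larger priority values
     * are lower priorities. Assumes n_threads fits in one word, which
     * holds for the six-thread processor of paragraph [0026]. */
    static unsigned rebuild_interrupt_mask(const int *thread_prio, int n_threads)
    {
        int lowest = thread_prio[0];
        unsigned mask = 0;

        for (int i = 1; i < n_threads; i++)
            if (thread_prio[i] > lowest)
                lowest = thread_prio[i];

        for (int i = 0; i < n_threads; i++)
            if (thread_prio[i] != lowest)  /* tied threads stay unmasked,
                                              as with the two priority 4
                                              threads of FIG. 3 */
                mask |= 1u << i;

        return mask;
    }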
[0038] Referring to FIG. 6, the two priority 4 tasks remain on the prioritized ready task list 212 of the second data structure 210. The interrupt mask 220 has masked off all the tasks executing on the plurality of executing threads 214 from the interrupt controller 204, except for the lowest priority thread, which is executing the priority 8 task. The interrupt controller 204 may direct an interrupt, as shown by the arrow 600, to the lowest priority thread of the plurality of executing threads 214, the thread that is executing the priority 8 task. [0039] Referring to FIG. 7, the real-time priority scheduler 202 may move the highest priority task, either of the priority 4 tasks, from the prioritized ready task list 212 of the second data structure 210 to the prioritized executing task list 208 of the first data structure 206 to execute the highest priority task, one of the priority 4 tasks, on the interrupted lowest priority thread, as shown by the arrow 700. After swapping tasks to execute the highest priority task, one of the priority 4 tasks, from the prioritized ready task list 212 and to return a prior executing task, the priority 8 task, to the prioritized ready task list 212, as shown by the arrow 700, the real-time priority scheduler 202 may check the schedule 222 to determine whether any ready tasks in the second data structure 210 have a higher priority than a lowest priority executing task in the first data structure 206. In FIG. 7, after the swapping of the tasks shown by the arrow 700, the ready tasks in the second data structure 210 have priority 4 and priority 8, where the lowest priority executing task in the first data structure 206 is the priority 6 task, so one of the ready tasks in the second data structure 210, the priority 4 task, has a higher priority than the lowest priority executing task in the first data structure 206. The interrupt mask 220 masks off all the tasks executing on the plurality of executing threads 214 from the interrupt controller 204, except for the lowest priority thread, which is executing the priority 6 task. [0040] Referring to FIG. 8, the priority 4 task remains on the prioritized ready task list 212 of the second data structure 210 along with the priority 8 task. The interrupt mask 220 has masked off all the tasks executing on the plurality of executing threads 214 from the interrupt controller 204, except for the lowest priority thread, which is executing the priority 6 task. In response to the task swapping shown by the arrow 700 of FIG. 7, the interrupt controller 204 may direct an interrupt, as shown by the arrow 800, to the lowest priority thread of the plurality of executing threads 214, the thread that is executing the priority 6 task. [0041] Referring to FIG. 9, the real-time priority scheduler 202 may move the highest priority task, the priority 4 task, from the prioritized ready task list 212 of the second data structure 210 to the prioritized executing task list 208 of the first data structure 206 to execute the highest priority task, the priority 4 task, on the interrupted lowest priority thread, as shown by the arrow 900. 
After swapping tasks to execute the highest priority task, the priority 4 task, from the prioritized ready task list 212 and to return a prior executing task, the priority 6 task, to the prioritized ready task list 212, as shown by the arrow 900, the real-time priority scheduler 202 may check the schedule 222 to determine whether any ready tasks in the second data structure 210 have a higher priority than a lowest priority executing task in the first data structure 206. In FIG. 9, after the swapping of the tasks shown by the arrow 900, the ready tasks in the second data structure 210 have priority 6 and priority 8, where the lowest priority executing tasks in the first data structure 206 are the priority 4 tasks, so none of the ready tasks in the second data structure 210 have a higher priority than the lowest priority executing tasks in the first data structure 206. The interrupt mask 220 masks off all the tasks executing on the plurality of executing threads 214 from the interrupt controller 204, except for the lowest priority threads, which are executing the priority 4 tasks. [0042] Referring to FIG. 10A, a flow diagram of a first illustrative embodiment of a method to schedule tasks in real-time on a multithreaded processor is depicted and generally designated 1000. In a particular embodiment, the method 1000 is implemented on the multithreaded processor 100 of FIG. 1. The method 1000 includes receiving an interrupt and saving the context from an interrupted task, at 1002. The method 1000 also includes adding one or more tasks waiting for the interrupt to a ready queue, at 1004. In a particular embodiment, the ready queue corresponds to the second data structure 210 shown in FIG. 2 that includes the prioritized ready task list 212 of tasks ready to execute on the plurality of executing threads 214. [0043] The method 1000 further includes running the scheduler algorithm shown in FIG. 10B, at 1006. The method 1000 also includes restoring the context and returning to uninterrupted operation, at 1008. Restoring the context and returning to uninterrupted operation may happen at a later point in time, as the scheduler may run other tasks before returning to the interrupted task. [0044] Referring to FIG. 10B, the scheduler algorithm is shown at 1010. The scheduler algorithm 1010 includes removing a currently running task from a running queue and adding that task to the ready queue, at 1012. In a particular embodiment, the running queue corresponds to the first data structure 206 shown in FIG. 2 that includes the prioritized executing task list 208 of tasks executing on the plurality of executing threads 214. The scheduler algorithm 1010 also includes removing the highest priority task from the ready queue and adding that task to the running queue, making that task the new currently running task, at 1014. The scheduler algorithm 1010 further includes determining whether the new currently running task is the lowest priority running task, at 1016. If the new currently running task is the lowest priority running task, then the thread on which the new currently running task is running is configured to take interrupts, at 1018. If the new currently running task is not the lowest priority running task, then the thread on which the new currently running task is running is configured not to take interrupts, at 1020. The scheduler algorithm 1010 further includes running the scheduler check shown in FIG. 10C, at 1022. [0045] Referring to FIG. 10C, the scheduler check is shown at 1024. 
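Before walking through that check step by step, the scheduler algorithm 1010 of paragraph [0044] admits a compact C sketch. The queue primitives (ready_push, ready_pop_highest, and the rest) are hypothetical helpers named here only for illustration; the disclosure does not define them.

    struct task { int prio; };                        /* 0 is the highest priority */

    extern void ready_push(struct task *t);           /* assumed primitive */
    extern struct task *ready_pop_highest(void);      /* assumed primitive */
    extern struct task *lowest_priority_runner(void); /* assumed primitive */
    extern void thread_take_interrupts(int thread, int enable);
    extern void scheduler_check(void);                /* sketched below for FIG. 10C */

    static void scheduler_algorithm(int thread, struct task **running)
    {
        /* 1012: move the current task from the running queue to the ready queue. */
        ready_push(running[thread]);

        /* 1014: the highest priority ready task becomes the new current task. */
        running[thread] = ready_pop_highest();

        /* 1016-1020: only the lowest priority runner may take interrupts. */
        thread_take_interrupts(thread,
                               running[thread] == lowest_priority_runner());

        /* 1022: run the scheduler check of FIG. 10C. */
        scheduler_check();
    }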
The scheduler check 1024 begins at 1026. The scheduler check 1024 includes determining whether some thread is configured to take interrupts, at 1028. If no thread is configured to take interrupts, then the lowest priority thread (or threads) is configured to take interrupts, at 1034. The scheduler check 1024 also includes determining whether the lowest priority running task has a lower priority than the highest priority ready task, at 1030. If the lowest priority running task has a lower priority than the highest priority ready task, then an interrupt is triggered to cause a rescheduling event, at 1036, and the scheduler check 1024 ends, at 1038. [0046] If the lowest priority running task does not have a lower priority than the highest priority ready task, then the scheduler check 1024 further includes determining whether any thread is idle while any task is ready, at 1032. If any thread is idle while any task is ready, then an interrupt is triggered to cause a rescheduling event, at 1036, and the scheduler check 1024 ends, at 1038. If no thread is idle while any task is ready, the scheduler check 1024 ends, at 1038. In a particular embodiment, the interrupt controller is already set up such that the interrupt triggered at 1036 will be delivered to the lowest priority thread or an idle thread. [0047] Referring to FIG. 11, a flow diagram of a second illustrative embodiment of a method to schedule tasks in real-time on a multithreaded processor is depicted and generally designated 1100. In a particular embodiment, the method 1100 is implemented on the multithreaded processor 100 of FIG. 1. The method 1100 includes receiving an interrupt at a first thread, the first thread including a lowest priority thread of a plurality of executing threads at a processor at a first time, at 1102. The method 1100 also includes identifying a second thread, the second thread including a lowest priority thread of a plurality of executing threads at a processor at a second time, at 1104. In a particular embodiment, the second thread is different from the first thread. The method 1100 further includes directing a subsequent interrupt to the second thread, at 1106. For example, interrupts may be directed to the lowest priority threads of the plurality of executing threads 214, as shown by the arrow 218 of FIG. 2, the arrow 402 of FIG. 4, the arrow 600 of FIG. 6, and the arrow 800 of FIG. 8, as described above. In a particular embodiment, the interrupt indicates at least one task is ready to be executed. For example, the interrupt shown by the arrow 218 of FIG. 2 may indicate that a priority 2 task is ready to be executed, the interrupt shown by the arrow 402 of FIG. 4 may indicate that a priority 2 task is ready to be executed, the interrupt shown by the arrow 600 of FIG. 6 may indicate that a priority 4 task is ready to be executed, and the interrupt shown by the arrow 800 of FIG. 8 may indicate that a priority 4 task is ready to be executed. [0048] In a particular embodiment, the first thread is executing a lowest priority task of a plurality of executing tasks. For example, the lowest priority thread of the plurality of executing threads 214 of FIG. 2 may be executing a priority 6 task, the lowest priority threads of FIG. 3 are executing priority 4 tasks, the lowest priority thread of FIG. 5 and FIG. 6 may be executing a priority 8 task, the lowest priority thread of FIG. 7 and FIG. 8 may be executing a priority 6 task, and the lowest priority threads of FIG. 9 are executing priority 4 tasks. 
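The decision logic of the scheduler check 1024 of FIG. 10C, described in paragraphs [0045] and [0046] above, can be sketched in the same style. The helper predicates are assumptions standing in for inspection of the running and ready queues, and trigger_resched_interrupt() stands in for whatever mechanism raises the rescheduling interrupt at 1036.

    /* A sketch of the scheduler check of FIG. 10C under assumed helpers;
     * numerically larger priority values are lower priorities. */
    extern int some_thread_takes_interrupts(void);
    extern void unmask_lowest_priority_threads(void);
    extern int lowest_running_prio(void);  /* numerically largest running value */
    extern int highest_ready_prio(void);   /* numerically smallest ready value */
    extern int any_thread_idle(void);
    extern int any_task_ready(void);
    extern void trigger_resched_interrupt(void);

    void scheduler_check(void)
    {
        /* 1028/1034: guarantee at least one thread can take the next interrupt. */
        if (!some_thread_takes_interrupts())
            unmask_lowest_priority_threads();

        /* 1030/1036: a ready task outranks the lowest priority runner. */
        if (lowest_running_prio() > highest_ready_prio())
            trigger_resched_interrupt();
        /* 1032/1036: a thread idles while work is ready. */
        else if (any_thread_idle() && any_task_ready())
            trigger_resched_interrupt();
    }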
In a particular embodiment, the first thread is an idle thread. For example, the lowest priority thread of the plurality of executing threads 214 of FIG. 4 may be an idle thread. In a particular embodiment, an idle thread is executing an idle task. [0049] The method 1100 also includes, when the first thread receives the interrupt, running an interrupt routine to select a highest priority task from a ready task list, at 1108. A first priority of the highest priority task from the ready task list may be compared to a second priority of a lowest priority task from an executing task list and tasks are swapped only when the first priority is higher than the second priority. For example, in FIG. 2, when the lowest priority thread, the one executing the priority 6 task, of the plurality of executing threads 214 receives the interrupt, as shown by the arrow 218, an interrupt routine may be run to select the highest priority task, the priority 2 task, from the prioritized ready task list 212. [0050] The method 1100 further includes, after swapping tasks to execute the highest priority task of the ready task list at the first thread and to return a prior executing task to the ready task list, determining if any ready tasks of the ready task list have higher priority than a lowest priority executing task, at 1110. For example, after swapping the priority 2 task and the priority 6 task, as shown by the arrow 300 of FIG. 3, it may be determined that no ready tasks of the prioritized ready task list 212, with priority 6 and priority 8, have higher priority than the lowest priority executing tasks of the prioritized executing task list 208, with priority 4. Similarly, after swapping the priority 4 task and the priority 8 task, as shown by the arrow 700 of FIG. 7, it may be determined that one ready task of the prioritized ready task list 212, with priority 4, has higher priority than the lowest priority executing task of the prioritized executing task list 208, with priority 6. [0051] In a particular embodiment, the method 1100 may be repeated as necessary to check a schedule to determine if any additional ready task has a higher priority than the current lowest priority executing task by comparing the lowest priority executing task to a highest priority ready task. Comparing the lowest priority executing task to the highest priority ready task may be one way of determining if any ready tasks of the ready task list have higher priority than the lowest priority executing task. For example, in FIG. 3, it may be determined that no ready tasks have higher priority than the lowest priority executing tasks, the priority 4 tasks, by comparing the priority 4 tasks to the highest priority ready tasks, the priority 6 tasks. Similarly, in FIG. 5, it may be determined that both ready tasks, with priority 4, have higher priority than the lowest priority executing task, the priority 8 task, by comparing the priority 8 task to the highest priority ready tasks, the priority 4 tasks. [0052] In a particular embodiment, the method 1100 further includes determining the lowest priority executing task and using an interrupt routine to direct the subsequent interrupt to the second thread, where the second thread is executing the lowest priority executing task. For example, after swapping the priority 4 task and the priority 8 task, as shown by the arrow 700 of FIG. 
7, it may be determined that the priority 6 task is the lowest priority executing task and an interrupt routine may be used to direct the subsequent interrupt, as shown by the arrow 800 of FIG. 8, to the lowest priority thread of the plurality of executing threads 214 that is executing the priority 6 task. [0053] In a particular embodiment, the method 1100 further includes selectively setting the subsequent interrupt based on a result of checking a schedule to determine if any ready task has a higher priority than the lowest priority executing task. For example, the subsequent interrupt, as shown by the arrow 800 of FIG. 8, may be selectively set based on the result of checking the schedule 222 in FIG. 7 and determining that one of the ready tasks, with priority 4, has higher priority than the priority 6 task, the lowest priority executing task. [0054] In a particular embodiment, the interrupt controller 204 of FIGS. 2-9 directs threads of the plurality of executing threads 214 other than the lowest priority thread of the plurality of executing threads 214 to be masked from receiving the interrupt. For example, as shown in FIGS. 2-9, the interrupt controller 204 may direct the interrupt mask 220 to mask off the plurality of executing threads 214 from receiving the interrupt, except for the lowest priority thread of the plurality of executing threads 214. [0055] Referring to FIG. 12, a flow diagram of a third illustrative embodiment of a method to schedule tasks in real-time on a multithreaded processor is depicted and generally designated 1200. In a particular embodiment, the method 1200 is implemented on the multithreaded processor 100 of FIG. 1. The method 1200 includes receiving an interrupt at a first thread of a set of threads, where each thread of the set of threads executes a respective task of a set of executing tasks, and where each task of the set of executing tasks and each task of a set of ready tasks has a respective priority, at 1202. For example, as shown in FIG. 6, the thread executing the priority 8 task may receive the interrupt, as shown by the arrow 600, where each thread of the plurality of executing threads 214 executes a respective task of the prioritized executing task list 208, and where each task of the prioritized executing task list 208 and each task of the prioritized ready task list 212 has a respective priority. In a particular embodiment, an idle thread executes an idle task that has the lowest priority. For example, as shown in FIG. 4, the thread executing the idle task may receive the interrupt, as shown by the arrow 402. [0056] The method 1200 also includes iteratively swapping a lowest priority task of the set of executing tasks with a highest priority task of the set of ready tasks until each task of the set of executing tasks has a priority that is greater than or equal to a priority of every task of the set of ready tasks, at 1204. For example, as shown in FIGS. 4-9, the lowest priority tasks of the prioritized executing task list 208 may be iteratively swapped with the respective highest priority tasks of the prioritized ready task list 212 until each task of the prioritized executing task list 208 has a priority that is greater than or equal to a priority of every task of the prioritized ready task list 212, as shown in FIG. 9. 
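Collapsed into a single loop, the iterative swapping of step 1204 amounts to the sketch below, reusing the assumed helpers introduced earlier. In the disclosure each iteration is actually driven by a fresh interrupt delivered to the then-lowest priority thread, so the explicit loop, like the swap primitive it calls, is an illustrative simplification rather than the claimed mechanism.

    extern int lowest_running_prio(void);   /* assumed helper, as above */
    extern int highest_ready_prio(void);    /* assumed helper, as above */
    extern void swap_lowest_runner_with_highest_ready(void); /* assumed primitive */

    /* A simplified sketch of method 1200: swap until every executing task
     * has a priority at least as high as every ready task. Each task moves
     * at most once, matching the minimum-swap property noted earlier. */
    static void iterative_swap(void)
    {
        while (lowest_running_prio() > highest_ready_prio())
            swap_lowest_runner_with_highest_ready();
    }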
[0057] In a particular embodiment, the iterative swapping includes, after swapping each executing task with a ready task to form a next set of executing tasks and a next set of ready tasks, determining a lowest priority task of the next set of executing tasks, at 1206. Each iteration of the iterative swapping may include determining a lowest priority task of the next set of executing tasks. For example, after swapping the priority 4 task with the priority 8 task, as shown by the arrow 700 of FIG. 7, the next set of executing tasks may include a priority 0 task, two priority 2 tasks, two priority 4 tasks, and a priority 6 task, and the next set of ready tasks may include a priority 4 task and a priority 8 task, where the priority 6 task may be determined to be the lowest priority task of the next set of executing tasks. Iterative swapping further includes sending an interrupt to the lowest priority thread to perform a next iteration of the iterative swapping when a priority of the lowest priority task of the next set of executing tasks is less than a priority of a highest priority task of the next set of ready tasks, at 1208. For example, as shown by the arrow 800 of FIG. 8, an interrupt may be sent to the thread executing the priority 6 task, the lowest priority task of the next set of executing tasks, to perform a next iteration of the iterative swapping, since the priority 6 task has lower priority than the priority 4 task that has the highest priority of the next set of ready tasks. [0058] Referring to FIG. 13, a flow diagram of a fourth illustrative embodiment of a method to schedule tasks in real-time on a multithreaded processor is depicted and generally designated 1300. In a particular embodiment, the method 1300 is implemented on the multithreaded processor 100 of FIG. 1. The method 1300 includes directing an interrupt to a lowest priority thread of a plurality of executing threads, where the interrupt indicates at least one task is ready to be executed, where the lowest priority thread of the plurality of executing threads is either executing a lowest priority task or is an idle thread, and where the lowest priority thread of the plurality of executing threads receives the interrupt and initiates an interrupt routine to select a highest priority task from a ready task list, at 1302. For example, as shown by the arrow 218 of FIG. 2, an interrupt may be directed to a lowest priority thread of the plurality of executing threads 214, where the interrupt may indicate that a priority 2 task is ready to be executed, where the lowest priority thread may be executing a priority 6 task, the lowest priority task, and where the lowest priority thread executing the priority 6 task may receive the interrupt and may initiate an interrupt routine to select the priority 2 task, the highest priority task, from the prioritized ready task list 212. [0059] The method 1300 also includes, after swapping tasks to execute the highest priority task from the ready task list and to return a prior executing task to the ready task list, checking a schedule to determine whether any ready tasks have higher priority than any executing task, where checking the schedule to determine if any ready task has a higher priority than any executing task includes comparing a lowest priority executing task to a highest priority ready task, at 1304. For example, after swapping the priority 4 task and the priority 8 task, as shown by the arrow 700 of FIG. 
7, the schedule 222 may be checked to determine if any ready task has a higher priority than any executing task by comparing the priority 6 task, the lowest priority executing task, to the priority 4 task, the highest priority ready task. [0060] The method 1300 further includes using an interrupt routine to direct a subsequent interrupt to a particular thread of the plurality of executing threads that is executing the lowest priority task, at 1306. For example, an interrupt routine may be used to direct a subsequent interrupt, as shown by the arrow 800 of FIG. 8, to the particular thread of the plurality of executing threads 214 that is executing the priority 6 task, the lowest priority task. The method 1300 also includes selectively raising or initiating the subsequent interrupt based on a result of checking the schedule to determine if any ready task has a higher priority than the lowest priority executing task, at 1308. For example, the subsequent interrupt, as shown by the arrow 800 of FIG. 8, may be selectively raised or initiated based on the result of checking the schedule 222 and determining that the priority 4 ready task has higher priority than the priority 6 executing task, the lowest priority executing task. [0061] FIG. 14 is a block diagram of a particular embodiment of a system 1400 including a real-time multithreaded scheduler module 1464. The system 1400 may be implemented in a portable electronic device and includes a processor 1410, such as a digital signal processor (DSP), coupled to a memory 1432. In an illustrative example, the real-time multithreaded scheduler module 1464 includes any of the systems of FIGS. 1-9, operates in accordance with any of the embodiments of FIGS. 10-13, or any combination thereof. The real-time multithreaded scheduler module 1464 may be in the processor 1410 or may be a separate device or circuitry (not shown). In a particular embodiment, the real-time priority scheduler 158 of FIG. 1 is accessible to a digital signal processor. For example, as shown in FIG. 14, the real-time multithreaded scheduler module 1464 is accessible to the digital signal processor (DSP) 1410. [0062] A camera interface 1468 is coupled to the processor 1410 and also coupled to a camera, such as a video camera 1470. A display controller 1426 is coupled to the processor 1410 and to a display device 1428. A coder/decoder (CODEC) 1434 can also be coupled to the signal processor 1410. A speaker 1436 and a microphone 1438 can be coupled to the CODEC 1434. A wireless interface 1440 can be coupled to the processor 1410 and to a wireless antenna 1442. [0063] The real-time multithreaded scheduler module 1464 is configured to execute computer executable instructions 1466 stored at a computer-readable medium, such as the memory 1432, to cause the real-time multithreaded scheduler module 1464 to direct an interrupt to a lowest priority thread of a plurality of executing threads, where the interrupt indicates at least one task is ready to be executed, where the lowest priority thread of the plurality of executing threads is either executing a lowest priority task or is an idle thread, and where the lowest priority thread of the plurality of executing threads receives the interrupt and initiates an interrupt routine to select a highest priority task from a ready task list. In this manner, the real-time multithreaded scheduler module 1464 can ensure that high priority tasks such as modem tasks are not interrupted by lower priority tasks such as user interface tasks. 
[0064] In a particular embodiment, the processor 1410, the display controller 1426, the memory 1432, the CODEC 1434, the wireless interface 1440, and the camera interface 1468 are included in a system-in-package or system-on-chip device 1422. In a particular embodiment, an input device 1430 and a power supply 1444 are coupled to the system-on-chip device 1422. Moreover, in a particular embodiment, as illustrated in FIG. 14, the display device 1428, the input device 1430, the speaker 1436, the microphone 1438, the wireless antenna 1442, the video camera 1470, and the power supply 1444 are external to the system-on-chip device 1422. However, each of the display device 1428, the input device 1430, the speaker 1436, the microphone 1438, the wireless antenna 1442, the video camera 1470, and the power supply 1444 can be coupled to a component of the system-on-chip device 1422, such as an interface or a controller. [0065] Those of skill would further appreciate that the various illustrative logical blocks, configurations, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, configurations, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure. [0066] The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, hard disk, a removable disk, a compact disk read-only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). The ASIC may reside in a computing device or a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a computing device or user terminal. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the disclosed embodiments. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. 
Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope possible consistent with the principles and novel features as defined by the following claims. |
A thermal protection circuit for high output power supplies. A power supply circuit includes a switching control circuit coupled to a switching regulator circuit. The switching control circuit is configured to generate a plurality of switching control signals for controlling the switching regulator circuit. The power supply circuit also includes a temperature sensitive circuit which includes a thermistor. The temperature sensitive circuit is configured to provide a variable voltage level output to the switching control circuit. The switching control circuit is also configured to suspend operation of the switching regulator circuit upon detecting a predetermined voltage level at the output. |
What is claimed is: 1. A power supply circuit comprising:a switching regulator circuit including a transistor that is configured to constantly switch between a saturation mode and a non-conducting mode during operation; a switching control circuit coupled to said switching regulator circuit, wherein said switching control circuit is configured to generate a plurality of switching control signals for controlling switching of said switching regulator circuit; a temperature sensitive circuit including a thermistor and coupled to said switching control circuit, wherein said temperature sensitive circuit has an output whose voltage level varies as a result of a variance in a voltage drop across said thermistor; and wherein said switching control circuit is further configured to suspend operation of said switching regulator circuit upon detecting a predetermined voltage level at said output. 2. The power supply circuit as recited in claim 1, wherein said temperature sensitive circuit is a voltage divider circuit including said thermistor and a resistor, wherein said voltage divider circuit is configured to provide a variable voltage level to said switching control circuit, wherein said variable voltage level is developed at a node between said thermistor and said resistor.3. The power supply circuit as recited in claim 2, wherein said temperature sensitive circuit is further configured to detect an elevation in temperature corresponding to a temperature of said switching regulator circuit.4. The power supply circuit as recited in claim 3, wherein said temperature sensitive circuit is further configured to develop said predetermined voltage level in response to said thermistor decreasing said internal resistance value.5. The power supply circuit as recited in claim 2, wherein said thermistor is configured to decrease an internal resistance value in response to detecting said elevation in temperature.6. The power supply circuit as recited in claim 5, wherein said thermistor is connected to circuit ground at a first connection and is connected to said resistor at a second connection, and wherein said resistor is connected to a supply voltage at a third connection.7. The power supply circuit as recited in claim 1, wherein said switching control circuit further comprises an enable input, wherein said switching control circuit is further configured to detect said predetermined voltage level at said enable input.8. A computer system comprising:a microprocessor; a power supply circuit coupled to said microprocessor, wherein said power supply circuit comprises: a switching regulator circuit including a transistor that is configured to constantly switch between a saturation mode and a non-conducting mode during operation; a switching control circuit coupled to said switching regulator circuit, wherein said switching control circuit is configured to generate a plurality of switching control signals for controlling switching of said switching regulator circuit; a temperature sensitive circuit including a thermistor and coupled to said switching control circuit, wherein said temperature sensitive circuit has an output whose voltage level varies as a result of a variance in a voltage drop across said thermistor; and wherein said switching control circuit is further configured to suspend operation of said switching regulator circuit upon detecting a predetermined voltage level at said output. 9. 
The computer system as recited in claim 8, wherein said temperature sensitive circuit is a voltage divider circuit including said thermistor and a resistor, wherein said voltage divider circuit is configured to provide a variable voltage level to said switching control circuit, wherein said variable voltage level is developed at a node between said thermistor and said resistor.10. The computer system as recited in claim 9, wherein said temperature sensitive circuit is further configured to detect an elevation in temperature corresponding to a temperature of said switching regulator circuit.11. The computer system as recited in claim 10, wherein said temperature sensitive circuit is further configured to develop said predetermined voltage level in response to said thermistor decreasing said internal resistance value.12. The computer system as recited in claim 9, wherein said thermistor is configured to decrease an internal resistance value in response to detecting said elevation in temperature.13. The computer system as recited in claim 12, wherein said thermistor is connected to circuit ground at a first connection and is connected to said resistor at a second connection, and wherein said resistor is connected to a supply voltage at a third connection.14. The computer system as recited in claim 8, wherein said switching control circuit further comprises an enable input, wherein said switching control circuit is further configured to detect said predetermined voltage level at said enable input.15. A power supply comprising:a first switching regulator circuit; a second switching regulator circuit; a phase control circuit coupled to said first switching regulator circuit and to said second switching regulator circuit, wherein said phase control circuit is configured to generate a plurality of switching control signals for controlling switching of said first and second switching regulator circuits, wherein said phase control circuit is configured to selectively suspend operation of said second switching regulator in response to receiving a signal indicative of a low power mode of operation; a temperature sensitive circuit including a thermistor and coupled to said phase control circuit, wherein said temperature sensitive circuit has an output whose voltage level varies as a result of a variance in a voltage drop across said thermistor; and wherein said phase control circuit is further configured to suspend operation of said first and second switching regulator circuits upon detecting a predetermined voltage level at said output. 16. The power supply circuit as recited in claim 15, wherein said temperature sensitive circuit is a voltage divider circuit including said thermistor and a resistor, wherein said voltage divider circuit is configured to provide a variable voltage level to said phase control circuit, wherein said variable voltage level is developed at a node between said thermistor and said resistor.17. The power supply circuit as recited in claim 16, wherein said temperature sensitive circuit is further configured to detect an elevation in temperature corresponding to a temperature of said first and second switching regulator circuits.18. The power supply circuit as recited in claim 17, wherein said temperature sensitive circuit is further configured to develop said predetermined voltage level in response to said thermistor decreasing said internal resistance value.19. 
The power supply circuit as recited in claim 18, wherein said thermistor is configured to decrease an internal resistance value in response to detecting said elevation in temperature.20. The power supply circuit as recited in claim 19, wherein said thermistor is connected to circuit ground at a first connection and is connected to said resistor at a second connection, and wherein said resistor is connected to a supply voltage at a third connection.21. The power supply circuit as recited in claim 15, wherein said phase control circuit further comprises an enable input, wherein said phase control circuit is further configured to detect said predetermined voltage level at said enable input.22. A computer system comprising:a microprocessor; a power supply circuit coupled to said microprocessor, wherein said power supply circuit comprises: a first switching regulator circuit; a second switching regulator circuit; a phase control circuit coupled to said first switching regulator circuit and to said second switching regulator circuit, wherein said phase control circuit is configured to generate a plurality of switching control signals for controlling switching of said first and second switching regulator circuits, wherein said phase control circuit is configured to selectively suspend operation of said second switching regulator in response to receiving a signal indicative of a low power mode of operation; a temperature sensitive circuit including a thermistor and coupled to said phase control circuit, wherein said temperature sensitive circuit has an output whose voltage level varies as a result of a variance in a voltage drop across said thermistor; and wherein said phase control circuit is further configured to suspend operation of said first and second switching regulator circuits upon detecting a predetermined voltage level at said output. 23. The computer system as recited in claim 22, wherein said temperature sensitive circuit is a voltage divider circuit including said thermistor and a resistor, wherein said voltage divider circuit is configured to provide a variable voltage level to said phase control circuit, wherein said variable voltage level is developed at a node between said thermistor and said resistor.24. The computer system as recited in claim 23, wherein said temperature sensitive circuit is further configured to detect an elevation in temperature corresponding to a temperature of said first and second switching regulator circuits.25. The computer system as recited in claim 24, wherein said temperature sensitive circuit is further configured to develop said predetermined voltage level in response to said thermistor decreasing said internal resistance value.26. The computer system as recited in claim 25, wherein said thermistor is configured to decrease an internal resistance value in response to detecting said elevation in temperature.27. The computer system as recited in claim 26, wherein said thermistor is connected to circuit ground at a first connection and is connected to said resistor at a second connection, and wherein said resistor is connected to a supply voltage at a third connection.28. The computer system as recited in claim 22, wherein said phase control circuit further comprises an enable input, wherein said phase control circuit is further configured to detect said predetermined voltage level at said enable input.29. 
A power supply comprising:a first switching regulator circuit; a second switching regulator circuit; a third switching regulator circuit; a fourth switching regulator circuit; a phase control circuit coupled to said first switching regulator circuit, said second switching regulator circuit, said third switching regulator circuit and said fourth switching regulator circuit, wherein said phase control circuit is configured to generate a plurality of switching control signals for controlling switching of said first, second, third and fourth switching regulator circuits, wherein said phase control circuit is configured to selectively suspend operation of said second, third and fourth switching regulator circuits in response to receiving a signal indicative of a low power mode of operation; and wherein each of the first, second, third and fourth switching regulator circuits includes an inductor coupled to a first transistor and a second transistor, wherein the first transistor is coupled to pass current from a power source to said inductor when activated and wherein said second transistor is coupled to pass current from a ground node to said inductor when activated, and wherein said phase control circuit is configured to activate said first and second transistors out of phase with respect to each other; a temperature sensitive circuit including a thermistor and coupled to said phase control circuit, wherein said temperature sensitive circuit has an output whose voltage level varies as a result of a variance in a voltage drop across said thermistor; and wherein said phase control circuit is further configured to suspend operation of said first, said second, said third and said fourth switching regulator circuits upon detecting a predetermined voltage level at said output. 30. The power supply as recited in claim 29, wherein said phase control circuit selectively suspends operation of said second, said third and said fourth switching regulator circuit during said low power mode of operation by disabling at least one of said plurality of control signals to said second, said third and said fourth switching regulator circuit, respectively.31. The power supply as recited in claim 29, wherein each of said first, said second, said third and said fourth switching regulator circuits further comprises a capacitor coupled to receive current flowing through said inductor.32. 
A computer system comprising:a microprocessor; a power supply circuit coupled to said microprocessor, wherein said power supply circuit comprises: a first switching regulator circuit; a second switching regulator circuit; a third switching regulator circuit; a fourth switching regulator circuit; a phase control circuit coupled to said first switching regulator circuit, said second switching regulator circuit, said third switching regulator circuit and said fourth switching regulator circuit, wherein said phase control circuit is configured to generate a plurality of switching control signals for controlling switching of said first, second, third and fourth switching regulator circuits, wherein said phase control circuit is configured to selectively suspend operation of said second, third and fourth switching regulator circuits in response to receiving a signal indicative of a low power mode of operation; and wherein each of the first, second, third and fourth switching regulator circuits includes an inductor coupled to a first transistor and a second transistor, wherein the first transistor is coupled to pass current from a power source to said inductor when activated and wherein said second transistor is coupled to pass current from a ground node to said inductor when activated, and wherein said phase control circuit is configured to activate said first and second transistors out of phase with respect to each other; a temperature sensitive circuit including a thermistor and coupled to said phase control circuit, wherein said temperature sensitive circuit has an output whose voltage level varies as a result of a variance in a voltage drop across said thermistor; and wherein said phase control circuit is further configured to suspend operation of said first, said second, said third and said fourth switching regulator circuits upon detecting a predetermined voltage level at said output. 33. The computer system as recited in claim 32, wherein said phase control circuit selectively suspends operation of said second, said third and said fourth switching regulator circuits during said low power mode of operation by disabling at least one of said plurality of control signals to said second, said third and said fourth switching regulator circuits, respectively.34. The computer system as recited in claim 32, wherein each of said first, said second, said third and said fourth switching regulator circuits further comprises a capacitor coupled to receive current flowing through said inductor. |
BACKGROUND OF THE INVENTION1. Field of the InventionThis invention relates to power supplies and, more particularly, to the protection of microprocessor power supplies.2. Description of the Related ArtPower supplies are used in various types of devices. There are many specialized types of power supply circuits with various advantages and disadvantages. Microprocessors in computers may require a power supply circuit that regulates a high level of current while maintaining a high level of efficiency.One such type of specialized power supply circuit is a switching regulator. Switching regulator circuits typically provide a lower voltage output than the unregulated input while at the same time providing a higher current than the current drawn from the unregulated supply. This is accomplished using a transistor that is constantly switching between a saturation mode and a non-conducting mode. Typically, a transistor that is optimized for power applications, such as a power field effect transistor, is used. Because the transistor is either in saturation or not conducting, there is very low power dissipation. A switching regulator therefore can regulate a high amount of current at a high efficiency rate.Since these power supply circuits regulate a high level of current during normal operation, they may also generate a significant amount of heat while operating. Under normal operating conditions, the heat may not cause problems. However, under less than ideal conditions such as, for example, short circuits, improper power supply operation and unacceptable environmental conditions, the heat may become excessive. Excessive heat may cause damage to various computer system components including the motherboard, the microprocessor, or the power supply itself.The heat generated by the switching regulator may be controlled by methods such as directed airflow and the use of heat sinks. These methods may be effective in some cases, but in order to accommodate the worst case operating conditions, those methods may be expensive. Additionally, it may be impossible to anticipate the worst possible conditions.SUMMARY OF THE INVENTIONVarious embodiments of a power supply circuit including a thermal protection circuit are disclosed. In one embodiment, the power supply circuit includes a switching control circuit coupled to a switching regulator circuit. The switching control circuit is configured to generate a plurality of switching control signals for controlling the switching regulator circuit. The power supply circuit also includes a temperature sensitive circuit including a thermistor. The temperature sensitive circuit is configured to provide a variable voltage level output to the switching control circuit. The switching control circuit is also configured to suspend operation of the switching regulator circuit upon detecting a predetermined voltage level at the output.In another embodiment, the power supply circuit includes a phase control circuit coupled to a first switching regulator circuit and to a second switching regulator circuit. The phase control circuit is configured to generate a plurality of switching control signals for controlling switching of the first and second switching regulator circuits. The phase control circuit is also configured to selectively suspend operation of the second switching regulator circuit in response to receiving a signal indicative of a low power mode of operation. The power supply circuit also includes a temperature sensitive circuit which includes a thermistor. 
The temperature sensitive circuit is configured to provide a variable voltage level output to the phase control circuit. The phase control circuit is further configured to suspend operation of the first and second switching regulator circuits upon detecting a predetermined voltage level at the output.In various other embodiments, the thermistor is configured to detect an elevation in temperature of the first switching regulator circuit or the second switching regulator circuit and to change a resistance value internal to the thermistor. Furthermore, the thermistor is configured to decrease the internal resistance value in response to detecting the elevation in temperature. The temperature sensitive circuit develops the predetermined voltage level in response to the thermistor decreasing the internal resistance value.BRIEF DESCRIPTION OF THE DRAWINGSFIG. 1 is a block diagram of one embodiment of a switching power supply circuit.FIG. 2 is a block diagram of one embodiment of a multiphase switching power supply circuit.FIG. 3 is a diagram of one embodiment of a motherboard of a computer system including a power supply circuit.While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims.DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTSTurning now to FIG. 1, a block diagram of one embodiment of a switching power supply circuit is shown. The switching power supply circuit of FIG. 1 includes a switching control circuit 60 coupled to a switching regulator circuit 70. The output of switching regulator circuit 70 is Vout 80 and it may be used to power a microprocessor (not shown). Switching control circuit 60 is also coupled to a temperature sensitive circuit 10.In this embodiment, switching regulator circuit 70 may include one or more power transistors and various other components (not shown), which are used to regulate a supply voltage for use by a microprocessor or other device. It is noted that in other embodiments there may be more switching regulator circuits. Switching control circuit 60 is configured to generate control signals for switching regulator circuit 70. The control signals switch the transistors on and off.In one embodiment, switching control circuit 60 includes an enable input which, when activated by an active enable signal 50, allows normal operation of switching control circuit 60. However, when enable signal 50 is deactivated, switching control circuit 60 suspends operation of switching regulator circuit 70. In this embodiment, an active signal means a logic value of one, while a deactivated signal refers to a logic value of zero.A node 40 of temperature sensitive circuit 10 is connected to the enable input of switching control circuit 60. In one embodiment, temperature sensitive circuit 10 is a voltage divider circuit, which includes a thermistor 20 and a resistor 30. One lead of resistor 30 is connected to VCC, while the other lead of resistor 30 is connected to one lead of thermistor 20. The second lead of thermistor 20 is connected to circuit ground. 
The voltage divider circuit develops a voltage across resistor 30 and thermistor 20 proportional to the resistance of each component. Therefore, to calculate the voltage at node 40 of FIG. 1, the equation is as follows: Vnode = VCC × Rthermistor / (Rthermistor + Rresistor). However, since the resistance value of thermistor 20 varies with changes in temperature, the voltage developed at node 40 also varies with changes in temperature. In this particular embodiment, the resistance value of thermistor 20 decreases with increases in temperature. This type of thermistor is said to have a negative temperature coefficient. It is contemplated that other types of thermistors may be used, such as those having a positive temperature coefficient. In other embodiments, it is contemplated that other temperature sensitive circuits may be used such as, for example, active components such as transistors. Additionally, if the enable input of switching control circuit 60 were an active low input, the voltage divider may be reconfigured such that thermistor 20 is connected to VCC and resistor 30 is connected to ground.As described above, the voltage developed at node 40 is dependent upon the selected resistance value of resistor 30 and the ambient resistance value of thermistor 20. If the ambient temperature of thermistor 20 increases, the resulting decrease in the resistance value of thermistor 20 will cause a proportional decrease in the voltage at node 40. Conversely, a decrease in the ambient temperature of thermistor 20 will cause an increase in the voltage at node 40. The voltage at node 40 may vary between zero volts and the maximum voltage level capable of developing across thermistor 20 depending on the selected resistance value of resistor 30 and the range of resistance values that thermistor 20 can achieve. Therefore, to achieve a particular ambient temperature voltage level at node 40, proper resistance values must be calculated and chosen for resistor 30 and thermistor 20.As will be described in more detail below, thermistor 20 is located such that it may detect a rise in a temperature corresponding to the ambient operating temperature of switching regulator circuit 70. If the ambient temperature begins to increase, the resistance value of thermistor 20 will begin to decrease causing a proportional decrease of the voltage at node 40. If the voltage decreases below the threshold of the enable input circuitry of switching control circuit 60, then switching control circuit 60 will detect the enable signal 50 as inactive and switching control circuit 60 will disable operation of switching regulator circuit 70. This action will power down any devices that are powered by switching regulator circuit 70.If the ambient temperature begins to decrease, the resistance value of thermistor 20 will begin to increase causing a proportional increase of the voltage at node 40. If the voltage increases above the threshold of the enable input circuitry of switching control circuit 60, then switching control circuit 60 will detect the enable signal 50 as active and switching control circuit 60 will enable operation of switching regulator circuit 70. This action will power up any devices that are powered by switching regulator circuit 70.
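By way of illustration only, the divider behavior described above can be modeled in a few lines of Python. The supply voltage, resistor value, enable threshold, and helper names below are assumptions chosen for this sketch rather than values taken from the embodiment:

VCC = 5.0                # supply voltage, in volts (assumed value)
R_RESISTOR = 10_000.0    # fixed resistor 30, in ohms (assumed value)
ENABLE_THRESHOLD = 2.0   # enable-input logic threshold, in volts (assumed value)

def node_voltage(r_thermistor):
    # Vnode = VCC * Rthermistor / (Rthermistor + Rresistor)
    return VCC * r_thermistor / (r_thermistor + R_RESISTOR)

def regulator_enabled(r_thermistor):
    # The enable signal reads as active while the divider voltage at node 40
    # stays at or above the input threshold of switching control circuit 60.
    return node_voltage(r_thermistor) >= ENABLE_THRESHOLD

# A negative temperature coefficient thermistor loses resistance as it heats,
# so the node voltage falls with rising temperature and the regulator is
# disabled once the voltage drops below the threshold.
for r_thermistor in (25_000.0, 10_000.0, 5_000.0):
    print(r_thermistor, round(node_voltage(r_thermistor), 2), regulator_enabled(r_thermistor))

Running the loop shows the enable decision flipping from active to inactive as the thermistor resistance falls, which is the shutdown behavior the embodiment achieves with resistor 30 and thermistor 20.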
Referring to FIG. 2, a block diagram of one embodiment of a multiphase switching power supply circuit is illustrated. Components that are identical to those shown in FIG. 1 are numbered identically for simplicity and clarity. The multiphase switching power supply circuit of FIG. 2 includes four switching regulator circuits 110A-D coupled to a phase control circuit 150. The output of each of switching regulator circuits 110A-D is coupled together at node Vout 170. Phase control circuit 150 is also coupled to a temperature sensitive circuit 10.In this particular embodiment, power supply circuit 100 comprises synchronous switching regulator circuits designated as 110A, 110B, 110C and 110D. Synchronous switching regulator circuits 110A-D may, individually or collectively, be referred to as switching regulator circuit 110 or switching regulator circuits 110, respectively. Switching regulator circuits 110 are coupled to provide power to microprocessor 160 at Vout 170. It is important to note that different embodiments may comprise more or fewer than four switching regulator circuits.In the illustrated embodiment, each switching regulator 110 includes a pair of transistors (e.g., transistors 101 and 102, transistors 111 and 112, etc.) coupled between a power supply terminal VCC and ground. Each switching regulator 110 further includes a diode (e.g., diodes 103, 113, etc.), an inductor (e.g., inductors 104, 114, etc.) and a capacitor (e.g., capacitors 105, 115, etc.). It is noted that other specific circuit arrangements may be employed to implement each switching regulator 110.Phase control circuit 150 is configured to generate a plurality of control signals for controlling the states of the transistors in switching regulators 110 such that the switching regulators 110 operate out of phase with respect to one another. In a particular embodiment, phase control circuit 150 may include a Semtech SC1144 integrated circuit. As will be described in further detail below, phase control circuit 150 also includes circuitry to selectively suspend operation of a subset of switching regulators 110 during a low power mode of operation to thereby allow for improved efficiency. Phase control circuit 150 also includes further circuitry to suspend operation of all of switching regulator circuits 110.Phase control circuit 150 activates (i.e. turns on) transistors 101, 111, 121 and 131, respectively, during different phases of operation. During a first phase of operation ("phase 1"), transistor 101 is turned on while transistors 111, 121 and 131 are turned off. Since each switching regulator 110 is embodied as a synchronous regulator, when transistor 101 is turned on, transistor 102 is turned off (in response to a corresponding control signal from phase control circuit 150). Thus, during phase 1, current flows from VCC through transistor 101 and inductor 104 to charge capacitor 105. Also during phase 1, transistors 111, 121 and 131 are turned off, and transistors 112, 122 and 132 are turned on.During the next phase of operation ("phase 2"), phase control circuit 150 turns off transistor 101 and turns on transistor 102. When transistor 102 is turned on and transistor 101 is turned off, current may continue to temporarily flow through inductor 104 to charge capacitor 105 since current flow through inductor 104 cannot change instantaneously. Transistor 102 provides a return path for this current.Also during phase 2, transistor 111 of switching regulator 110B is turned on and transistor 112 is turned off. Consequently, similar to the previous discussion, capacitor 115 is charged by current flow from VCC through transistor 111. 
Subsequent operations of switching regulators 110C and 110D during phases 3 and 4 are similar.Phase control circuit 150 may be further configured to monitor the output voltage, Vout, at node 170 via a feedback control signal and to adjust the duty cycle of transistors 101, 111, 121 and 131 accordingly to maintain a constant voltage level.As stated previously, microprocessor 160 is configured to operate in a low power mode of operation. During such operation, microprocessor 160 requires less current. The low power mode of operation may be controlled by, for example, a power management unit (not shown), which detects certain system inactivity, as desired. Phase control circuit 150 is configured to selectively suspend operation of a subset of switching regulators 110 (e.g. switching regulators 110B, 110C and 110D) upon assertion of a low power mode control signal which indicates that microprocessor 160 is currently operating in a low power mode. The low power mode control signal may be received from the power management unit. In this embodiment, phase control circuit 150 suspends operation of switching regulator circuits 110B, 110C and 110D during the low power mode by removing (or otherwise driving or disabling) the control signals provided to the associated switching transistors 111, 112, 121, 122, 131 and 132 such that the transistors are held in an off state. During this mode, switching regulator 110A operates in its normal manner as described previously.In one embodiment, phase control circuit 150 includes an enable input which, when activated by an active enable signal 50, allows normal operation of phase control circuit 150. However, when enable signal 50 is deactivated, phase control circuit 150 suspends operation of all switching regulator circuits 110. In this particular embodiment, an active signal means a logic value of one, which corresponds to a voltage level of two volts or greater. A deactivated signal refers to a logic value of zero, which corresponds to a voltage level of less than 0.8 volts. It is noted that depending on the integrated circuit used, these voltage levels may be different. It is contemplated and intended that a variety of integrated circuits may be used and therefore a range of voltage levels may be used to satisfy the input voltage specifications on a particular integrated circuit.The output of temperature sensitive circuit 10 is connected to the enable input of phase control circuit 150. As described above in the description of FIG. 1, temperature sensitive circuit 10 may be a voltage divider circuit, which includes a thermistor 20 and a resistor 30. The voltage developed at node 40 is dependent upon the selected resistance value of resistor 30 and the ambient resistance value of thermistor 20. If the ambient temperature of thermistor 20 increases, the resulting decrease in the resistance value of thermistor 20 will cause a proportional decrease in the voltage at node 40. Conversely, a decrease in the ambient temperature of thermistor 20 will cause an increase in the voltage at node 40. Therefore, depending on the selected resistance value of resistor 30 and the range of resistance values that thermistor 20 can achieve, the voltage at node 40 may vary between zero volts and the maximum voltage level capable of developing across thermistor 20. To achieve a particular ambient temperature voltage level at node 40, proper resistance values must be calculated and chosen for resistor 30 and thermistor 20. 
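As an informal sketch only (not part of the disclosed embodiment), the phase sequencing and low power suspension described above can be modeled in Python. The function name, the four-phase indexing, and the assumption that regulator 110A simply toggles by itself in low power mode are all illustrative choices:

def control_signals(phase, low_power):
    # Returns one (high_side_on, low_side_on) pair per regulator 110A-110D.
    # In normal operation exactly one high-side transistor (101, 111, 121 or
    # 131) conducts per phase, and each synchronous partner (102, 112, 122 or
    # 132) is driven complementarily so a pair is never on simultaneously.
    signals = []
    for regulator in range(4):
        if low_power and regulator != 0:
            signals.append((False, False))          # both transistors held off
        elif low_power:
            high_on = (phase % 2) == 0              # 110A keeps switching alone (assumed)
            signals.append((high_on, not high_on))
        else:
            high_on = (phase % 4) == regulator      # one regulator per phase
            signals.append((high_on, not high_on))
    return signals

for phase in range(4):
    print("phase", phase + 1, control_signals(phase, low_power=False))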
As described in detail above, the voltage developed at node 40 of FIG. 2 is dependent on the resistance values chosen for resistor 30 and thermistor 20. Hence, in this embodiment, resistance values are chosen such that at ambient operating temperature, the voltage at node 40 is above two volts, thus enabling phase control circuit 150 to provide switching control signals to switching regulator circuits 110.As will be described in more detail below, thermistor 20 is located such that it may detect a rise in a temperature corresponding to the ambient operating temperature of switching regulator circuits 110. If the ambient operating temperature begins to increase, the resistance value of thermistor 20 will begin to decrease causing a proportional decrease of the voltage at node 40. If the voltage decreases below the threshold of the enable input circuitry of phase control circuit 150, then phase control circuit 150 will detect the enable signal as inactive and phase control circuit 150 will disable operation of switching regulator circuits 110. This action will power down microprocessor 160. Disabling the switching regulator circuits 110 and microprocessor 160 may advantageously reduce heat related damage to some computer system components.If the ambient temperature begins to decrease, the resistance value of thermistor 20 will begin to increase causing a proportional increase of the voltage at node 40. If the voltage increases above the threshold of the enable input circuitry of phase control circuit 150, then phase control circuit 150 will detect the enable signal as active and phase control circuit 150 will enable operation of switching regulator circuits 110. This action will power up microprocessor 160.Referring to FIG. 3, a diagram of one embodiment of a motherboard of a computer system including a power supply circuit is shown. Components that are identical to those shown in FIG. 1 and FIG. 2 are numbered identically for simplicity and clarity. A motherboard 300 includes a power supply circuit 100 and a microprocessor 160. Power supply circuit 100 includes phase control circuit 150, switching regulator circuits 110A, 110B, 110C and 110D and a thermistor 20.In this embodiment, thermistor 20 is located in close proximity to switching regulator circuits 110A-D. The close proximity allows thermistor 20 to detect a temperature corresponding to the operating temperature of switching regulator circuits 110A-D. It is noted that the location of thermistor 20 shown in FIG. 3 is an example only. It is contemplated that thermistor 20 may be located in other locations which may still allow detection of a temperature corresponding to the operating temperature of switching regulator circuits 110A-D.Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications. |
In one form, a memory controller includes a command queue, an arbiter, a refresh logic circuit, and a final arbiter. The command queue receives and stores memory access requests for a memory. The arbiter selectively picks accesses from the command queue according to a first type of accesses and a second type of accesses. The first type of accesses and the second type of accesses correspond to different page statuses of corresponding memory accesses in the memory. The refresh logic circuit generates a refresh command to a bank of the memory and provides a priority indicator with the refresh command whose value is set according to a number of pending refreshes. The final arbiter selectively orders the refresh command with respect to memory access requests of the first type accesses and the second type accesses based on the priority indicator. |
WHAT IS CLAIMED IS:1. A memory controller (500), comprising:a command queue (520) for receiving and storing memory access requests for a memory (120);an arbiter (538) for selectively picking accesses from the command queue (520) according to a first type of accesses (610) and a second type of accesses, wherein the first type of accesses (620/630) and the second type of accesses correspond to different page statuses of corresponding memory accesses in the memory (120);a refresh logic circuit (532) for generating a refresh command to a bank of the memory (134), and providing a priority indicator with the refresh command whose value is set according to a number of pending refreshes; and a final arbiter (650) for selectively ordering the refresh command with respect to memory access requests of the first type of accesses (610) and the second type of accesses (620/630) based on the priority indicator.2. The memory controller (500) of claim 1, wherein the refresh logic circuit:assigns the priority indicator one of a first priority status and a second priority status; and the final arbiter elevates the refresh command between the first type of accesses (610) and the second type of accesses (620/630) in response to the first priority status.3. The memory controller (500) of claim 2, wherein the final arbiter (650) further elevates the refresh command above the first type of accesses (610) and the second type of accesses (620/630) in response to the second priority status.4. The memory controller (500) of claim 2, wherein the refresh logic circuit (532) comprises:a refresh counter (720) that counts a number of pending per bank refreshes; and a comparator (750) coupled to the refresh counter (720) that provides the first priority status to the refresh command if the refresh counter (720) exceeds a predetermined threshold.5. The memory controller (500) of claim 4, wherein, based on a periodic period of time (534), the refresh logic circuit (532) further elevates the priority indicator for a pending refresh command when the refresh counter (720) is between a lower threshold and an upper threshold.6. The memory controller (500) of claim 5, wherein the periodic period of time is a derivative of a predetermined refresh interval and a total number of banks (120) assigned to the memory controller.7. The memory controller (500) of claim 1, wherein the refresh logic circuit (532) further assigns the priority indicator based on a programmable counter (730), and the programmable counter (730) tracks a number of pending refresh commands.8. The memory controller (500) of claim 7, wherein the refresh logic circuit (532) further elevates the priority indicator for a pending refresh command when the programmable counter (730) is above an urgent refresh count threshold.9. The memory controller (500) of claim 1, wherein the first type of accesses (620/630) is not a page hit and the second type of accesses is a page hit (610).10. The memory controller (500) of claim 1, wherein the arbiter (538) comprises a plurality of sub-arbiters (612/622/632) for selectively picking accesses based on sub-arbitrations, wherein one sub-arbitration is a page hit (610) and each other sub-arbitration is not a page hit.11. 
The memory controller (500) of claim 1, wherein in response to simultaneously receiving a priority indicator for more than one bank, the final arbiter relegates the priority indicator of the bank of the memory (134) that is a most recent recipient of the refresh command below the bank of the memory that is a least recent recipient of the refresh command.12. The memory controller (500) of claim 1, wherein the memory controller is adapted to interface to synchronous graphics random access memory capable of supporting per two-bank refresh.13. The memory controller (500) of claim 12, wherein the refresh logic circuit (532) further elevates a priority indicator (700) for a pending refresh command for a paired bank when the priority indicator is a first priority status and a refresh timer (730) is above a refresh timing interval.14. The memory controller of claim 13, wherein the refresh logic circuit (532) further elevates the priority indicator for a pending refresh command to a second priority status for the paired bank when a programmable counter (730) is above an urgent refresh count threshold and both banks of the paired bank have a first type access (700).15. A data processing system (100), comprising:a memory accessing agent (210/220) for providing memory access requests for a memory;a memory system (120) coupled to the memory accessing agent; and a memory controller (500) coupled to the memory system and the memory accessing agent (210/220), the memory controller comprising:a command queue (520) for storing memory access commands received from the memory accessing agent (210/220);an arbiter (538) for selectively picking memory accesses from the command queue (520) according to a first type of access (620/630) and a second type of access (610), wherein each type of access corresponds to a different page status of a bank (134) in the memory (132); and a final arbiter (650) that arbitrates based on input received from a refresh logic circuit (532) that generates a refresh command to the bank (134) of the memory (132) and provides a priority indicator to the refresh command, whose value is set according to a number of pending refreshes, to selectively order the refresh command with respect to a first type of access and a second type of access.16. The data processing system (100) of claim 15, wherein the memory controller (500):assigns the priority indicator one of a first priority status and a second priority status (700); and elevates the refresh command between the first type of access and the second type of access in response to the first priority status (700).17. The data processing system (100) of claim 16, wherein:the memory controller (500) further assigns the priority indicator the first priority status based, in part, on a clock;the clock is for tracking a refresh interval; and the memory controller (500) determines an intermediate refresh time interval (705) based on a refresh time interval and a total number of banks assigned to the memory controller.18. The data processing system (100) of claim 17, wherein the intermediate refresh time interval is a period of time that is less than the refresh time interval.19. The data processing system (100) of claim 17, wherein in response to the intermediate refresh time interval, the memory controller (500) generates the refresh command to the bank at a higher frequency than the refresh interval requires.20. 
The data processing system (100) of claim 15, wherein the memory controller (500) further:assigns the priority indicator based on a predetermined threshold of a refresh counter (720), wherein the refresh counter (720) counts a number of pending per bank refreshes in the memory; and elevates the priority indicator for a pending refresh command, based on a periodic time cycle, when the refresh counter (720) is between a lower threshold and an upper threshold.21. The data processing system (100) of claim 15, wherein the memory controller (500) further assigns the priority indicator a second priority status based on a programmable counter (730).22. The data processing system (100) of claim 21, wherein the memory controller (500) elevates the priority indicator of the refresh command above the first type of access and the second type of access in response to the second priority status.23. The data processing system (100) of claim 21, wherein the memory controller blocks a corresponding bank of memory from opening in response to assertion of the second priority status.24. The data processing system of claim 15, wherein the arbiter comprises a plurality of sub-arbiters, and the plurality of sub-arbiters are for selectively picking accesses based on sub-arbitrations, wherein one sub-arbitration is a page hit and each other sub-arbitration is not a page hit.25. The data processing system (100) of claim 15, wherein the memory accessing agent (210/220) comprises: a central processing unit core (210);a graphics processing unit core (220); and a data fabric (250) for interconnecting the central processing unit core and the graphics processing unit core to the memory controller (500).26. The data processing system (100) of claim 15, wherein the memory (120) is a high bandwidth memory.27. A method for managing refresh of a memory in a memory system via a memory controller (500), the method comprising:receiving a plurality of memory access requests;storing the plurality of memory access requests in a command queue (520); and selectively picking memory access requests from the command queue (520) according to a first type of accesses and a second type of accesses that correspond to different page statuses of corresponding memory accesses in the memory (120);generating a refresh command to a bank (134) of the memory (120), and providing a priority indicator with the refresh command (700); and selectively ordering the refresh command (532/650) with respect to memory access requests of the first type of accesses and the second type of accesses based on the priority indicator.28. The method of claim 27, wherein providing the priority indicator to the refresh command (532) further comprises:assigning the priority indicator one of a first priority status and a second priority status (700); elevating the refresh command between the first type of accesses and the second type of accesses in response to the first priority status (700); and elevating the refresh command above the first type of accesses and the second type of accesses in response to the second priority status (700).29. The method of claim 27, further comprising assigning the priority indicator based on a predetermined threshold of a refresh counter (720), wherein the refresh counter (720) counts a number of pending per bank refreshes (720) in the memory.30. 
The method of claim 29, further comprising elevating the priority indicator for a pending refresh command, based on a periodic period of time, when the refresh counter (720) is between a lower threshold and an upper threshold.31. The method of claim 30, wherein the periodic period of time is a derivative of a predetermined refresh interval and a total number of banks assigned to the memory controller (500).32. The method of claim 27, further comprising assigning the priority indicator based on a programmable counter (730), and the programmable counter (730) tracks a number of refresh commands that are scheduled and incomplete (700).33. The method of claim 32, further comprising elevating the priority indicator for a pending refresh command when the programmable counter (730) is above an urgent refresh count threshold (700).34. The method of claim 27, wherein the first type of accesses is not a page hit (620/630) and the second type of accesses is a page hit (610).35. The method of claim 27, further comprising selectively picking accesses based on sub-arbitrations (610/620/630), wherein an arbiter comprises a plurality of sub-arbiters (612/622/632) and one sub-arbitration is a page hit and each other sub-arbitration is not a page hit (612/622/632).36. The method of claim 27, wherein the refresh command is for a selected bank (134). |
A REFRESH SCHEME IN A MEMORY CONTROLLERBACKGROUND[0001] Computer systems typically use inexpensive and high density dynamic random-access memory (DRAM) chips for main memory. Most DRAM chips sold today are compatible with various double data rate (DDR) DRAM standards promulgated by the Joint Electron Devices Engineering Council (JEDEC). DRAM chips are not persistent memory devices. Therefore, periodic memory refresh is needed by the DRAM chips for data retention during normal operation of the computer system. Memory refresh is a background maintenance process required during operation of semiconductor DRAM. Each bit of memory data is stored as the presence or absence of an electric charge on small capacitors which form the DRAM chips. Charges on the capacitors leak away over time, and without a memory refresh, stored data will be lost. To prevent data loss, external circuitry sends commands to cause the memory to periodically read a row and rewrite the row, restoring the charges on the capacitors of the memory cells of the row to the original charge level. While refresh is occurring, the memory is not available for normal read and write operations.[0002] Attempts have been made to mitigate the effects of refresh operations on DRAM bandwidth. Known memory controllers adopt one of two processes for refreshing DRAM. In a first example, the memory controller waits until no other accesses to the memory are pending and then provides a refresh to the memory. These are called casual refreshes. In another example, when the memory controller has waited too long and the memory is in critical need of a refresh, the memory controller provides urgent refreshes. Each of the foregoing examples may result in memory transactions being stalled, consequently producing a penalty in memory performance.BRIEF DESCRIPTION OF THE DRAWINGS[0003] FIG. 1 illustrates in block diagram form a data processing system according to some embodiments;[0004] FIG. 2 illustrates in block diagram form an accelerated processing unit (APU) suitable for use in the data processing system of FIG. 1;[0005] FIG. 3 illustrates in block diagram form a memory controller and associated physical interface (PHY) suitable for use in the APU of FIG. 2 according to some embodiments;[0006] FIG. 4 illustrates in block diagram form another memory controller and associated PHY suitable for use in the APU of FIG. 2 according to some embodiments;[0007] FIG. 5 illustrates in block diagram form a memory controller according to some embodiments;[0008] FIG. 6 illustrates a block diagram of a portion of memory controller according to some embodiments; and[0009] FIG. 7 illustrates a block diagram of a refresh logic circuit that may be used for the refresh logic circuit of FIGs. 5 and 6 according to some embodiments.[0010] In the following description, the use of the same reference numerals in different drawings indicates similar or identical items. Unless otherwise noted, the word "coupled" and its associated verb forms include both direct connection and indirect electrical connection by means known in the art, and unless otherwise noted any description of direct connection implies alternate embodiments using suitable forms of indirect electrical connection as well.DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS[0011] As will be described below in one form, a memory controller includes a command queue, an arbiter, a refresh logic circuit, and a final arbiter. The command queue receives and stores memory access requests for a memory. 
The arbiter selectively picks accesses from the command queue according to a first type of accesses and a second type of accesses. The first type of accesses and the second type of accesses correspond to different page statuses of corresponding memory accesses in the memory. The refresh logic circuit generates a refresh command to a bank of the memory. The refresh logic circuit provides a priority indicator with the refresh command whose value is set according to a number of pending refreshes. The final arbiter selectively orders the refresh command with respect to memory access requests of the first type of accesses and the second type of accesses. The ordering is based on the priority indicator.[0012] In another form, a data processing system includes a memory accessing agent, a memory system, and a memory controller. The memory accessing agent provides memory access requests for a memory. The memory system is coupled to the memory accessing agent. The memory controller is coupled to the memory system and the memory accessing agent, and includes a command queue, an arbiter, and a final arbiter. The command queue stores memory access commands received from the memory accessing agent. The arbiter selectively picks memory accesses from the command queue according to a first type of access and a second type of access. Each type of access corresponds to a different page status of a bank in the memory. The final arbiter arbitrates based on input received from a refresh logic circuit. The refresh logic circuit generates a refresh command to the bank of the memory and provides a priority indicator to the refresh command. The value of the priority indicator is set according to a number of pending refreshes, to selectively order the refresh command with respect to a first type of access and a second type of access.[0013] In yet another form, a method is provided for managing refresh of a memory in a memory system via a memory controller. A plurality of memory access requests is received and stored in a command queue. The memory access requests are selectively picked from the command queue according to a first type of accesses and a second type of accesses. The first type of accesses and the second type of accesses correspond to different page statuses of corresponding memory accesses in the memory. A refresh command is generated to a bank of the memory. A priority indicator is provided with the refresh command. The refresh command is selectively ordered with respect to memory access requests of the first type of accesses and the second type of accesses based on the priority indicator.[0014] FIG. 1 illustrates in block diagram form a data processing system 100 according to some embodiments. Data processing system 100 includes generally a data processor 110 in the form of an accelerated processing unit (APU), a memory system 120, a peripheral component interconnect express (PCIe) system 150, a universal serial bus (USB) system 160, and a disk drive 170. Data processor 110 operates as the central processing unit (CPU) of data processing system 100 and provides various buses and interfaces useful in modern computer systems. These interfaces include two double data rate (DDRx) memory channels, a PCIe root complex for connection to a PCIe link, a USB controller for connection to a USB network, and an interface to a Serial Advanced Technology Attachment (SATA) mass storage device. [0015] Memory system 120 includes a memory channel 130 and a memory channel 140. 
Memory channel 130 includes a set of dual inline memory modules (DIMMs) connected to a DDRx bus 132, including representative DIMMs 134, 136, and 138 that in this example correspond to separate ranks. Likewise, memory channel 140 includes a set of DIMMs connected to a DDRx bus 142, including representative DIMMs 144, 146, and 148.[0016] PCIe system 150 includes a PCIe switch 152 connected to the PCIe root complex in data processor 110, a PCIe device 154, a PCIe device 156, and a PCIe device 158. PCIe device 156 in turn is connected to a system basic input/output system (BIOS) memory 157. System BIOS memory 157 can be any of a variety of nonvolatile memory types, such as read-only memory (ROM), flash electrically erasable programmable ROM (EEPROM), and the like.[0017] USB system 160 includes a USB hub 162 connected to a USB master in data processor 110, and representative USB devices 164, 166, and 168 each connected to USB hub 162. USB devices 164, 166, and 168 could be devices such as a keyboard, a mouse, a flash EEPROM port, and the like.[0018] Disk drive 170 is connected to data processor 110 over a SATA bus and provides mass storage for the operating system, application programs, application files, and the like.[0019] Data processing system 100 is suitable for use in modern computing applications by providing a memory channel 130 and a memory channel 140. Each of memory channels 130 and 140 can connect to state-of-the-art DDR memories such as DDR version four (DDR4), low power DDR4 (LPDDR4), graphics DDR version five (GDDR5), and high bandwidth memory (HBM), and can be adapted for future memory technologies. These memories provide high bus bandwidth and high speed operation. At the same time, they also provide low power modes to save power for battery-powered applications such as laptop computers, and also provide built-in thermal monitoring.[0020] FIG. 2 illustrates in block diagram form an APU 200 suitable for use in data processing system 100 of FIG. 1. APU 200 includes generally a central processing unit (CPU) core complex 210, a graphics core 220, a set of display engines 230, a memory management hub 240, a data fabric 250, a set of peripheral controllers 260, a set of peripheral bus controllers 270, a system management unit (SMU) 280, and a set of memory controllers 290.[0021] CPU core complex 210 includes a CPU core 212 and a CPU core 214. In this example, CPU core complex 210 includes two CPU cores, but in other embodiments CPU core complex 210 can include an arbitrary number of CPU cores. Each of CPU cores 212 and 214 is bidirectionally connected to a system management network (SMN), which forms a control fabric, and to data fabric 250, and is capable of providing memory access requests to data fabric 250. Each of CPU cores 212 and 214 may be unitary cores, or may further be a core complex with two or more unitary cores sharing certain resources such as caches.[0022] Graphics core 220 is a high performance graphics processing unit (GPU) capable of performing graphics operations such as vertex processing, fragment processing, shading, texture blending, and the like in a highly integrated and parallel fashion. Graphics core 220 is bidirectionally connected to the SMN and to data fabric 250, and is capable of providing memory access requests to data fabric 250. 
In this regard, APU 200 may either support a unified memory architecture in which CPU core complex 210 and graphics core 220 share the same memory space, or a memory architecture in which CPU core complex 210 and graphics core 220 share a portion of the memory space, while graphics core 220 also uses a private graphics memory not accessible by CPU core complex 210.[0023] Display engines 230 render and rasterize objects generated by graphics core 220 for display on a monitor. Graphics core 220 and display engines 230 are bidirectionally connected to a common memory management hub 240 for uniform translation into appropriate addresses in memory system 120, and memory management hub 240 is bidirectionally connected to data fabric 250 for generating such memory accesses and receiving read data returned from the memory system.[0024] Data fabric 250 includes a crossbar switch for routing memory access requests and memory responses between any memory accessing agent and memory controllers 290. It also includes a system memory map, defined by BIOS, for determining destinations of memory accesses based on the system configuration, as well as buffers for each virtual connection.[0025] Peripheral controllers 260 include a USB controller 262 and a SATA interface controller 264, each of which is bidirectionally connected to a system hub 266 and to the SMN bus. These two controllers are merely exemplary of peripheral controllers that may be used in APU 200.[0026] Peripheral bus controllers 270 include a system controller or "Southbridge" (SB) 272 and a PCIe controller 274, each of which is bidirectionally connected to an input/output (I/O) hub 276 and to the SMN bus. I/O hub 276 is also bidirectionally connected to system hub 266 and to data fabric 250. Thus, for example, a CPU core can program registers in USB controller 262, SATA interface controller 264, SB 272, or PCIe controller 274 through accesses that data fabric 250 routes through I/O hub 276.[0027] SMU 280 is a local controller that controls the operation of the resources on APU 200 and synchronizes communication among them. SMU 280 manages power-up sequencing of the various processors on APU 200 and controls multiple off-chip devices via reset, enable and other signals. SMU 280 includes one or more clock sources not shown in FIG. 2, such as a phase locked loop (PLL), to provide clock signals for each of the components of APU 200. SMU 280 also manages power for the various processors and other functional blocks, and may receive measured power consumption values from CPU cores 212 and 214 and graphics core 220 to determine appropriate power states.[0028] APU 200 also implements various system monitoring and power saving functions. In particular, one system monitoring function is thermal monitoring. For example, if APU 200 becomes hot, then SMU 280 can reduce the frequency and voltage of CPU cores 212 and 214 and/or graphics core 220. If APU 200 becomes too hot, then it can be shut down entirely. Thermal events can also be received from external sensors by SMU 280 via the SMN bus, and SMU 280 can reduce the clock frequency and/or power supply voltage in response.[0029] FIG. 3 illustrates in block diagram form a memory controller 300 and an associated physical interface (PHY) 330 suitable for use in APU 200 of FIG. 2 according to some embodiments. Memory controller 300 includes a memory channel 310 and a power engine 320. Memory channel 310 includes a host interface 312, a memory channel controller 314, and a physical interface 316. 
Host interface 312 bidirectionally connects memory channel controller 314 to data fabric 250 over a scalable data port (SDP). Physical interface 316 bidirectionally connects memory channel controller 314 to PHY 330 over a bus that conforms to the DDR-PHY Interface Specification (DFI). Power engine 320 is bidirectionally connected to SMU 280 over the SMN bus, to PHY 330 over the Advanced Peripheral Bus (APB), and is also bidirectionally connected to memory channel controller 314. PHY 330 has a bidirectional connection to a memory channel such as memory channel 130 or memory channel 140 of FIG. 1. Memory controller 300 is an instantiation of a memory controller for a single memory channel using a single memory channel controller 314, and has a power engine 320 to control operation of memory channel controller 314 in a manner that will be described further below.[0030] FIG. 4 illustrates in block diagram form another memory controller 400 and associated PHYs 440 and 450 suitable for use in APU 200 of FIG. 2 according to some embodiments. Memory controller 400 includes memory channels 410 and 420 and a power engine 430. Memory channel 410 includes a host interface 412, a memory channel controller 414, and a physical interface 416. Host interface 412 bidirectionally connects memory channel controller 414 to data fabric 250 over an SDP. Physical interface 416 bidirectionally connects memory channel controller 414 to PHY 440, and conforms to the DFI Specification. Memory channel 420 includes a host interface 422, a memory channel controller 424, and a physical interface 426. Host interface 422 bidirectionally connects memory channel controller 424 to data fabric 250 over another SDP. Physical interface 426 bidirectionally connects memory channel controller 424 to PHY 450, and conforms to the DFI Specification. Power engine 430 is bidirectionally connected to SMU 280 over the SMN bus, to PHYs 440 and 450 over the APB, and is also bidirectionally connected to memory channel controllers 414 and 424. PHY 440 has a bidirectional connection to a memory channel such as memory channel 130 of FIG. 1. PHY 450 has a bidirectional connection to a memory channel such as memory channel 140 of FIG. 1. Memory controller 400 is an instantiation of a memory controller having two memory channel controllers and uses a shared power engine 430 to control operation of both memory channel controller 414 and memory channel controller 424 in a manner that will be described further below.[0031] FIG. 5 illustrates in block diagram form a memory controller 500 according to some embodiments. Memory controller 500 includes a memory channel controller 510 and a power controller 550. Memory channel controller 510 includes an interface 512, a queue 514, a command queue 520, an address generator 522, a content addressable memory (CAM) 524, a replay queue 530, a refresh logic circuit block 532, a timing block 534, a page table 536, an arbiter 538, an error correction code (ECC) check block 542, an ECC generation block 544, and a data buffer (DB) 546.[0032] Interface 512 has a first bidirectional connection to data fabric 250 over an external bus, and has an output. In memory controller 500, this external bus is compatible with the advanced extensible interface version four specified by ARM Holdings, PLC of Cambridge, England, known as "AXI4", but can be other types of interfaces in other embodiments. 
Interface 512 translates memory access requests from a first clock domain known as the FCLK (or MEMCLK) domain to a second clock domain internal to memory controller 500 known as the UCLK domain. Similarly, queue 514 provides memory accesses from the UCLK domain to the DFICLK domain associated with the DFI interface.[0033] Address generator 522 decodes addresses of memory access requests received from data fabric 250 over the AXI4 bus. The memory access requests include access addresses in the physical address space represented as a normalized address. Address generator 522 converts the normalized addresses into a format that can be used to address the actual memory devices in memory system 120, as well as to efficiently schedule related accesses. This format includes a region identifier that associates the memory access request with a particular rank, a row address, a column address, a bank address, and a bank group. On startup, the system BIOS queries the memory devices in memory system 120 to determine their size and configuration, and programs a set of configuration registers associated with address generator 522. Address generator 522 uses the configuration stored in the configuration registers to translate the normalized addresses into the appropriate format. Command queue 520 is a queue of memory access requests received from the memory accessing agents in data processing system 100, such as CPU cores 212 and 214 and graphics core 220. Command queue 520 stores the address fields decoded by address generator 522 as well as other address information that allows arbiter 538 to select memory accesses efficiently, including access type and quality of service (QoS) identifiers. CAM 524 includes information to enforce ordering rules, such as write after write (WAW) and read after write (RAW) ordering rules. 
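Purely as an illustrative sketch, a translation of the kind address generator 522 performs might look like the following Python function. The bit widths and positions are assumptions invented for the example; the actual mapping is programmed into the configuration registers and is not specified here:

def decode_normalized_address(addr):
    # Split a normalized physical address into DRAM coordinates.
    # Every field width and position below is assumed for illustration.
    column     = (addr >> 3)  & 0x3FF    # 10 column bits (assumed)
    bank       = (addr >> 13) & 0x3      # 2 bank bits (assumed)
    bank_group = (addr >> 15) & 0x3      # 2 bank group bits (assumed)
    row        = (addr >> 17) & 0xFFFF   # 16 row bits (assumed)
    rank       = (addr >> 33) & 0x1      # 1 rank bit (assumed)
    return {"rank": rank, "bank_group": bank_group, "bank": bank,
            "row": row, "column": column}

print(decode_normalized_address(0x123456788))

Decoding once at enqueue time and storing the fields in command queue 520 lets the arbiter compare the rank, bank, and row of pending requests without repeating the translation on every arbitration cycle.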
[0034] Replay queue 530 is a temporary queue for storing memory accesses picked by arbiter 538 that are awaiting responses, such as address and command parity responses, write cyclic redundancy check (CRC) responses for DDR4 DRAM or write and read CRC responses for GDDR5 DRAM. Replay queue 530 accesses ECC check block 542 to determine whether the returned ECC is correct or indicates an error. Replay queue 530 allows the accesses to be replayed in the case of a parity or CRC error of one of these cycles.[0035] Refresh logic circuit 532 includes state machines for various powerdown, refresh, and termination resistance (ZQ) calibration cycles that are generated separately from normal read and write memory access requests received from memory accessing agents. For example, if a memory rank is in precharge powerdown, it must be periodically awakened to run refresh cycles. Refresh logic circuit 532 generates auto-refresh commands periodically to prevent data errors caused by charge leaking off the storage capacitors of memory cells in DRAM chips. In addition, refresh logic circuit 532 periodically calibrates ZQ to prevent mismatch in on-die termination resistance due to thermal changes in the system. Refresh logic circuit 532 also decides when to put DRAM devices in different power down modes.[0036] Arbiter 538 is bidirectionally connected to command queue 520 and is the heart of memory channel controller 510. It improves efficiency by intelligent scheduling of accesses to improve the usage of the memory bus. Arbiter 538 uses timing block 534 to enforce proper timing relationships by determining whether certain accesses in command queue 520 are eligible for issuance based on DRAM timing parameters. For example, each DRAM has a minimum specified time between activate commands to the same bank, known as "tRC". Timing block 534 maintains a set of counters that determine eligibility based on this and other timing parameters specified in the JEDEC specification, and is bidirectionally connected to replay queue 530. Page table 536 maintains state information about active pages in each bank and rank of the memory channel for arbiter 538, and is bidirectionally connected to replay queue 530.[0037] In response to write memory access requests received from interface 512, ECC generation block 544 computes an ECC according to the write data. DB 546 stores the write data and ECC for received memory access requests. It outputs the combined write data/ECC to queue 514 when arbiter 538 picks the corresponding write access for dispatch to the memory channel.[0038] Power controller 550 includes an interface 552 to an advanced extensible interface, version one (AXI), an APB interface 554, and a power engine 560. Interface 552 has a first bidirectional connection to the SMN, which includes an input for receiving an event signal labeled "EVENT n" shown separately in FIG. 5, and an output. APB interface 554 has an input connected to the output of interface 552, and an output for connection to a PHY over an APB. Power engine 560 has an input connected to the output of interface 552, and an output connected to an input of queue 514. Power engine 560 includes a set of configuration registers 562, a microcontroller (µC) 564, a self refresh controller (SLFREF/PE) 566, and a reliable read/write training engine (RRW/TE) 568. Configuration registers 562 are programmed over the AXI bus, and store configuration information to control the operation of various blocks in memory controller 500. Accordingly, configuration registers 562 have outputs connected to these blocks that are not shown in detail in FIG. 5. Self refresh controller 566 is an engine that allows the manual generation of refreshes in addition to the automatic generation of refreshes by refresh logic circuit 532. Reliable read/write training engine 568 provides a continuous memory access stream to memory or I/O devices for such purposes as DDR interface read latency training and loopback testing.[0039] Memory channel controller 510 includes circuitry that allows it to pick memory accesses for dispatch to the associated memory channel. In order to make the desired arbitration decisions, address generator 522 decodes the address information into predecoded information including rank, row address, column address, bank address, and bank group in the memory system, and command queue 520 stores the predecoded information. Configuration registers 562 store configuration information to determine how address generator 522 decodes the received address information. Arbiter 538 uses the decoded address information, timing eligibility information indicated by timing block 534, and active page information indicated by page table 536 to efficiently schedule memory accesses while observing other criteria such as QoS requirements. For example, arbiter 538 implements a preference for accesses to open pages to avoid the overhead of precharge and activation commands required to change memory pages, and hides overhead accesses to one bank by interleaving them with read and write accesses to another bank. 
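The per-bank eligibility counters that timing block 534 maintains can be approximated with an array of binary down-counters. The Python sketch below models only the activate-to-activate (tRC) rule; the cycle count and the class shape are assumptions for illustration:

TRC_CYCLES = 16  # minimum activate-to-activate spacing in clock cycles (assumed)

class TimingBlockModel:
    def __init__(self, num_banks):
        # One generic binary down-counter per bank, per the counter array idea.
        self.trc_remaining = [0] * num_banks

    def tick(self):
        # Advance one clock cycle; every nonzero counter counts down.
        self.trc_remaining = [max(0, c - 1) for c in self.trc_remaining]

    def activate_eligible(self, bank):
        # An activate to this bank is timing-eligible once tRC has elapsed.
        return self.trc_remaining[bank] == 0

    def record_activate(self, bank):
        self.trc_remaining[bank] = TRC_CYCLES

timing = TimingBlockModel(num_banks=16)
timing.record_activate(3)
for _ in range(TRC_CYCLES):
    timing.tick()
print(timing.activate_eligible(3))  # True once tRC cycles have elapsed

Supporting other timing parameters or memory types then amounts to adding more counters rather than new logic, which is the scalability and reuse property attributed to the generic counter array.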
In particular, during normal operation, arbiter 538 may decide to keep pages open in different banks until they are required to be precharged prior to selecting a different page. [0040] FIG. 6 illustrates a block diagram of a portion 600 of memory controller 500 of FIG. 5 according to some embodiments. Portion 600 includes arbiter 538, refresh logic circuit 532, and a set of control circuits 660 associated with the operation of arbiter 538. Arbiter 538 includes a set of sub-arbiters 605 and a final arbiter 650. Sub-arbiters 605 include a sub-arbiter 610, a sub-arbiter 620, and a sub-arbiter 630. Sub-arbiter 610 includes a page hit arbiter 612 labeled "PH ARB", and an output register 614. Page hit arbiter 612 has a first input connected to command queue 520, a second input, and an output. Register 614 has a data input connected to the output of page hit arbiter 612, a clock input for receiving the UCLK signal, and an output. Sub-arbiter 620 includes a page conflict arbiter 622 labeled "PC ARB", and an output register 624. Page conflict arbiter 622 has a first input connected to command queue 520, a second input, and an output. Register 624 has a data input connected to the output of page conflict arbiter 622, a clock input for receiving the UCLK signal, and an output. Sub-arbiter 630 includes a page miss arbiter 632 labeled "PM ARB", and an output register 634. Page miss arbiter 632 has a first input connected to command queue 520, a second input, and an output. Register 634 has a data input connected to the output of page miss arbiter 632, a clock input for receiving the UCLK signal, and an output. Final arbiter 650 has a first input connected to the output of page close predictor 662, a second input connected to the output of refresh logic circuit 532, a third input connected to the output of output register 614, a fourth input connected to the output of output register 624, a fifth input connected to the output of output register 634, and a first output for providing an arbitration winner to queue 514. [0041] The output of refresh logic circuit 532 provides a priority indicator with an associated refresh command. Refresh logic circuit 532 also has an input connected to the output of final arbiter 650. [0042] Control circuits 660 include timing block 534 and page table 536 as previously described with respect to FIG. 5, and a page close predictor 662. Timing block 534 has an input and an output connected to the first inputs of page hit arbiter 612, page conflict arbiter 622, and page miss arbiter 632. Page table 536 has an input connected to an output of replay queue 530, an output connected to an input of replay queue 530, an output connected to the input of command queue 520, an output connected to the input of timing block 534, and an output connected to the input of page close predictor 662. Page close predictor 662 has an input connected to one output of page table 536, an input connected to the output of output register 614, and an output connected to the second input of final arbiter 650. [0043] In operation, arbiter 538 selects memory access requests (commands) from command queue 520 and refresh logic circuit 532 by taking into account the page status of each entry and the priority of each refresh command. The memory access priority is based on the intermediate refresh interval, but can be altered based on the page status of the memory access request and on the priority indicator status of the refresh command.
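Structurally, the FIG. 6 description amounts to three sub-arbiters evaluated in parallel, each with a registered (UCLK-clocked) output, feeding a final arbiter that also receives the refresh command and its priority indicator. The skeleton below mirrors that dataflow in software form; the selection policies are deliberately stubbed out (oldest-first, fixed ordering) and are not the actual rules, which the following paragraphs describe.

```python
# Structural sketch of FIG. 6's dataflow: three parallel sub-arbiters with
# registered winners feed one final arbiter. Policies are placeholders.
class SubArbiter:
    def __init__(self, category):          # "hit", "conflict", or "miss"
        self.category = category
        self.registered_winner = None      # models the UCLK output register

    def arbitrate(self, command_queue, timing_eligible):
        eligible = [c for c in command_queue
                    if c["page_status"] == self.category and timing_eligible(c)]
        # Placeholder policy: oldest eligible command wins.
        self.registered_winner = eligible[0] if eligible else None

class FinalArbiter:
    def pick(self, sub_winners, refresh_cmd):
        # Placeholder: the real selection weighs refresh priority and bank
        # conflicts, as described in the surrounding text.
        for candidate in [*sub_winners, refresh_cmd]:
            if candidate is not None:
                return candidate           # arbitration winner goes to queue 514
        return None
```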
Arbiter 538 includes three sub-arbiters that operate in parallel with refresh logic circuit 532 to address the mismatch between the processing and transmission limits of existing integrated circuit technology. The winners of the respective sub-arbitrations are presented to final arbiter 650 along with a refresh command having a priority indicator. Final arbiter 650 selects between these three sub-arbitration winners and a refresh operation from refresh logic 532 to output to queue 514.[0044] Each of page hit arbiter 612, page conflict arbiter 622, and page miss arbiter 632 has an input connected to the output of timing block 534 to determine timing eligibility of commands in command queue 520 that fall into these respective categories. Timing block 534 includes an array of binary counters that count durations related to the particular operations for each bank in each rank. The number of timers needed to determine the status depends on the timing parameter, the number of banks for the given memory type, and the number of ranks supported by the system on a given memory channel. The number of timing parameters that are implemented in turn depends on the type of memory implemented in the system. For example, GDDR5 memories require more timers to comply with more timing parameters than other DDRx memory types. By including an array of generic timers implemented as binary counters, timing block 534 can be scaled and reused for different memory types.[0045] A page hit is a read or write cycle to an open page. Page hit arbiter 612 arbitrates between accesses in command queue 520 to open pages. A page conflict is an access to one row in a bank when another row in the bank is currently activated. Page conflict arbiter 622 arbitrates between accesses in command queue 520 to pages that conflict with the page that is currently open in the corresponding bank and rank. Page conflict arbiter 622 selects a sub-arbitration winner that causes the issuance of a precharge command. A page miss is an access to a bank that is in the precharged state. Page miss arbiter 632 arbitrates between accesses in command queue 520 to precharged memory banks. Arbiter 538 selectively picks accesses from command queue 520 according to the type of memory access. Each of page hit arbiter 612, page conflict arbiter 622, and page miss arbiter 632 outputs a first type of accesses or a second type of accesses.[0046] The first type of accesses and the second type of accesses correspond to different page statuses of corresponding memory accesses in the memory. More specifically, page hit arbiter 612 outputs a first type access. Page conflict arbiter 622 and page miss arbiter 632 each output a second type access. After determining the relative priority among the three sub-arbitration winners, final arbiter 650 then determines whether the sub-arbitration winners conflict with the refresh command (i.e. whether they are directed to the same bank and rank). When there are no such conflicts and the refresh time interval is met, then final arbiter 650 selects the refresh command. When there are conflicts, then final arbiter 650 complies with the following rules. When the priority indicator for the refresh command is a first priority status (intermediate priority) and page hit arbiter 612 selects a pending page hit, then final arbiter 650 selects the access indicated by page hit arbiter 612. 
When the priority indicator for the refresh command is a second priority status (urgent priority) and the sub-arbitration winner is from page hit arbiter 612, final arbiter 650 selects the refresh command indicated by refresh logic circuit 532, thereby prioritizing the refresh command to execute instead of the page hit. In some cases, refresh logic circuit 532 elevates the priority status of the refresh command to an urgent status based on an urgent refresh count threshold. [0047] Refresh logic circuit 532 provides a priority indicator with the refresh command to specify a priority status of the refresh command to final arbiter 650. Refresh logic circuit 532 sets the value of the priority indicator according to a number of pending refreshes. Refresh logic circuit 532 assigns to the priority indicator a first priority status or a second priority status. Refresh logic circuit 532 evenly spreads out per-bank refresh cycles based on a predetermined time period. The predetermined time period is an intermediate refresh interval, a timing-dependent refresh interval that is based on the refresh time interval (tREFI) and the number of memory banks that are assigned to the memory controller. The triggering of the intermediate refresh depends on a threshold of owed refreshes. [0048] Within refresh logic circuit 532, priority is initially set based on the number of pending refreshes. In general, refresh logic circuit 532 elevates the refresh command to execute between the first type of accesses and the second type of accesses. More specifically, final arbiter 650 sends the refresh command when there is no page hit transaction to the target memory banks. In response to the second priority status, final arbiter 650 elevates the refresh command above both the first type of accesses and the second type of accesses. Thereby, in some cases, final arbiter 650 prioritizes the refresh command to execute instead of pending requests to the memory bank. [0049] By using sub-arbiters for page hits, page conflicts, and page misses, arbiter 538 can selectively pick accesses based on the sub-arbitrations and categorize them as a first type of access or a second type of access. Final arbiter 650 can select refresh commands based on input received from refresh logic circuit 532, which generates the refresh command to bank 134 of memory 132 based on the number of pending refreshes. Final arbiter 650 orders the refresh command with respect to the first type of access and the second type of access. The intermediate refresh time interval is a time period that is less than tREFI. Ordering the refresh commands based on the types of memory accesses and according to the number of pending refreshes allows refresh commands to be sent at a higher rate than the refresh time interval alone would dictate, and early enough to avoid penalties due to urgent refreshes. [0050] In other embodiments, arbiter 538 could include a different number of sub-arbiters. For example, arbiter 538 could include two sub-arbiters, one arbiter for page hits and another arbiter for page conflicts and page misses. In this case, arbiter 538 is able to assess page types based on the two sub-arbitrations. [0051] In some embodiments, refresh logic circuit 532 generates the refresh command per bank in one tREFI so that during high workloads, when some banks are refreshing, other transactions are utilizing other banks within memory 132 to more fully take advantage of the bus bandwidth of memory 132.
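Taken together, the rules in paragraphs [0046] through [0048] form a small decision procedure: check whether any sub-arbitration winner targets the refresh command's bank and rank, then order the refresh against the winners according to its priority status. The sketch below restates those rules, simplifying tie-breaking among non-hit winners and assuming the refresh interval has already been met. (As to the interval itself, if per-bank refreshes are spread evenly, the intermediate refresh interval would be on the order of tREFI divided by the number of banks served, e.g., 7.8 µs / 16 banks ≈ 0.49 µs, although the text does not give an exact formula.)

```python
# Restatement of the final-arbiter refresh-ordering rules above;
# simplified (e.g., assumes the refresh interval is already met).
INTERMEDIATE, URGENT = 1, 2

def final_select(sub_winners, refresh):
    """sub_winners: dicts with 'rank', 'bank', 'kind' ('hit'/'conflict'/'miss').
    refresh: dict with 'rank', 'bank', 'priority', or None if no refresh pending."""
    if refresh is None:
        return sub_winners[0] if sub_winners else None
    same_bank = [w for w in sub_winners
                 if (w["rank"], w["bank"]) == (refresh["rank"], refresh["bank"])]
    if not same_bank:
        return refresh                   # no conflict: send the refresh
    page_hit = next((w for w in same_bank if w["kind"] == "hit"), None)
    if page_hit is not None and refresh["priority"] == INTERMEDIATE:
        return page_hit                  # pending hit beats an intermediate refresh
    return refresh                       # urgent refresh, or no page hit pending

winners = [{"rank": 0, "bank": 2, "kind": "hit"}]
refresh = {"rank": 0, "bank": 2, "priority": INTERMEDIATE}
assert final_select(winners, refresh) == winners[0]
```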
In general, to send out the intermediate refresh command during transactions, final arbiter 650 asserts an urgent refresh status for the intermediate refresh command having a first or second priority status when a predetermined clock cycle expires and the page of the bank is closed. This allows the intermediate refresh per-bank command to be generated to memory 132 evenly and consistently. Final arbiter 650 arbitrates an intermediate refresh per-bank command for issuance to the bank between page hits and page misses. Elevating the priority of the intermediate refresh per-bank command so that it issues between page hits and page misses further saves memory 132 from penalties that derive from closing the pages. Advantageously, the intermediate refresh per-bank command alleviates the clock cycles required between opening a row of memory and accessing columns within the row (tRCD), and alleviates the clock cycles required between issuing the precharge command and opening the next row (tRP). [0052] In some embodiments, arbiter 538 relegates the priority of the memory banks using a priority indicator. In response to simultaneously receiving refresh commands to at least two memory banks with an equivalent priority indicator, arbiter 538 relegates the memory bank that is the most recent recipient of a refresh command below the bank of memory that is the least recent recipient of a refresh command. In response to receiving an urgent refresh command from refresh logic circuit 532, arbiter 538 blocks the activation of a row of the corresponding bank so that no new activity is started in the bank. After receiving an urgent refresh command for the bank, arbiter 538 sends the refresh request to the bank under two conditions. First, arbiter 538 sends the urgent refresh command to the bank right away if the refresh timing was met at the same time that the urgent refresh command was generated. Second, if the refresh timing was not met at the same time that the urgent refresh command was generated, arbiter 538 waits for the refresh timing to be met, and then sends a refresh request to the corresponding bank. [0053] FIG. 7 illustrates a block diagram of a refresh logic circuit 700 that may be used for refresh logic circuit 532 of FIGs. 5 and 6 according to some embodiments. Refresh logic circuit 700 generally includes a refresh interval timer 705, a per-bank timer array 710, a pending refresh queue 720, an owed refresh counter 730, a first comparator 740, and a second comparator 750. [0054] Refresh interval timer 705 has an input connected to a clock source and an output for providing an incremental count to owed refresh counter 730. Per-bank timer array 710 has an input for receiving a clock signal, and an output for providing a per-bank refresh to pending refresh queue 720. Pending refresh queue 720 has a first input connected to per-bank timer array 710, a second input, and an output for providing the refresh command to final arbiter 650. Owed refresh counter 730 has a first input labeled "INC", a second input labeled "DEC" connected to the output of final arbiter 650, and an output. The output of owed refresh counter 730 provides an owed refresh count to first comparator 740 and second comparator 750. First comparator 740 also includes a second input for receiving a programmable urgent refresh limit, and an output for providing a priority indicator for the refresh command.
Second comparator 750 also includes a second input for receiving a programmable intermediate refresh limit, and an output labeled "URGENT" for providing a priority indicator to final arbiter 650 with the refresh command signal. Final arbiter 650 provides a "refresh sent" signal to pending refresh queue 720 and owed refresh counter 730 to track the number of pending refreshes. [0055] In operation, refresh logic circuit 700 receives a clock signal for tracking tREFI. Refresh logic circuit 700 determines the intermediate refresh time interval based on tREFI and provides a refresh command based on the clock signal and the total number of banks assigned to the memory controller. Each time an intermediate time period elapses without a refresh being sent, refresh interval timer 705 signals owed refresh counter 730 to increment. Per-bank timer array 710 receives the clock signal and provides a refresh command to pending refresh queue 720 that corresponds to a respective memory bank. Pending refresh queue 720 provides the refresh command and priority indicator to final arbiter 650. Refresh logic circuit 700 sets the value of the priority indicator according to the number of pending refreshes. First comparator 740 compares the number of owed refreshes to the urgent refresh limit and elevates the priority indicator for a pending refresh command when owed refresh counter 730 is above the urgent refresh limit. Second comparator 750 compares the number of owed refreshes to the intermediate refresh limit and sets the priority indicator to a first priority status when owed refresh counter 730 is above the intermediate refresh count threshold. [0056] In some embodiments, refresh logic circuit 700 generates refresh commands per pair of banks. Refresh logic circuit 700 elevates the priority indicator for a pending refresh command for the paired bank when the priority indicator is a first priority status and a refresh timer is above a refresh timing interval. Accordingly, when one of the paired banks is page closed and the intermediate refresh interval has elapsed, the priority indicator is elevated to urgent refresh status for both paired banks. In response to pages being open in the target banks, final arbiter 650 precharges both banks. [0057] By indicating intermediate priority for per-bank refresh commands, refresh logic circuit 700 allows arbiter 538 to send most refreshes in time to avoid latency penalties due to urgent refreshes. Further, memory bandwidth is increased, thereby enabling improved processor performance. In one example, memory bandwidth utilization is increased by approximately 3% for double data rate type six synchronous graphics random-access memory (GDDR6) when intermediate refresh per bank is used, in comparison to when only a casual or urgent refresh per bank scheme is utilized. [0058] The circuits of FIGs. 5, 6, and 7 may be implemented with various combinations of hardware and software. For example, the hardware circuitry may include priority encoders, finite state machines, programmable logic arrays (PLAs), and the like. Alternatively, arbiter 538 could be implemented with a microcontroller executing stored program instructions to evaluate the relative timing eligibility of the pending commands. In this case, some of the instructions may be stored in a non-transitory computer memory or computer readable storage medium for execution by the microcontroller.
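In the spirit of the software implementation contemplated above, the owed-refresh bookkeeping of paragraph [0055] can be modeled as one counter and two programmable thresholds: the counter increments each time an intermediate interval elapses without a refresh being sent, decrements on the final arbiter's "refresh sent" signal, and the two comparators derive the priority status from the count. A sketch under those assumptions follows, with illustrative limit values standing in for the programmable intermediate and urgent refresh limits.

```python
# Model of owed-refresh counting and priority derivation (FIG. 7 behavior).
# Threshold values are illustrative placeholders for the programmable limits.
INTERMEDIATE_LIMIT = 1   # second comparator: sets intermediate (first) priority
URGENT_LIMIT = 4         # first comparator: elevates to urgent (second) priority

class OwedRefreshCounter:
    def __init__(self):
        self.owed = 0

    def on_interval_elapsed(self):   # interval timer fired with no refresh sent
        self.owed += 1

    def on_refresh_sent(self):       # final arbiter's "refresh sent" signal
        self.owed = max(0, self.owed - 1)

    def priority(self):
        if self.owed > URGENT_LIMIT:
            return "urgent"
        if self.owed > INTERMEDIATE_LIMIT:
            return "intermediate"
        return None                   # no refresh pressure yet

c = OwedRefreshCounter()
for _ in range(3):
    c.on_interval_elapsed()
assert c.priority() == "intermediate"
```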
In various embodiments, the non-transitory computer readable storage medium includes a magnetic or optical disk storage device, solid-state storage devices such as Flash memory, or other non-volatile memory device or devices. The computer readable instructions stored on the non-transitory computer readable storage medium may be in source code, assembly language code, object code, or another instruction format that is interpreted and/or executable by one or more processors. [0059] APU 110 of FIG. 1 or memory controller 500 of FIG. 5, or any portions thereof, such as arbiter 538, may be described or represented by a computer accessible data structure in the form of a database or other data structure which can be read by a program and used, directly or indirectly, to fabricate integrated circuits. For example, this data structure may be a behavioral-level description or register-transfer level (RTL) description of the hardware functionality in a high-level design language (HDL) such as Verilog or VHDL. The description may be read by a synthesis tool, which may synthesize the description to produce a netlist comprising a list of gates from a synthesis library. The netlist includes a set of gates that also represent the functionality of the hardware comprising the integrated circuits. The netlist may then be placed and routed to produce a data set describing geometric shapes to be applied to masks. The masks may then be used in various semiconductor fabrication steps to produce the integrated circuits. Alternatively, the database on the computer accessible storage medium may be the netlist (with or without the synthesis library) or the data set, as desired, or Graphic Data System (GDS) II data. [0060] While particular embodiments have been described, various modifications to these embodiments will be apparent to those skilled in the art. For example, the internal architecture of memory controller 500 and/or arbiter 538 may vary in different embodiments. Memory controller 500 may interface to other types of memory besides DDRx memory, such as high bandwidth memory (HBM), Rambus DRAM (RDRAM), synchronous graphics random access memory, and the like. While the illustrated embodiment showed each bank of memory corresponding to an intermediate refresh per-bank time interval, in other embodiments both banks of a paired bank of memory can support responding to an intermediate refresh per-bank time interval. [0061] Accordingly, it is intended by the appended claims to cover all modifications of the disclosed embodiments that fall within the scope of the disclosed embodiments. |
A method used in forming an array of elevationally-extending strings of memory cells comprises forming a lower stack comprising vertically-alternating insulative tiers and wordline tiers. Lower channel openings are in the lower stack. A bridge is epitaxially grown that covers individual of the lower channel openings. A lower void space is beneath individual of the bridges in the individual lower channel openings. An upper stack is formed above the lower stack. The upper stack comprises vertically-alternating insulative tiers and wordline tiers. Upper channel openings are formed into the upper stack to the individual bridges to form interconnected channel openings individually comprising one of the individual lower channel openings and individual of the upper channel openings. The interconnected channel openings individually have one of the individual bridges there-across. The individual bridges are penetrated through to uncover individual of the lower void spaces. Transistor channel material is formed in an upper portion of the interconnected channel openings elevationally along the vertically-alternating tiers in the upper stack. |
1. A method of forming an array of vertically extending memory cell strings, comprising: forming a lower stack comprising vertically alternating insulating layers and word line layers, with lower channel openings in the lower stack; epitaxially growing bridges covering individual of the lower channel openings, a lower void space lying beneath the individual bridges in the individual lower channel openings; forming an upper stack above the lower stack, the upper stack comprising vertically alternating insulating layers and word line layers; forming upper channel openings in the upper stack to the individual bridges to individually form interconnect channel openings comprising one of the individual lower channel openings and one of the individual upper channel openings, the interconnect channel openings individually having one of the individual bridges spanning them; penetrating through the individual bridges to expose individual of the lower void spaces; and forming transistor channel material in an upper portion of the interconnect channel openings vertically along the vertically alternating layers in the upper stack.
2. The method of claim 1, wherein the bridges comprise elemental silicon.
3. The method of claim 1, wherein the bridges comprise SiGe.
4. The method of claim 1, wherein the epitaxial growth comprises heteroepitaxial growth.
5. The method of claim 1, wherein the epitaxial growth is performed selectively from sidewall surfaces of a seed material surrounding the individual lower channel openings.
6. The method of claim 5, comprising masking material atop the seed material during the epitaxial growth, and further comprising removing all of the masking material before forming the upper stack.
7. The method of claim 5, comprising masking material atop the seed material during the epitaxial growth, and further comprising forming the upper stack atop the masking material.
8. The method of claim 1, wherein the epitaxial growth is performed from sidewall surfaces and a top surface of a seed material surrounding the individual lower channel openings.
9. The method of claim 1, wherein the epitaxial growth is performed from sidewall surfaces of a seed material surrounding the individual lower channel openings, the sidewalls of the individual lower channel openings below the seed material being masked during the epitaxial growth.
10. The method of claim 9, wherein the sidewalls of the individual lower channel openings below the seed material are masked by a masking material during the epitaxial growth, the method additionally comprising: forming the masking material over all of the sidewall surfaces of the seed material within the individual lower channel openings; and before the epitaxial growth, vertically recessing the masking material below a top surface of the seed material.
11. The method of claim 10, wherein the vertical recessing proceeds at least down to a bottom surface of the seed material.
12. The method of claim 1, wherein the forming of the transistor channel material forms the transistor channel material simultaneously in both the upper channel openings and the lower channel openings of the interconnect channel openings.
13. A method of forming an array of vertically extending memory cell strings, comprising: forming a lower stack comprising vertically alternating insulating layers and word line layers, with lower channel openings in the lower stack, lower-stack memory cell material spanning the bases of individual of the lower channel openings and extending along the sidewalls of the individual lower channel openings; removing a portion of the lower-stack memory cell material that spans the individual bases in the individual lower channel openings; epitaxially growing bridges covering the individual lower channel openings, a lower void space lying beneath the individual bridges in the individual lower channel openings; forming an upper stack above the lower stack, the upper stack comprising vertically alternating insulating layers and word line layers; forming upper channel openings in the upper stack to the individual bridges to individually form interconnect channel openings comprising one of the individual lower channel openings and one of the individual upper channel openings, the interconnect channel openings individually having one of the individual bridges spanning them; forming upper-stack memory cell material spanning the bases of the individual upper channel openings and extending along the sidewalls of the individual upper channel openings; removing a portion of the upper-stack memory cell material that spans the individual bases in the individual upper channel openings; penetrating through the individual bridges to expose individual of the lower void spaces; and forming transistor channel material in an upper portion of the interconnect channel openings vertically along the vertically alternating layers in the upper stack.
14. The method of claim 13, wherein the epitaxial growth is performed from sidewall surfaces of a seed material surrounding the individual lower channel openings, and further comprising vertically recessing the lower-stack memory cell material below a top surface of the seed material before the epitaxial growth.
15. The method of claim 13, comprising, after removing the portion of the lower-stack memory cell material and before the epitaxial growth, forming a sacrificial liner along the sidewalls of the individual lower channel openings and the individual bases.
16. The method of claim 15, comprising removing all of the sacrificial liner after the penetrating.
17. The method of claim 15, wherein the sacrificial liner underfills the individual lower channel openings, and further comprising, after forming the sacrificial liner, filling all remaining volume of the individual lower channel openings with sacrificial fill material.
18. The method of claim 17, wherein the sacrificial liner comprises polysilicon.
19. The method of claim 18, wherein the sacrificial liner comprises silicon dioxide radially inward of the polysilicon.
20. The method of claim 17, wherein the epitaxial growth is performed from sidewall surfaces of a seed material surrounding the individual lower channel openings, and further comprising vertically recessing the sacrificial liner and the sacrificial fill material below a top surface of the seed material.
21. The method of claim 20, comprising vertically recessing the sacrificial fill material before vertically recessing the sacrificial liner.
22. The method of claim 20, wherein the sacrificial fill material is vertically recessed at least down to a bottom surface of the seed material.
23. The method of claim 13, comprising, after removing the portion of the upper-stack memory cell material and before the penetrating, forming a sacrificial liner along the sidewalls of the individual upper channel openings and the individual bases.
24. A method of forming an array of vertical memory cell strings, comprising: forming a lower stack comprising vertically alternating insulating layers and word line layers, with lower channel openings in the lower stack; epitaxially growing bridges covering individual of the lower channel openings, a lower void space lying beneath the individual bridges in the individual lower channel openings; forming an upper stack above the lower stack, the upper stack comprising vertically alternating insulating layers and word line layers; forming upper channel openings in the upper stack to the individual bridges to individually form interconnect channel openings comprising one of the individual lower channel openings and one of the individual upper channel openings, the interconnect channel openings individually having one of the individual bridges spanning them; penetrating through the individual bridges to expose individual of the lower void spaces; forming transistor channel material vertically in individual of the interconnect channel openings along the vertically alternating layers in the upper and lower stacks; and forming the word line layers to comprise control gate material having ends corresponding to control gate regions of individual memory cells, charge storage material between the transistor channel material and the control gate regions, insulating charge transfer material between the transistor channel material and the charge storage material, and a charge blocking region between the charge storage material and individual of the control gate regions.
25. The method of claim 24, wherein the forming of the word line layers to comprise control gate material is performed after forming the transistor channel material.
26. The method of claim 24, wherein the forming of the word line layers to comprise control gate material is performed before forming the transistor channel material.
27. A method of forming an array of vertically extending memory cell strings, comprising: forming a lower stack comprising vertically alternating insulating layers and word line layers, the lower-stack insulating layers comprising insulating lower-stack first material, the lower-stack word line layers comprising lower-stack second material of different composition from the lower-stack first material, with lower channel openings in the lower stack; forming at least one of (a) lower-stack charge blocking material in the individual lower channel openings and across the bases of the individual lower channel openings, or (b) lower-stack charge storage material in the individual lower channel openings and across the bases of the individual lower channel openings; removing a portion of the at least one of (a) and (b) that is across the individual bases in the individual lower channel openings; epitaxially growing bridges covering the individual lower channel openings, a lower void space lying beneath the individual bridges in the individual lower channel openings; forming an upper stack above the lower stack, the upper stack comprising vertically alternating insulating layers and word line layers, the upper-stack insulating layers comprising insulating upper-stack first material, the upper-stack word line layers comprising upper-stack second material of different composition from the upper-stack first material; forming upper channel openings in the upper stack to the individual bridges to individually form interconnect channel openings comprising one of the individual lower channel openings and one of the individual upper channel openings, the interconnect channel openings individually having one of the individual bridges spanning them; forming at least one of (c) upper-stack charge blocking material in the individual upper channel openings and across the bases of the individual upper channel openings, or (d) upper-stack charge storage material in the individual upper channel openings and across the bases of the individual upper channel openings; removing a portion of the at least one of (c) and (d) that is across the individual bases in the individual upper channel openings; penetrating through the individual bridges to expose individual of the lower void spaces; forming transistor channel material in an upper portion of the interconnect channel openings vertically along the vertically alternating layers in the upper stack; forming horizontally elongated trenches in the upper and lower stacks; selectively etching the upper-stack second material and the lower-stack second material of the word line layers relative to the insulating upper-stack first material and the insulating lower-stack first material; forming control gate material through the trenches in the word line layers so as to lie vertically between the insulating upper-stack first materials of the upper-stack alternating layers and vertically between the insulating lower-stack first materials of the lower-stack alternating layers, the control gate material having ends corresponding to control gate regions of individual memory cells; removing the control gate material from the individual trenches; and forming the word line layers to comprise charge storage material between the transistor channel material and the control gate regions, insulating charge transfer material between the transistor channel material and the charge storage material, and a charge blocking region between the charge storage material and individual of the control gate regions. |
Method for forming an array of vertically extending memory cell strings

Technical Field

The embodiments disclosed herein relate to methods of forming arrays of vertically extending memory cell strings.

Background

Memory is one type of integrated circuit and is used in computer systems to store data. Memory may be fabricated as one or more arrays of individual memory cells. Digit lines (which may also be referred to as bit lines, data lines, or sense lines) and access lines (which may also be referred to as word lines) may be used to write to or read from the memory cells. The sense lines may electrically interconnect memory cells along the columns of the array, and the access lines may electrically interconnect memory cells along the rows of the array. Each memory cell may be uniquely addressed through a combination of a sense line and an access line.

Memory cells may be volatile, semi-volatile, or non-volatile. Non-volatile memory cells can store data for extended periods of time without power. Non-volatile memory is conventionally specified as memory having a retention time of at least about ten years. Volatile memory dissipates and is therefore refreshed/rewritten to maintain data storage. Volatile memory may have a retention time of milliseconds or less. Regardless, memory cells are configured to retain or store data in at least two different selectable states. In a binary system, the states are considered as either a "0" or a "1". In other systems, at least some individual memory cells may be configured to store more than two bits or states of information.

A field effect transistor is one type of electronic component that may be used in a memory cell. These transistors comprise a pair of conductive source/drain regions having a semiconductive channel region therebetween. A conductive gate is adjacent the channel region and separated therefrom by a thin gate insulator. Application of a suitable voltage to the gate allows current to flow from one of the source/drain regions to the other through the channel region. When the voltage is removed from the gate, current is largely prevented from flowing through the channel region. Field effect transistors may also include additional structure, for example a reversibly programmable charge-storage region that is part of the gate construction between the gate insulator and the conductive gate.

Flash memory is one type of memory and is widely used in modern computers and devices. For instance, a modern personal computer may have its BIOS stored on a flash memory chip. As another example, it is increasingly common for computers and other devices to utilize flash memory in the form of solid-state drives in place of conventional hard disk drives. As yet another example, flash memory is popular in wireless electronic devices because it enables manufacturers to support new communication protocols as they become standardized, and provides the ability to remotely upgrade devices for enhanced features.

NAND may be a basic architecture of integrated flash memory. A NAND cell unit comprises at least one select device coupled in series with a serial combination of memory cells (the serial combination commonly being referred to as a NAND string).
The NAND architecture may be configured in a three-dimensional arrangement comprising vertically stacked memory cells that individually comprise a reversibly programmable vertical transistor. Control circuitry or other circuitry may be formed beneath the vertically stacked memory cells.

Summary of the Invention

An aspect of the present disclosure relates to a method for forming an array of vertically extending memory cell strings, which includes: forming a lower stack comprising vertically alternating insulating layers and word line layers, with lower channel openings in the lower stack; epitaxially growing bridges covering individual of the lower channel openings, a lower void space lying beneath the individual bridges in the individual lower channel openings; forming an upper stack above the lower stack, the upper stack comprising vertically alternating insulating layers and word line layers; forming upper channel openings in the upper stack to the individual bridges to individually form interconnect channel openings comprising one of the individual lower channel openings and one of the individual upper channel openings, the interconnect channel openings individually having one of the individual bridges spanning them; penetrating through the individual bridges to expose individual of the lower void spaces; and forming transistor channel material in an upper portion of the interconnect channel openings vertically along the vertically alternating layers in the upper stack.

Another aspect of the present disclosure relates to a method for forming an array of vertically extending memory cell strings, which includes: forming a lower stack comprising vertically alternating insulating layers and word line layers, with lower channel openings in the lower stack, lower-stack memory cell material spanning the bases of individual of the lower channel openings and extending along the sidewalls of the individual lower channel openings; removing a portion of the lower-stack memory cell material that spans the individual bases in the individual lower channel openings; epitaxially growing bridges covering the individual lower channel openings, a lower void space lying beneath the individual bridges in the individual lower channel openings; forming an upper stack above the lower stack, the upper stack comprising vertically alternating insulating layers and word line layers; forming upper channel openings in the upper stack to the individual bridges to individually form interconnect channel openings comprising one of the individual lower channel openings and one of the individual upper channel openings, the interconnect channel openings individually having one of the individual bridges spanning them; forming upper-stack memory cell material spanning the bases of the individual upper channel openings and extending along the sidewalls of the individual upper channel openings; removing a portion of the upper-stack memory cell material that spans the individual bases in the individual upper channel openings; penetrating through the individual bridges to expose individual of the lower void spaces; and forming transistor channel material in an upper portion of the interconnect channel openings vertically along the vertically alternating layers in the upper stack.

Another aspect of the present disclosure relates to a method of forming an array of vertical memory cell strings, which includes: forming a lower stack comprising vertically alternating insulating layers and word line layers, with lower channel openings in the lower stack; epitaxially growing bridges covering individual of the lower channel openings, a lower void space lying beneath the individual bridges in the individual lower channel openings; forming an upper stack above the lower stack, the upper stack comprising vertically alternating insulating layers and word line layers; forming upper channel openings in the upper stack to the individual bridges to individually form interconnect channel openings comprising one of the individual lower channel openings and one of the individual upper channel openings, the interconnect channel openings individually having one of the individual bridges spanning them; penetrating through the individual bridges to expose individual of the lower void spaces; forming transistor channel material vertically in individual of the interconnect channel openings along the vertically alternating layers in the upper and lower stacks; and forming the word line layers to comprise control gate material having ends corresponding to control gate regions of individual memory cells, charge storage material between the transistor channel material and the control gate regions, insulating charge transfer material between the transistor channel material and the charge storage material, and a charge blocking region between the charge storage material and individual of the control gate regions.

Another aspect of the present disclosure relates to a method of forming an array of vertically extending memory cell strings, which includes: forming a lower stack comprising vertically alternating insulating layers and word line layers, the lower-stack insulating layers comprising insulating lower-stack first material, the lower-stack word line layers comprising lower-stack second material of different composition from the lower-stack first material, with lower channel openings in the lower stack; forming at least one of (a) lower-stack charge blocking material in the individual lower channel openings and across the bases of the individual lower channel openings, or (b) lower-stack charge storage material in the individual lower channel openings and across the bases of the individual lower channel openings; removing a portion of the at least one of (a) and (b) that is across the individual bases in the individual lower channel openings; epitaxially growing bridges covering the individual lower channel openings, a lower void space lying beneath the individual bridges in the individual lower channel openings; forming an upper stack above the lower stack, the upper stack comprising vertically alternating insulating layers and word line layers, the upper-stack insulating layers comprising insulating upper-stack first material, the upper-stack word line layers comprising upper-stack second material of different composition from the upper-stack first material; forming upper channel openings in the upper stack to the individual bridges to individually form interconnect channel openings comprising one of the individual lower channel openings and one of the individual upper channel openings, the interconnect channel openings individually having one of the individual bridges spanning them; forming at least one of (c) upper-stack charge blocking material in the individual upper channel openings and across the bases of the individual upper channel openings, or (d) upper-stack charge storage material in the individual upper channel openings and across the bases of the individual upper channel openings; removing a portion of the at least one of (c) and (d) that is across the individual bases in the individual upper channel openings; penetrating through the individual bridges to expose individual of the lower void spaces; forming transistor channel material in an upper portion of the interconnect channel openings vertically along the vertically alternating layers in the upper stack; forming horizontally elongated trenches in the upper and lower stacks; selectively etching the upper-stack second material and the lower-stack second material of the word line layers relative to the insulating upper-stack first material and the insulating lower-stack first material; forming control gate material through the trenches in the word line layers so as to lie vertically between the insulating upper-stack first materials of the upper-stack alternating layers and vertically between the insulating lower-stack first materials of the lower-stack alternating layers, the control gate material having ends corresponding to control gate regions of individual memory cells; removing the control gate material from the individual trenches; and forming the word line layers to comprise charge storage material between the transistor channel material and the control gate regions, insulating charge transfer material between the transistor channel material and the charge storage material, and a charge blocking region between the charge storage material and individual of the control gate regions.

Brief Description of the Drawings

FIG. 1 is a diagrammatic cross-sectional view of a portion of a substrate in process according to an embodiment of the invention, and is taken through line 1-1 in FIG. 2.
FIG. 2 is a view taken through line 2-2 in FIG. 1.
FIG. 3 is a view of the substrate of FIG. 2 at a processing step subsequent to that shown in FIG. 2.
FIG. 4 is a view of the substrate of FIG. 3 at a processing step subsequent to that shown in FIG. 3.
FIG. 5 is a view of the substrate of FIG. 4 at a processing step subsequent to that shown in FIG. 4.
FIG. 6 is a view of the substrate of FIG. 5 at a processing step subsequent to that shown in FIG. 5.
FIG. 7 is a view of the substrate of FIG. 6 at a processing step subsequent to that shown in FIG. 6.
FIG. 8 is a view of the substrate of FIG. 7 at a processing step subsequent to that shown in FIG. 7, and is taken through line 8-8 in FIG. 9.
FIG. 9 is a view taken through line 9-9 in FIG. 8.
FIG. 10 is a view of the substrate of FIG. 9 at a processing step subsequent to that shown in FIG. 9.
FIG. 11 is a view of the substrate of FIG. 10 at a processing step subsequent to that shown in FIG. 10.
FIG. 12 is a view of the substrate of FIG. 11 at a processing step subsequent to that shown in FIG. 11.
FIG. 13 is a view of the substrate of FIG. 12 at a processing step subsequent to that shown in FIG. 12.
FIG. 14 is a view of the substrate of FIG. 13 at a processing step subsequent to that shown in FIG. 13.
FIG. 15 is a view of the substrate of FIG. 14 at a processing step subsequent to that shown in FIG. 14.
FIG. 16 is a view of the substrate of FIG. 15 at a processing step subsequent to that shown in FIG. 15.
FIG. 17 is a view of the substrate of FIG. 16 at a processing step subsequent to that shown in FIG. 16.
FIG. 18 is a view of the substrate of FIG. 17 at a processing step subsequent to that shown in FIG. 17.
FIG. 19 is a view of the substrate of FIG. 18 at a processing step subsequent to that shown in FIG. 18, and is taken through line 19-19 in FIG. 20.
FIG. 20 is a view taken through line 20-20 in FIG. 19.
FIG. 21 is a view of the substrate of FIG. 20 at a processing step subsequent to that shown in FIG. 20.
FIG. 22 is a view of the substrate of FIG. 21 at a processing step subsequent to that shown in FIG. 21.
FIG. 23 is a view of the substrate of FIG. 22 at a processing step subsequent to that shown in FIG. 22, and is taken through line 23-23 in FIG. 24.
FIG. 24 is a view taken through line 24-24 in FIG. 23.
FIG. 24A is an enlarged view of a portion of the substrate shown in FIG. 24.
FIG. 25 is a view of the substrate of FIG. 23 at a processing step subsequent to that shown in FIG. 23, and is taken through line 25-25 in FIG. 26.
FIG. 26 is a view taken through line 26-26 in FIG. 25.
FIG. 27 is a diagrammatic cross-sectional view of a portion of a substrate in process according to an embodiment of the invention.
FIG. 28 is a view of the substrate of FIG. 27 at a processing step subsequent to that shown in FIG. 27.
FIG. 29 is a view of the substrate of FIG. 28 at a processing step subsequent to that shown in FIG. 28.
FIG. 30 is a view of the substrate of FIG. 29 at a processing step subsequent to that shown in FIG. 29.
FIG. 31 is a view of the substrate of FIG. 30 at a processing step subsequent to that shown in FIG. 30.
FIG. 32 is a view of the substrate of FIG. 31 at a processing step subsequent to that shown in FIG. 31.

Detailed Description

Embodiments of the invention encompass methods of forming an array of vertically extending strings of transistors and/or memory cells, such as an array of NAND or other memory cells having peripheral control circuitry below the array (e.g., CMOS under-array). Embodiments of the invention encompass the so-called "gate-last" or "replacement-gate" process, the so-called "gate-first" process, and other processes, whether existing or developed in the future, independent of when the transistor gates are formed. A first example embodiment, which may be regarded as a "gate-last" or "replacement-gate" process, is described with reference to FIGS. 1-26.

FIGS. 1 and 2 show a substrate construction 10 during a method of forming an array 12 of vertically extending strings of transistors and/or memory cells. Substrate construction 10 includes a base substrate 11 having any one or more of conductive/conductor/conducting (i.e., electrically herein), semiconductive/semiconductor/semiconducting, or insulating/insulator/insulative (i.e., electrically herein) materials. Various materials have been formed vertically over base substrate 11. Materials may be alongside, vertically inward of, or vertically outward of the materials depicted in FIGS. 1 and 2. For example, other partially fabricated or fully fabricated components of integrated circuitry may be provided above, around, or within base substrate 11. Control circuitry and/or other peripheral circuitry for operating components within an array (e.g., array 12) of vertically extending memory cell strings may also be fabricated, and such circuitry may or may not be wholly or partially within an array or sub-array. Further, multiple sub-arrays may be fabricated and operated independently of one another, in tandem, or otherwise. Herein, a "sub-array" may also be regarded as an array.

Substrate construction 10 includes a lower stack 18 comprising vertically alternating insulating layers 20 and word line layers 22 directly above an example conductively doped semiconductor material 16 (e.g., conductively doped polysilicon).
Conductive material 16 may comprise a portion of the control circuitry (e.g., peripheral circuitry under the array) used to control read and write access to the transistors and/or memory cells to be formed within array 12. The lower-stack insulating layers 20 comprise insulating lower-stack first material 24 (e.g., silicon dioxide). The lower-stack word line layers 22 comprise lower-stack second material 26 (e.g., silicon nitride, which regardless may be wholly or partially sacrificial) of different composition from lower-stack first material 24. In one embodiment, lower stack 18 comprises a seed material 14, which may be one of layers 20 or 22 (e.g., a layer 20 as shown) and which may be wholly or partially sacrificial. Seed material 14 will provide one or more surfaces from which epitaxial growth will proceed, as described below. Lower channel openings 25 have been formed (e.g., by dry anisotropic etching) in the alternating layers 20, 22, with the example seed material 14 (and materials 24 and 26) surrounding individual lower channel openings 25. In one embodiment, lower channel openings 25 have individual bases 21 within material 16.

By way of example only, lower channel openings 25 are shown arranged in staggered rows or columns of four openings 25 per row. Any alternative existing or future-developed arrangement and configuration may be used. "Rows" and "columns" are used in this document to facilitate distinguishing one series or orientation of features from another series or orientation of features along which components have been or may be formed. "Row" and "column" are used synonymously with respect to any series of regions, components, and/or features regardless of function. Regardless, the rows may be straight and/or curved and/or parallel and/or non-parallel relative to one another, as may be the columns. Further, the rows and columns may intersect one another at 90° or at one or more other angles. Other circuitry, which may or may not be part of the peripheral circuitry, may be between conductively doped semiconductor material 16 and stack 18.

Referring to FIG. 3, lower-stack memory cell material 30 has been formed in lower channel openings 25 across individual bases 21 and along the sidewalls of lower channel openings 25. In the context of this document, "memory cell material" is any material that comprises operative material in the completed memory cell construction, including, by way of example only, any one or more of gate material, source/drain material, charge blocking material, charge storage material, charge transfer material, gate dielectric, and channel material. In one embodiment, memory cell material 30 comprises at least one of (a) lower-stack charge blocking material, or (b) lower-stack charge storage material (e.g., floating-gate material such as doped or undoped silicon, or charge-trapping material such as silicon nitride, metal dots, etc.). In one such embodiment, the memory cell material comprises (a). In one such embodiment, the memory cell material comprises (b). In one such embodiment, the memory cell material comprises (a) and (b). Memory cell material 30 may be formed, for example, by depositing a thin layer thereof over lower stack 18 and within individual lower channel openings 25, and then planarizing such material at least back to the vertically outermost surface of stack 18.

Referring to FIG. 4, a portion (e.g., a laterally central portion and/or radially central portion) of lower-stack memory cell material 30 that is across individual bases 21 in individual lower channel openings 25 has been removed. For example, this may be done by maskless anisotropic etching of material 30 using one or more etching chemistries, and such etching may be performed before such material is first removed from the horizontal top surfaces of lower stack 18 shown in FIG. 3.

Referring to FIG. 5, and in one embodiment, after removing the portion of lower-stack memory cell material 30, a sacrificial liner 15 has been formed across individual bases 21 and along the sidewalls of individual lower channel openings 25. In one embodiment and as shown, sacrificial liner 15 underfills individual lower channel openings 25, and the method further comprises, after forming sacrificial liner 15, filling all remaining volume of individual lower channel openings 25 with sacrificial fill material 27 (e.g., alumina and/or photoresist). In one embodiment, sacrificial liner 15 comprises polysilicon 19, and in one such embodiment comprises silicon dioxide 23 radially inward of polysilicon 19.

The memory cell material, sacrificial liner, and sacrificial fill material are vertically recessed below the top surface of seed material 14 (i.e., at least vertically recessed below that top surface). FIG. 6 shows an example embodiment in which sacrificial fill material 27 has been vertically recessed (i.e., at least recessed) down to the bottom surface of seed material 14. Thereafter, and referring to FIG. 7, the unmasked portions of sacrificial liner 15 and memory cell material 30 have been recessed below the top surface of seed material 14, and sacrificial fill material 27 (not shown) has then been removed. As an alternative example, sacrificial liner 15 (not shown) may not be used, with sacrificial fill material 27 alone being used to mask the lower portions of memory cell material 30 from removal. A reason for using sacrificial liner 15 is described below.

Referring to FIGS. 8 and 9, bridges 28 have been epitaxially grown to cover individual lower channel openings 25. This forms a lower void space 33 beneath individual bridges 28 in individual lower channel openings 25. Bridges 28 may extend upward and/or downward relative to seed material 14 (both cases are shown). In one embodiment, the epitaxial growth comprises heteroepitaxial growth. In one embodiment, bridges 28 comprise elemental silicon (e.g., which may be grown epitaxially from single-crystal or polycrystalline silicon seed material 14, or from silicon nitride seed material 14). In one embodiment, bridges 28 comprise SiGe (e.g., which may also be grown epitaxially from single-crystal or polycrystalline silicon seed material 14, or from silicon nitride seed material 14). In one embodiment and as shown, the epitaxial growth proceeds selectively from sidewall surfaces 31 of the seed material 14 surrounding individual lower channel openings 25. In one such embodiment and as shown, masking material (e.g., material 24 of the uppermost layer in stack 18) is on top of seed material 14 during the epitaxial growth, for example such that epitaxial growth of the material of bridges 28 does not proceed from the top surface of seed material 14.
Alternatively, and by way of example only, the epitaxial growth may proceed from both the sidewall surfaces 31 and the top surface (not shown) of the seed material 14. Regardless, and in one embodiment, the epitaxial growth of the material of the bridges 28 may proceed from the sidewall surfaces 31 of the seed material 14 while the sidewalls of the individual lower channel openings 25 below the seed material 14 are masked during the epitaxial growth (for example, by the liner 15 as shown). In one such embodiment, the liner 15 may be regarded as a masking material that was originally formed over the entire sidewall surfaces 31 of the seed material 14 within the individual lower channel openings 25 (and across the substrates 21) and that, prior to the epitaxial growth, is recessed vertically to below the top surface of the seed material 14, and in one embodiment at least down to the bottom surface of the seed material 14. The sacrificial liner 15 may ideally be used to prevent epitaxial growth from proceeding from the substrates 21 where the material exposed there (e.g., material 16) could otherwise also seed epitaxial growth. However, epitaxial growth may occur from the substrates 21 (not shown), regardless of whether such epitaxially grown material is subsequently removed.

Referring to FIG. 10, and in one embodiment, all masking material (e.g., all of the material 24 in the uppermost layer of the stack 18 in FIG. 9) has been removed (e.g., by etching and/or polishing) and is thus not shown in FIG. 10.

Referring to FIG. 11, an upper stack 35 has been formed above the lower stack 18. The upper stack 35 includes vertically alternating insulating layers 20 and word line layers 22. In one embodiment, the upper-stack insulating layers 20 include an insulating upper-stack first material 24 (which may be of the same or different composition as the lower-stack first material 24), and the upper-stack word line layers 22 include an upper-stack second material 26 having a different composition from the upper-stack first material 24 (and which may be of the same or different composition as the lower-stack second material 26). Only a few layers 20, 22 are shown in each of the upper and lower stacks, but there may be many more layers in each stack (e.g., tens, hundreds, etc.), and the stacks need not have the same number of layers relative to one another.

Referring to FIG. 12, upper channel openings 37 have been formed in the upper stack 35 (e.g., by dry anisotropic etching) to the individual bridges 28, thereby individually forming interconnected channel openings 47 that individually comprise one of the individual lower channel openings 25 and one of the individual upper channel openings 37 and that individually have one of the individual bridges 28 extending thereacross. The upper channel openings 37 may be considered as having individual substrates 39. Formation of the upper channel openings 37 may vertically recess the bridges 28 as shown.

Referring to FIG. 13, and in one embodiment, upper-stack memory cell material 30 has been formed across the individual substrates 39 of the individual upper channel openings 37. The upper-stack memory cell material 30 may be of the same or different composition as the lower-stack memory cell material 30. Regardless, in one embodiment, the upper-stack memory cell material 30 includes at least one of (c) upper-stack charge blocking material or (d) upper-stack charge storage material (e.g., floating gate material, such as doped or undoped silicon, or charge trapping material, such as silicon nitride, metal dots, etc.).
In one such embodiment, the memory cell material includes (c). In one such embodiment, the memory cell material includes (d). In one such embodiment, the memory cell material includes (c) and (d). The upper-stack memory cell material 30 may be formed by, for example, depositing a thin layer thereof over the upper stack 35 and within the individual upper channel openings 37, and then planarizing this material at least back to the vertically outermost surface of the stack 35.

Referring to FIG. 14, a portion of the upper-stack memory cell material 30 across the individual substrates 39 in the individual upper channel openings 37 has been removed (e.g., by maskless anisotropic etching).

Referring to FIG. 15, and in one embodiment, a sacrificial liner 41 (for example, polysilicon) that less-than-fills the individual upper channel openings 37 has been formed in the upper channel openings 37.

Referring to FIG. 16, the individual bridges 28 (and the sacrificial liners 41, when present) have been penetrated to expose the individual lower void spaces 33. By way of example, this penetration may be performed by wet or dry, selective, anisotropic or isotropic etching (e.g., using tetramethylammonium hydroxide and/or ammonium peroxide chemistries).

Referring to FIG. 17, all of the sacrificial liners 15 and 41 (both not shown) have been removed after penetrating the bridges 28 (for example, by selective anisotropic etching).

Referring to FIG. 18, and in the case where, for example, the memory cell material 30 includes a charge blocking material, charge storage material 32 has been formed vertically in the interconnected channel openings 47 along the alternating layers 20, 22 and the charge blocking material 30. An insulating charge transfer material 34 has been formed in the interconnected channel openings 47 along the alternating layers 20, 22 and the charge storage material 32. By way of example, the charge transfer material 34 may be a band-gap-engineered structure with a nitrogen-containing material (e.g., silicon nitride) sandwiched between two insulator oxides (e.g., silicon dioxide).

Transistor channel material 36 has been formed vertically in at least the upper portions of the interconnected channel openings 47 along at least the vertically alternating layers 20, 22 in the upper stack 35. In one embodiment and as shown, the transistor channel material 36 is formed simultaneously in both the upper channel openings 37 and the lower channel openings 25 of the interconnected channel openings 47. Example channel materials 36 include suitably doped crystalline semiconductor materials, such as one or more of silicon, germanium, and so-called Group III/Group V semiconductor materials (e.g., GaAs, InP, GaP, and GaN). An example thickness of each of the materials 30, 32, 34, and 36 is 25 to 100 angstroms. The interconnected channel openings 47 are shown as including a radially central solid dielectric material 38 (e.g., spin-on dielectric, silicon dioxide, and/or silicon nitride). Alternatively, and by way of example only, the radially central portion within the interconnected channel openings 47 may include void space (not shown) and/or be free of solid material (not shown).

Referring to FIGS. 19 and 20, horizontally elongated (FIG. 19) trenches 40 have been formed (e.g., by anisotropic etching) into the upper stack 35 and the lower stack 18, and in one embodiment are formed to (i.e., at least to) the conductively doped semiconductor material 16.
The lateral edges of the trenches 40 may be used, at least in part, to define the lateral edges of the word lines (e.g., access or control gate lines to be formed, and not shown in FIGS. 19 and 20), as described below.

Referring to FIG. 21, the upper-stack second material 26 and the lower-stack second material 26 (both not shown) of the word line layers 22 have been etched selectively relative to the insulating upper-stack first material 24 and the insulating lower-stack first material 24 (and, in one embodiment, selectively relative to the seed material 14). An example etch chemistry, where the second material 26 includes silicon nitride, the first material 24 includes silicon dioxide, and the seed material 14 includes polycrystalline silicon, is liquid-phase or vapor-phase etching using H3PO4 as a primary etchant.

Referring to FIG. 22, control gate material 48 (i.e., a conductive material) has been formed through the trenches 40 into the word line layers 22 so as to be vertically between the insulating upper-stack first materials 24 of the upper-stack alternating layers 20 and vertically between the insulating lower-stack first materials 24 of the lower-stack alternating layers 20. Any suitable conductive material may be used, for example one or both of metal material and/or conductively doped semiconductor material.

Referring to FIGS. 23, 24, and 24A, the control gate material 48 has been removed from the individual trenches 40. This results in the formation of word lines 29 and vertically extending strings 49 of individual transistors and/or memory cells 56. In one embodiment and as shown, the strings 49 are formed to extend vertically or within 10° of vertical. Approximate locations of the transistors and/or memory cells 56 are indicated by brackets in FIG. 24A, and some are indicated by dashed outlines in FIGS. 23 and 24, with the transistors and/or memory cells 56 being essentially ring-like or annular in the depicted example. The control gate material 48 has ends 50 corresponding to control gate regions 52 of the individual transistors and/or memory cells 56 (FIG. 24A). In the depicted embodiment, the control gate regions 52 comprise individual portions of individual word lines 29.

A charge blocking region (e.g., charge blocking material 30) is between the charge storage material 32 and the individual control gate regions 52. The charge blocking member may function as follows in a memory cell: in a programming mode, the charge blocking member may prevent charge carriers from passing out of the charge storage material (e.g., floating gate material, charge trapping material, etc.) toward the control gate, and in an erase mode, the charge blocking member may prevent charge carriers from flowing into the charge storage material from the control gate. Accordingly, the charge blocking member may serve to block charge migration between the control gate region and the charge storage material of individual memory cells. An example charge blocking region as shown comprises the insulator material 30. As a further example, the charge blocking region may comprise a laterally (e.g., radially) outer portion of the charge storage material (e.g., material 32), where such charge storage material is insulative (e.g., in the absence of any different-composition material between the insulative charge storage material 32 and the conductive material 48).
In any case, and as an additional example, the interface of the charge storage material and the conductive material of the control gate may be sufficient to function as a charge blocking region in the absence of any separate-composition insulator material 30. Furthermore, an interface of the conductive material 48 with the material 30 (when present), in conjunction with the insulator material 30, may together function as a charge blocking region, and alternately or additionally a laterally outer region of an insulative charge storage material (e.g., a silicon nitride material 32) may do so.

Referring to FIGS. 25 and 26, an insulating material liner 55 (e.g., silicon nitride, silicon oxynitride, aluminum oxide, hafnium oxide, combinations of these, etc.) has been formed in the individual trenches 40 on and vertically along their sidewalls. Another material 57 (dielectric and/or silicon-containing, such as polysilicon) has been formed in the individual trenches 40 vertically along and across the insulating material liner 55. Any other attributes or aspects as shown and/or described herein with respect to other embodiments may be used.

The embodiment described above shows, in the drawings, formation of the seed material 14 as having a different composition from either of the materials 24 and 26. Alternative example embodiments are shown and described with respect to a substrate construction 10a of FIGS. 27-32. The same reference numerals from the embodiments described above have been used where appropriate, with the suffix "a" indicating certain structural differences. FIG. 27 shows the uppermost example word line layer 22 as including, for example, silicon nitride, which may be used as a seed material for epitaxially growing, for example, SiGe, elemental-form Si, and other materials. FIGS. 28 and 29 show example subsequent processing analogous to that performed in the first-described embodiment of FIGS. 3 to 9. FIG. 30 shows subsequent processing analogous to that performed in FIGS. 10 to 11 in forming the upper stack 35. FIGS. 31 and 32 show subsequent processing analogous to that performed in FIGS. 12 to 26. Any other attributes or aspects as shown and/or described herein with respect to other embodiments may be used.

Alternatively, and by way of example only, where the seed material includes conductively doped polysilicon that remains as part of the finished circuitry construction, such polysilicon may be fabricated to comprise an operative gate to facilitate conduction through the channel material extending therebetween. For example, the polysilicon may be recessed laterally from the channel openings, before or after forming the upper stack and before or after penetrating the bridges, to ultimately serve as a control gate material. After penetrating the epitaxially grown bridges, charge blocking material and charge storage material may then be deposited into the lateral recesses and then removed from the channel openings. The charge transfer material and the channel material may then be deposited in both the upper and lower stacks. The operative polysilicon gate may then be "turned on" to facilitate conduction through the channel material between the upper-stack channel material above and the lower-stack channel material below.
Alternatively, and if needed, a conductive interposing material may be provided in the interconnected channel openings between the upper-stack and lower-stack channel materials to improve conduction through the channel material between the upper and lower stacks. Any other attributes or aspects as shown and/or described herein with respect to other embodiments may be used.

The above-described processing, as shown in the figures, may be regarded as a so-called "gate-last" or "replacement-gate" process (for example, in which the control gate material is provided after the transistor channel material is formed). Alternatively, and by way of example, a so-called "gate-first" process may be used (e.g., in which the control gate material is provided before the transistor channel material is formed). For example, the stacks of FIGS. 2 and 11 may initially be fabricated with the control gate material 48 (not shown) in place of the material 26. Any other attributes or aspects as shown and/or described herein with respect to other embodiments may be used.

In this document, unless otherwise indicated, "vertical", "higher", "upper", "lower", "top", "atop", "bottom", "above", "below", "under", "beneath", "up", and "down" generally refer to the vertical direction. "Horizontal" refers to a general direction (i.e., within 10 degrees) along the primary substrate surface relative to which the substrate may be processed during fabrication, and "vertical" is a direction generally orthogonal thereto. Reference to "exactly horizontal" is the direction along the primary substrate surface (i.e., at no degrees relative thereto) relative to which the substrate may be processed during fabrication. Furthermore, "vertical" and "horizontal" as used herein are generally perpendicular directions relative to one another, independent of the orientation of the substrate in three-dimensional space. Additionally, "vertically extending" and "extend(ing) vertically" refer to a direction that is angled away from exactly horizontal by at least 45°. Further, "vertically extending", "extend(ing) vertically", "horizontally extending", and "extend(ing) horizontally" with respect to a field effect transistor refer to the orientation of the transistor's channel length, along which current flows in operation between the source/drain regions. For bipolar junction transistors, "vertically extending", "extend(ing) vertically", "horizontally extending", and "extend(ing) horizontally" refer to the orientation of the base length, along which current flows in operation between the emitter and the collector.

Furthermore, "directly above" and "directly below" require that the two stated regions/materials/components have at least some lateral (i.e., horizontal) overlap with respect to one another. Moreover, use of "above" not preceded by "directly" only requires that some portion of the stated region/material/component that is above another be vertically outward of the other (i.e., independent of whether there is any horizontal overlap of the two stated regions/materials/components).
Similarly, use of "below" not preceded by "directly" only requires that some portion of the stated region/material/component that is below another be vertically inward of the other (i.e., independent of whether there is any horizontal overlap of the two stated regions/materials/components).

Any of the materials, regions, and structures described herein may be homogenous or non-homogenous, and in any case may be continuous or discontinuous over any material they overlie. Where one or more example compositions are provided for any material, that material may comprise, consist essentially of, or consist of such one or more compositions. Further, unless otherwise stated, each material may be formed using any suitable existing or future-developed technique, examples of which are atomic layer deposition, chemical vapor deposition, physical vapor deposition, epitaxial growth, diffusion doping, and ion implantation.

Additionally, "thickness" used by itself (with no preceding directional adjective) is defined as the mean straight-line distance through a given material or region perpendicularly from the closest surface of an immediately adjacent material of different composition or of an immediately adjacent region. Additionally, the various materials or regions described herein may be of substantially constant thickness or of variable thickness. If of variable thickness, "thickness" refers to the average thickness unless otherwise indicated, and such material or region will have some minimum thickness and some maximum thickness due to the thickness being variable. As used herein, "different composition" only requires that those portions of two stated materials or regions that may be directly against one another be chemically and/or physically different, for example if such materials or regions are not homogenous. If the two stated materials or regions are not directly against one another, "different composition" only requires that those portions of the two stated materials or regions that are closest to one another be chemically and/or physically different, if such materials or regions are not homogenous. In this document, a stated material, region, or structure "directly abuts" another when the stated materials, regions, or structures are in at least some physical touching contact with respect to one another. In contrast, "over", "on", "adjacent", "along", and "abutting" not preceded by "directly" encompass "directly abuts" as well as constructions in which intervening materials, regions, or structures result in the stated materials, regions, or structures not being in physical touching contact with one another.

Herein, regions/materials/components are "electrically coupled" relative to one another if, in normal operation, electric current is capable of continuously flowing from one to the other, and does so predominately by the movement of subatomic positive and/or negative charges when such are sufficiently generated. Another electronic component may be between, and electrically coupled to, such regions/materials/components.
In contrast, when regions/materials/components are referred to as being "directly electrically coupled", no intervening electronic components (e.g., no diodes, transistors, resistors, transducers, switches, fuses, etc.) are between the directly electrically coupled regions/materials/components.

Additionally, "metal material" is any one or combination of an elemental metal, a mixture or an alloy of two or more elemental metals, and any conductive metal compound.

Herein, "selective" with regard to etching/removing and/or forming means that the stated action is applied to one stated material relative to another stated material at a ratio of at least 2:1 by volume.

Unless otherwise indicated, use of "or" herein encompasses either and both.

CONCLUSION

In some embodiments, a method for forming an array of vertically extending memory cell strings includes forming a lower stack comprising vertically alternating insulating layers and word line layers. Lower channel openings are in the lower stack. Bridges covering individual ones of the lower channel openings are epitaxially grown. Lower void spaces are below individual ones of the bridges in the individual lower channel openings. An upper stack is formed above the lower stack. The upper stack comprises vertically alternating insulating layers and word line layers. Upper channel openings are formed in the upper stack to the individual bridges to individually form interconnected channel openings that individually comprise one of the individual lower channel openings and one of the individual upper channel openings. The interconnected channel openings individually have one of the individual bridges spanning thereacross. The individual bridges are penetrated to expose individual ones of the lower void spaces. Transistor channel material is formed vertically in the upper portions of the interconnected channel openings along the vertically alternating layers in the upper stack.

In some embodiments, a method for forming an array of vertically extending memory cell strings includes forming a lower stack comprising vertically alternating insulating layers and word line layers. Lower channel openings are in the lower stack. Lower-stack memory cell material is formed across the bases of individual ones of the lower channel openings and along the sidewalls of the individual lower channel openings. A portion of the lower-stack memory cell material across individual ones of the bases in the individual lower channel openings is removed. Bridges covering individual ones of the lower channel openings are epitaxially grown. Lower void spaces are below individual ones of the bridges in the individual lower channel openings. An upper stack is formed above the lower stack. The upper stack comprises vertically alternating insulating layers and word line layers. Upper channel openings are formed in the upper stack to the individual bridges to individually form interconnected channel openings that individually comprise one of the individual lower channel openings and one of the individual upper channel openings. The interconnected channel openings individually have one of the individual bridges spanning thereacross. Upper-stack memory cell material is formed across the bases of the individual upper channel openings and along the sidewalls of the individual upper channel openings. A portion of the upper-stack memory cell material across the individual bases in the individual upper channel openings is removed.
The individual bridges are penetrated to expose individual ones of the lower void spaces. Transistor channel material is formed vertically in the upper portions of the interconnected channel openings along the vertically alternating layers in the upper stack.

In some embodiments, a method of forming an array of vertical memory cell strings includes forming a lower stack comprising vertically alternating insulating layers and word line layers. Lower channel openings are in the lower stack. Bridges covering individual ones of the lower channel openings are epitaxially grown. Lower void spaces are below individual ones of the bridges in the individual lower channel openings. An upper stack is formed above the lower stack. The upper stack comprises vertically alternating insulating layers and word line layers. Upper channel openings are formed in the upper stack to the individual bridges to individually form interconnected channel openings that individually comprise one of the individual lower channel openings and one of the individual upper channel openings. The interconnected channel openings individually have one of the individual bridges spanning thereacross. The individual bridges are penetrated to expose individual ones of the lower void spaces. Transistor channel material is formed vertically in individual ones of the interconnected channel openings along the vertically alternating layers in the upper and lower stacks. The word line layers are formed to comprise control gate material having ends corresponding to control gate regions of individual memory cells. Charge storage material is between the transistor channel material and the control gate regions. Insulating charge transfer material is between the transistor channel material and the charge storage material. A charge blocking region is between the charge storage material and the individual control gate regions.

In some embodiments, a method of forming an array of vertically extending memory cell strings includes forming a lower stack comprising vertically alternating insulating layers and word line layers, the lower-stack insulating layers comprising an insulating lower-stack first material. The lower-stack word line layers comprise a lower-stack second material having a different composition from the lower-stack first material. Lower channel openings are in the lower stack. At least one of (a) lower-stack charge blocking material or (b) lower-stack charge storage material is formed in individual ones of the lower channel openings and across the bases of the individual lower channel openings. A portion of the at least one of (a) and (b) across the individual bases in the individual lower channel openings is removed. Bridges covering the individual lower channel openings are epitaxially grown. Lower void spaces are below individual ones of the bridges in the individual lower channel openings. An upper stack is formed above the lower stack. The upper stack comprises vertically alternating insulating layers and word line layers. The upper-stack insulating layers comprise an insulating upper-stack first material.
The upper-stack word line layers comprise an upper-stack second material having a different composition from the upper-stack first material. Upper channel openings are formed in the upper stack to the individual bridges to individually form interconnected channel openings that individually comprise one of the individual lower channel openings and one of the individual upper channel openings. The interconnected channel openings individually have one of the individual bridges spanning thereacross. At least one of (c) upper-stack charge blocking material or (d) upper-stack charge storage material is formed in individual ones of the upper channel openings and across the bases of the individual upper channel openings. A portion of the at least one of (c) and (d) across the individual bases in the individual upper channel openings is removed. The individual bridges are penetrated to expose individual ones of the lower void spaces. Transistor channel material is formed vertically in the upper portions of the interconnected channel openings along the vertically alternating layers in the upper stack. Horizontally extending trenches are formed in the upper and lower stacks. The upper-stack second material and the lower-stack second material of the word line layers are etched selectively relative to the insulating upper-stack first material and the insulating lower-stack first material. Control gate material is formed through the trenches into the word line layers so as to be vertically between the insulating upper-stack first materials of the upper-stack alternating layers and vertically between the insulating lower-stack first materials of the lower-stack alternating layers. The control gate material has ends corresponding to control gate regions of individual memory cells. The control gate material is removed from the individual trenches. The word line layers are formed to comprise charge storage material between the transistor channel material and the control gate regions, insulating charge transfer material between the transistor channel material and the charge storage material, and a charge blocking region between the charge storage material and the individual control gate regions.

In compliance with the statute, the subject matter disclosed herein has been described in language more or less specific as to structural and methodical features. It is to be understood, however, that the claims are not limited to the specific features shown and described, since the means herein disclosed comprise example embodiments. The claims are thus to be afforded full scope as literally worded, and to be appropriately interpreted in accordance with the doctrine of equivalents. |
Systems, apparatuses, and methods related to organizing data to correspond to a matrix at a memory device are described. Data can be organized by circuitry coupled to an array of memory cells prior to processing resources executing instructions on the data. The organization of data may thus occur on the memory device, rather than at an external processor. A controller coupled to the array of memory cells may direct the circuitry to organize the data in a matrix configuration to prepare the data for processing by the processing resources. The circuitry may be or include column decode circuitry that organizes the data based on a command from the host associated with the processing resource. For example, data read in a prefetch operation may be selected to correspond to rows or columns of a matrix configuration. |
1. An apparatus, comprising: an array of memory cells; and a controller coupled to the array of memory cells, wherein the controller is configured to direct circuitry to: transfer data from the array of memory cells to a number of sense amplifiers; and organize the data to correspond to a portion of a matrix configuration by selecting a portion of the data to be transferred from the number of sense amplifiers to an input/output (I/O) component of the apparatus.

2. The apparatus of claim 1, wherein the matrix configuration has a particular size defined by a product of a first number of rows and a second number of columns.

3. The apparatus of claim 2, wherein the controller is configured to: organize the data to correspond to a row of the matrix configuration, wherein the row is one of the first number of rows; and organize the data to correspond to a column of the matrix configuration, wherein the column is one of the second number of columns.

4. The apparatus of claim 2, further comprising a mode register coupled to the controller and configurable by the controller to indicate the particular size, and wherein the first number of rows is different from the second number of columns.

5. The apparatus of any one of claims 1 to 4, wherein the matrix configuration includes a first sub-matrix having a first size and a second sub-matrix having a second size.

6. The apparatus of claim 5, wherein the first size of the first sub-matrix and the second size of the second sub-matrix are selected by a host associated with the apparatus to prepare the data for processing by a processing resource of the host.

7. An apparatus, comprising: an array of memory cells; and a controller coupled to the array of memory cells and configured to direct circuitry to transfer data from the array of memory cells to a plurality of sense amplifiers and to select portions of the data to be transferred from the plurality of sense amplifiers to an input/output (I/O) component so as to organize the data to correspond to a set of matrices, wherein the set of matrices includes: a first sub-matrix of a first size; and a second sub-matrix of a second size.

8. The apparatus of claim 7, wherein the first size of the first sub-matrix is equal to the second size of the second sub-matrix.

9. The apparatus of claim 7, wherein the set of matrices further includes a third sub-matrix of the first size and a fourth sub-matrix of the second size.
10. The apparatus of claim 7, wherein the set of matrices further includes a third sub-matrix of a third size, the third size being different from the first size and different from the second size.

11. An apparatus, comprising: an array of memory cells; and a controller coupled to the array of memory cells, wherein the controller directs circuitry to: transform data corresponding to a matrix configuration into a linear configuration by selecting portions of the data to be transferred from an input/output (I/O) component to a number of sense amplifiers.

12. The apparatus of claim 11, wherein the matrix configuration includes a first number of rows and a second number of columns.

13. The apparatus of claim 12, wherein the controller further directs the circuitry to perform a write operation of the data on the array of memory cells, and wherein the write operation of the data includes writing bits corresponding to at least one of: a first row of the first number of rows of the matrix configuration; or a first column of the second number of columns.

14. A method, comprising: transferring data from a memory array to a number of sense amplifiers; organizing the data to correspond to a matrix configuration by selecting a portion of the data in the sense amplifiers; and sending the selected data to an input/output (I/O) component coupled to the number of sense amplifiers.

15. The method of claim 14, wherein selecting the portion of the data to correspond to the matrix configuration is defined by a product of a first number of rows of the matrix configuration and a second number of columns of the matrix configuration.

16. The method of claim 14, wherein organizing the data includes organizing the data to correspond to the matrix configuration based on a command from a host associated with the array of memory cells.

17. The method of any one of claims 14 to 16, wherein selecting the portion of the data comprises selecting data corresponding to a diagonal of the matrix configuration, the diagonal crossing the first number of rows and the second number of columns.

18. A method, comprising: receiving data at a memory device from a host associated with the memory device; transforming the data, via column decode circuitry, from corresponding to a matrix configuration to corresponding to a linear configuration by selecting a number of sense amplifiers to store particular bits of the data; and performing a write operation to write the data in the linear configuration to an array of memory cells in the memory device.

19. The method of claim 18, wherein transforming the data from corresponding to the matrix configuration to corresponding to the linear configuration comprises selecting a first number of data bits to be stored in a first number of sense amplifiers.

20. The method of claim 19, wherein selecting the first number of data bits to be stored in the first number of sense amplifiers includes at least one of: selecting a number of bits corresponding to a row of the matrix configuration; or selecting a number of bits corresponding to a column of the matrix configuration. |
Apparatus and Method for Organizing Data in a Memory Device

TECHNICAL FIELD

The present disclosure relates generally to semiconductor memories and methods, and more specifically to apparatuses and methods for organizing prefetched data in a memory device.

BACKGROUND

Memory devices are usually provided as internal semiconductor integrated circuits in computers or other electronic systems. There are many different types of memory, including volatile and non-volatile memory. Volatile memory may require power to maintain its data (for example, host data, error data, etc.), and includes random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), synchronous dynamic random access memory (SDRAM), and thyristor random access memory (TRAM), among others. Non-volatile memory can provide persistent data by retaining stored data when power is not supplied, and can include NAND flash memory, NOR flash memory, and variable-resistance memory, such as phase change random access memory (PCRAM), resistive random access memory (RRAM), and magnetoresistive random access memory (MRAM), for example spin-transfer-torque random access memory (STT RAM), among others.

Electronic systems often contain several processing resources (for example, one or more processors) that can retrieve and execute instructions and store the results of the executed instructions in appropriate locations. A processor may include several functional units, such as arithmetic logic unit (ALU) circuitry, floating-point unit (FPU) circuitry, and combinational logic blocks. For example, the functional units can be used to execute instructions by performing logical operations, such as AND, OR, NOT, NAND, NOR, and XOR, and inverse (for example, inversion) logical operations on data (for example, one or more operands). For example, the functional unit circuitry can be used to perform arithmetic operations on operands, such as addition, subtraction, multiplication, and division. Memory devices that lack logic for organizing data may add latency, or may fail to improve the latency issues associated with such arithmetic or matrix operations.

DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of a device in the form of a computing system including a memory device according to several embodiments of the present disclosure.

FIG. 2 is a block diagram of a memory cell array of a memory device and a controller of the memory device according to several embodiments of the present disclosure.

FIG. 3 is a schematic diagram showing rows of a memory cell array according to several embodiments of the present disclosure.
FIG. 4 is a flowchart for organizing data into a matrix (for example, a matrix data structure) according to the present disclosure.

FIGS. 5A and 5B are schematic diagrams showing an example of organizing data into a matrix according to several embodiments of the present disclosure.

FIGS. 6A and 6B are schematic diagrams showing other examples of organizing data into matrices according to several embodiments of the present disclosure.

FIG. 7 is a flowchart for transforming data from a matrix configuration to a linear configuration according to several embodiments of the present disclosure.

FIGS. 8A to 8C are schematic diagrams showing examples of transforming data from a matrix configuration to a linear configuration according to several embodiments of the present disclosure.

DETAILED DESCRIPTION

The present disclosure includes systems, devices, and methods associated with organizing data in a matrix format on a memory device. In several embodiments, an apparatus includes an array of memory cells and a controller coupled to the array of memory cells. The apparatus may further include the controller directing circuitry to transfer data from the memory cell array to a number of sense amplifiers, and to select at least a portion of the data to be transferred from the number of sense amplifiers to an input/output (I/O) component, which can include DQs and buffers. The transfer of the data portion from the number of sense amplifiers to the I/O component may be at least part of a prefetch operation. The apparatus may further include the controller directing the circuitry to organize the data transferred in the prefetch operation to correspond to a matrix configuration.

Several components in a computing system may be involved in providing instructions to functional unit circuitry for execution. For example, the instructions can be executed by processing resources such as a controller and/or a host processor. Data (for example, the operands on which the instructions will be executed) can be stored in a memory cell array accessible by the functional unit circuitry. In many instances, the processing resources (e.g., a processor and/or associated functional unit circuitry) are external to the memory cell array, and data is accessed via a bus between the processing resources and the memory cell array to execute a set of instructions.

In some instances, the processing resource reads data in the order in which the data is stored in the memory cell array. Accessing data in this manner can reduce the throughput (e.g., rate and/or efficiency) from the memory cell array to the processing resource, because the processing resource may need to reorder, organize, or otherwise manipulate the data before instructions can be executed on it. The reduced throughput of processing resources can reduce the overall performance of the computing system.

In several embodiments of the present disclosure, data may be organized by circuitry coupled to the memory cell array before the processing resource executes instructions on the data. Data organization can therefore be performed on the memory device instead of on an external processor. In some examples, a controller coupled to the memory cell array directs the circuitry to organize the data in a matrix configuration to prepare the data for processing by the processing resource. In some embodiments, the circuitry may be column decode circuitry, which may include a multiplexer, that organizes data based on commands from the host associated with the processing resource.
For example, data transferred from the sense amplifiers to the input/output components may be sent from the memory device in an order corresponding to consecutive rows of the matrix configuration used by the host. Alternatively, the data from the memory cell array may be organized by the column decode circuitry to correspond to consecutive columns of the matrix configuration. In several embodiments, the spatial characteristics of the matrix configuration, such as the matrix size, the number of matrices per prefetch operation, etc., may vary based on commands from the host, which may depend on the current requirements of the computing system.

Several embodiments of the present disclosure further include data transformed by the column decode circuitry to reorder the data from a matrix configuration to a linear configuration to prepare the data for writing to the memory cell array. For example, data can be received by the memory device, and the column decode circuitry can transform the data by rearranging the order in which the data bits are written to the sense amplifiers. The data can be received such that its bits correspond to columns of the matrix configuration, and the column decode circuitry can transform the data so that the data bits corresponding to a column of the matrix are not stored adjacent to one another in the sense amplifiers; rather, the data bits corresponding to a column are separated by, for example, a number of sense amplifiers one less than the number of bits in a row of the matrix. The memory device may receive data corresponding to the next column of the matrix, and that data may be organized by the column decode circuitry to be stored in sense amplifiers adjacent to the sense amplifiers storing the previous column.

Performing prefetch operations (for example, the portion of a read operation that transfers data from the sense amplifiers to an input/output component) and/or write operations on the memory cell array in the manner described herein can reduce the number of processing steps normally performed by processing resources. Therefore, several embodiments of the present disclosure may provide various benefits, including improved throughput (e.g., increased speed, rate, and/or efficiency).

The drawings herein follow a numbering convention in which the first digit or digits of a reference number correspond to the drawing number, and the remaining digits identify an element or component in the drawing. Similar numbers may be used to identify similar elements or components between different figures. For example, 130 in FIG. 1 may be referred to as element "30", and a similar element in FIG. 2 may be referred to as 230.

FIG. 1 is a block diagram of a device in the form of a computing system 100 including a memory device 120 according to several embodiments of the present disclosure. The system 100 can be a laptop computer, a tablet computer, a personal computer, a digital camera, a digital recording and playback device, a mobile phone, a personal digital assistant (PDA), a memory card reader, an interface hub, a sensor, an automatic or semi-automatic motor vehicle, an automatic or semi-automatic manufacturing robot, an Internet of Things (IoT) enabled device, or another system.

In several embodiments, reading and/or writing data and the associated commands may utilize the data paths and timing of a DRAM device based on pre-existing protocols (e.g., DDR3, DDR4, LPDDR, etc.).
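By way of illustration only, the row and column reorderings described above reduce to simple index arithmetic over a flat prefetch. The following Python sketch models that arithmetic under assumptions not taken from the disclosure: the sense amplifiers are represented as a flat list in linear order, and the function names are hypothetical.

# Illustrative model of the reordering described above; the flat-list
# representation of the sense amplifiers and all names are assumptions.

def organize_as_rows(prefetch_bits, rows, cols):
    """Group a flat prefetch into consecutive rows of a rows x cols matrix."""
    assert len(prefetch_bits) == rows * cols
    return [prefetch_bits[r * cols:(r + 1) * cols] for r in range(rows)]

def organize_as_columns(prefetch_bits, rows, cols):
    """Select bits so the output corresponds to consecutive columns; bits of
    one column sit a full row length (cols positions) apart in the prefetch."""
    assert len(prefetch_bits) == rows * cols
    return [prefetch_bits[c::cols] for c in range(cols)]

# Example: a 64-bit prefetch treated as an 8 x 8 matrix.
flat = list(range(64))
by_rows = organize_as_rows(flat, 8, 8)     # by_rows[0] == [0, 1, ..., 7]
by_cols = organize_as_columns(flat, 8, 8)  # by_cols[0] == [0, 8, ..., 56]

The column case makes the stride explicit: bits belonging to one column are separated by a number of positions one less than the number of bits in a row, which is the same spacing the write-path transform described above reproduces in the sense amplifiers.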
As used herein, data movement is an inclusive term that includes, for example, copying, transferring, and/or transporting data values from a source location to a destination location, such as from a memory cell array to a processing resource or vice versa. As the reader will understand, although DRAM-type memory devices are discussed with respect to the examples presented herein, the embodiments are not limited to DRAM implementations.

In several embodiments, a row (e.g., as shown at 219 in FIG. 2 and at corresponding reference numbers elsewhere herein) of the virtual address space in the memory device (e.g., as shown at 120 in FIG. 1) may have a bit length of 16K bits (for example, corresponding to 16,384 memory cells or complementary pairs of memory cells in a DRAM configuration). The read/latch circuitry for such a 16K-bit row (for example, as shown at 150 in FIG. 1 and at corresponding reference numbers elsewhere herein) may include a corresponding 16K sense amplifiers (for example, as shown at 306 in FIG. 3 and at corresponding reference numbers elsewhere herein) and associated circuitry formed on pitch with the sense lines that are selectively coupled to the corresponding memory cells in the 16K-bit row. A sense amplifier in the memory device is operable as a cache for a single data value (bit) from the row of memory cells sensed by the read/latch circuitry 150. More generally, several embodiments of the present disclosure include read/latch circuitry 150 (e.g., sense amplifiers 306 and associated circuitry) that can be formed on pitch with the sense lines of the memory cell array. The read/latch circuitry and other data storage components described herein can perform data sensing and/or storage (e.g., caching, latching, buffering, etc.) of data local to the memory cell array.

To understand the improved data movement techniques based on organizing data in a matrix, apparatuses for implementing such techniques (for example, the memory device 120 and the associated host 110 with these capabilities) are discussed below.

As shown in FIG. 1, the system 100 may include a host 110 coupled (e.g., connected) to a memory device 120. The memory device 120 includes a memory cell array 130 and a controller 140, as well as various other circuitry for organizing data in a matrix configuration and transforming data from a matrix configuration to a linear configuration, as shown and described herein. The host 110 may be responsible for executing an operating system (OS) and/or various application programs that may be loaded thereto (e.g., loaded from the memory device 120 via the controller 140). The host 110 may include a system motherboard and a backplane, and may include several processing resources (e.g., the processor 160, a microprocessor, or some other type of control circuitry) capable of accessing the memory device 120 (e.g., via the controller 140) to perform operations on data values organized in a matrix configuration. In several embodiments, the controller 140 may also include several processing resources for performing processing operations.

As further shown in FIG. 1, the controller 140 may include or may be coupled to a mode register 141. The mode register 141 may be directed by the controller 140 to be set to a particular setting corresponding to the size of the matrix configuration. For example, the particular setting of the mode register may correspond to the dimensions of the matrix, such as M×N.
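By way of illustration only, a mode-register setting corresponding to an M×N dimension could be modeled as a packed bit field. The 8-bit-per-dimension layout below is purely an assumption made for the sketch; the disclosure does not define a register format.

# Hypothetical packing of matrix dimensions into one mode-register value;
# the 8-bit fields are assumed for illustration only.

def encode_matrix_mode(m_rows: int, n_cols: int) -> int:
    """Pack M (rows) and N (columns) into a single register value."""
    assert 0 < m_rows < 256 and 0 < n_cols < 256
    return (m_rows << 8) | n_cols

def decode_matrix_mode(mode_value: int) -> tuple:
    """Unpack a register value back into (M, N)."""
    return ((mode_value >> 8) & 0xFF, mode_value & 0xFF)

mode = encode_matrix_mode(4, 16)            # e.g., a 4 x 16 configuration
assert decode_matrix_mode(mode) == (4, 16)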
It should be noted that the data transferred from the memory cell array 130 to a number of sense amplifiers can be organized into consecutive matrices of a certain size across a number of prefetch operations, as described further below in conjunction with FIG. 4. The system 100 may include separate integrated circuits, or both the host 110 and the memory device 120 may be on the same integrated circuit. The system 100 may be, for example, a server system and/or a high-performance computing (HPC) system, or a portion thereof. Although the example shown in FIG. 1 shows a system with a von Neumann architecture, embodiments of the present disclosure may be implemented in a non-von Neumann architecture, which may not include one or more components (e.g., CPU, ALU, etc.) commonly associated with the von Neumann architecture.

The controller 140 (e.g., control logic and a sequencer) may include control circuitry in the form of hardware, firmware, or software, or a combination thereof. As an example, the controller 140 may include a state machine, a sequencer, and/or some other type of control circuitry, which may be implemented in the form of an application-specific integrated circuit (ASIC) coupled to a printed circuit board. In several embodiments, the controller 140 may be co-located with the host 110 (e.g., in a system-on-chip (SOC) configuration).

For the sake of clarity, the description of the system 100 has been simplified to focus on features that are particularly relevant to the present disclosure. For example, the memory cell array 130 may be a DRAM array, an SRAM array, an STT RAM array, a PCRAM array, a TRAM array, an RRAM array, a FeRAM array, a phase change memory cell array, a 3D XPoint™ array, a NAND flash memory array, and/or a NOR flash memory array. The memory cell array 130 may include memory cells arranged in rows (e.g., in multiple sub-arrays) and columns. The memory cells may be coupled to one another through access lines (which may be referred to herein as word lines or select lines) to form rows. In addition, the memory cells may be coupled to one another through sense lines (which may be referred to herein as data lines or digit lines) to form columns. Although a single memory cell array 130 is shown in FIG. 1, the embodiments are not so limited. For example, the memory device 120 may represent a plurality of memory cell arrays 130 (e.g., memory cell arrays included in a number of banks of DRAM cells, NAND flash cells, etc.), in addition to a plurality of sub-arrays, as described herein. Accordingly, descriptions in this disclosure may be made with respect to a DRAM architecture by way of example and/or for clarity. However, unless expressly stated otherwise, the scope of the present disclosure and claims is not limited to the DRAM architecture.

As further shown in FIG. 1, the memory device 120 may include address circuitry 142 to latch address signals provided over the data bus 156 (for example, an I/O bus from the host 110) by the I/O circuitry 144 contained within the memory device 120 (e.g., provided to external devices, such as ALU circuitry and DRAM DQs, via local I/O lines and global I/O lines). As further shown in FIG. 1, the host 110 may include a channel controller 143. Status and exception information may be provided from the controller 140 of the memory device 120 to the channel controller 143 over the control bus 154, for example, and this information may in turn be provided from the channel controller 143 to the host 110.
Address signals can be received by the address circuitry 142 (for example, from the channel controller 143 or another host component) and can be decoded by the row decoder 146 and/or the column decoder 152 to access the memory cell array 130.

Data can be sensed (read) from the memory cell array 130 by sensing voltage and/or current changes on the sense lines (digit lines) using sense amplifiers (for example, shown as the read/latch circuitry 150 in FIG. 1). The data can be sensed from the memory cell array 130 in 256-bit, 128-bit, 64-bit, and other possible prefetch sizes. As described herein, the read/latch circuitry 150 may include a number of sense amplifiers to read and latch a page of data (e.g., a row or a portion of a row) from the memory cell array 130. The input/output (I/O) circuitry 144 may include data I/O pins for bidirectional data communication with the host 110 over the data bus 156 (for example, a 64-bit-wide data bus, a 128-bit-wide data bus, a 256-bit-wide data bus, etc.). The memory device 120 may further include write circuitry 148 that can be used to write data to the memory cell array 130.

The controller 140 may decode signals (e.g., commands) provided over the control bus 154 from the host 110. The controller 140 may be configured to receive a command from the host 110 regarding organizing data read from the memory cell array 130 into a matrix configuration. For example, the controller 140 may receive a command to organize data into consecutive matrices of a certain size. The controller 140 may control operations by issuing signals determined from decoded commands from the host 110. These signals may include chip enable signals, write enable signals, and address signals (for example, sub-array address signals, row address signals, and/or latch address signals) that can be used to control the operations performed on the memory cell array 130. Such operations include data sensing, data storage, sub-array addressing, row addressing, latch addressing, data movement, data write, and data erase operations, among others. In various embodiments, the controller 140 may be responsible for executing instructions from the host 110 and accessing the memory cell array 130 for prefetch operations or write operations.

As further shown in FIG. 1, the memory device 120 includes the column decode circuitry/multiplexer 152. The controller 140 can direct circuitry such as the read/latch circuitry 150 to transfer data values from the memory cell array 130. In several embodiments, the controller 140 may direct the column decode circuitry 152 to organize the data transferred in a prefetch operation so that the data is sent from the memory device 120 in a matrix configuration (for example, a prefetch operation sends data corresponding to a portion (e.g., a row or a column) of the matrix configuration). Additionally or alternatively, the controller may direct the column decode circuitry 152 to transform data received by the I/O circuitry 144 from a matrix configuration to a linear configuration. The I/O circuitry 144 can receive the data from the host 110 via the data bus 156. The transformation to a linear configuration prepares the data for writing to the memory cell array 130 by the write circuitry 148.
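By way of illustration only, the write-path transform just described can be modeled as placing each arriving column into row-major sense-amplifier positions. The sketch below assumes the host sends the matrix one column at a time; the names and the list standing in for the sense amplifiers are hypothetical.

# Illustrative model of the matrix-to-linear transform on the write path;
# names and the list standing in for sense amplifiers are assumptions.

def place_column_bits(column_index, column_bits, rows, cols, sense_amps):
    """Store one matrix column into row-major (linear) positions: bits of a
    column land cols positions apart, i.e., separated by cols - 1 amplifiers."""
    assert len(column_bits) == rows
    for r, bit in enumerate(column_bits):
        sense_amps[r * cols + column_index] = bit

rows, cols = 4, 8
amps = [None] * (rows * cols)                   # stand-in for 32 sense amplifiers
for c in range(cols):
    column = [f"b{r}{c}" for r in range(rows)]  # bits of column c, top to bottom
    place_column_bits(c, column, rows, cols, amps)
# amps now holds the matrix in linear (row-major) order, ready for a write
# operation to the memory cell array.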
FIG. 2 is a block diagram of a memory cell array 230 of a memory device and a controller 240 of the memory device according to several embodiments of the present disclosure. The architecture of the memory cell array 230 may include a plurality of columns (for example, the "X" columns 222 as shown in FIG. 2). In addition, the array 230 can be divided into a plurality of sub-arrays 225-0 (sub-array 0), 225-1 (sub-array 1), ..., 225-N-1 (sub-array N-1), which may be separated by amplification regions that include corresponding groups (e.g., sets) of sense amplifiers. A group of sense amplifiers may be referred to as a sense amplifier stripe or a read/latch stripe. For example, as shown in FIG. 2, each sub-array 225-0, 225-1, ..., 225-N-1 has an associated read/latch stripe (e.g., 224-0, 224-1, ..., 224-N-1, respectively).

The memory cell array 230 may include 64 sub-arrays, 128 sub-arrays, 256 sub-arrays, 512 sub-arrays, or various other possible numbers of sub-arrays. However, the embodiments are not so limited, and some embodiments of the memory cell array may have a different number of sub-arrays than just presented. In many embodiments, the sub-arrays 225 may have the same number of rows in each sub-array (e.g., 256 rows, 512 rows, 1024 rows, 2048 rows, or various other possible numbers of rows). However, the embodiments are not so limited, and at least some of the plurality of sub-arrays within the memory cell array 230 may have different numbers of rows.

Each column 222 is configured to be coupled to read/latch circuitry (e.g., the read/latch circuitry 150 as described in connection with FIG. 1 and elsewhere herein). Therefore, each column in a sub-array can be individually coupled to a sense amplifier that contributes to the set of sense amplifiers (e.g., the read/latch stripe) for that sub-array. For example, as shown in FIG. 2, the memory cell array 230 may include read/latch stripe 0, read/latch stripe 1, ..., read/latch stripe N-1, shown as 224-0, 224-1, ..., 224-N-1, each of which has read/latch circuitry with a set of sense amplifiers that can operate as registers, caches, and data buffers. A sense amplifier (e.g., as shown at 306 and described in conjunction with FIG. 3) may be coupled to each column 222 in the sub-arrays 225-0, 225-1, ..., 225-N-1. Each of the sub-arrays 225-0, 225-1, ..., 225-N-1 may contain a corresponding plurality of rows (e.g., a corresponding set of "Y" rows 219). Each read/latch stripe 224-0, 224-1, ..., 224-N-1 can be coupled to column decode circuitry/multiplexer (for example, the column decode circuitry/multiplexer 152 in FIG. 1), which in turn can be coupled to I/O components (for example, the I/O circuitry 144 in FIG. 1 and the I/O component 344 in FIG. 3) to send data from the read/latch stripes to devices coupled to the memory cell array 230.

FIG. 2 also shows an example including 1T1C memory cells in a folded DRAM configuration, each of which is coupled to a sense amplifier 206. However, the embodiments are not so limited, and some embodiments may have memory cells in a 2T2C DRAM configuration.
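By way of illustration only, when every sub-array has the same number of rows, locating the sub-array (and thus the read/latch stripe) that services a given global row is a single division. The constant below is an assumed value; as noted above, sub-arrays may also have unequal row counts, in which case a lookup table would be needed instead.

# Illustrative arithmetic only; ROWS_PER_SUBARRAY is an assumed value.

ROWS_PER_SUBARRAY = 512  # 256, 1024, 2048, etc. are equally possible

def locate_row(global_row: int) -> tuple:
    """Return (sub-array index, row index within that sub-array)."""
    return divmod(global_row, ROWS_PER_SUBARRAY)

subarray, local_row = locate_row(1300)  # -> (2, 276): row 1300 falls in
                                        # sub-array 225-2 / stripe 224-2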
In the figure, the plurality of digit lines 305-0, ..., 305-X-1 are referred to as digit line 0, ..., digit line X-1. The number X corresponds to the number of columns (for example, the number of columns 222 shown in FIG. 2). As further shown in FIG. 3, the memory cells 308-0, ..., 308-X-1 may each be connected to an associated read/latch circuit 350-0, ..., 350-X-1, respectively. Each of the read/latch circuits 350-0, ..., 350-X-1 includes a corresponding sense amplifier 306-0, ..., 306-X-1. The sense amplifiers 306-0, ..., 306-X-1 are referred to in the figure as sense amplifier 0, ..., sense amplifier X-1. As shown in the figure, a sense amplifier associated with a memory cell is disposed between the memory cell and the column decoding circuit 352. The sense amplifier can be operated to determine the data value (e.g., logic state) stored in a selected memory cell. The sense amplifier 306 may include a cross-coupled latch (not shown). The sense amplifier 306 may be coupled to an equilibration circuit (not shown), which may be configured to equilibrate the digit lines 305-1 and 305-2. Each of the plurality of memory cells may include a transistor used as an access element and a capacitor used as a storage element. The number of data values (e.g., voltages) sensed from the memory cells (e.g., in a read operation) may correspond to the number of columns of memory cells that intersect a row of the sub-array (e.g., row 319-1 of FIG. 3). For example, the plurality of memory cells 308-0, ..., 308-X-1 can store a total of X data values. As further shown, the portion of the sub-array 325 shown in FIG. 3 is connected to the column decoding circuit 352. Specifically, as shown in the figure, each memory cell is connected to the column decoding circuit via the digit line associated with the memory cell and via the sense amplifier connected to that digit line. The column decoding circuit 352 is in turn connected to the input/output component 344. The architecture shown in FIG. 3 allows the column decoding circuit 352 to read the data stored in each memory cell and organize the data independently of reading the data stored in other memory cells. The controller (e.g., the controller 140 in FIG. 1) may be configured to receive (e.g., from the host 110) an encoded instruction to perform a data movement operation (e.g., a read, write, or erase operation on a selected row of a sub-array of the memory cell array) and/or a compute operation on the data values stored in the memory cells of the selected row (for example, logical operations such as Boolean operations, and other logical operations performed by a processor such as the processor 160 in FIG. 1). For example, the controller may be configured to receive operation commands that include requests to perform DRAM operations (e.g., DRAM read and/or write operations). The controller may be further configured to sort or organize the data values to correspond to rows in the matrix configuration when transferring data between the sense amplifiers and I/O components (e.g., I/O circuit 144 in FIG. 1). The controller may instruct the column decoding circuit to organize the data values into rows corresponding to the matrix configuration. Therefore, the sense amplifiers described herein are configured to support the execution of memory operations and/or compute operations in conjunction with selected rows. FIG. 4 is a flowchart of a method 464 for organizing data into a matrix (for example, a matrix data structure) according to the present disclosure.
Unless explicitly stated, the elements of the methods described herein are not limited to a specific order or sequence. In addition, several method embodiments, or elements thereof, described herein may be performed at the same, or substantially the same, point in time. As shown in FIG. 4, at block 465, method 464 may include receiving a command from a host associated with the memory device. For example, a controller (e.g., controller 140 of FIG. 1) may receive commands from a host (e.g., host 110 of FIG. 1) associated with a memory device (e.g., memory device 120 of FIG. 1). Referring again to FIG. 4, the command or request from the host may contain information about the characteristics of the matrix configuration into which the data read from the memory device is organized, as detailed below in conjunction with block 468. For example, the controller may receive a command specifying the particular size of the matrix configuration, the relationship between the number of rows of the matrix configuration and the number of columns of the matrix configuration, the number of matrices generated by a single data organization operation (for example, a prefetch operation), the sizes of multiple matrices generated by a single data organization operation (for example, a prefetch operation), whether the data organized into the matrix configuration should correspond to consecutive rows of the matrix, consecutive columns of the matrix, or consecutive diagonals of the matrix configuration, the dimensions of the matrix, or some other spatial characteristic of the matrix configuration. In several embodiments, the command received by the controller from the host may specify the characteristics of the matrix configuration in order to prepare the data for processing by the processor of the host (for example, the processor 160 of FIG. 1). The commands may differ based on the user application currently being processed or based on prior usage patterns of the application known to the host. At block 466, the method 464 may include directing, by a controller coupled to the memory cell array, a circuit to perform certain steps, such as the steps detailed in the discussion of blocks 467 and 468 below. As an example, a controller (e.g., controller 140 shown in FIG. 1) may direct a column decoding circuit (e.g., column decoding circuit 152) to perform the steps detailed in the discussion of blocks 467 and 468 below. In several embodiments, the controller (e.g., the controller 140) can direct some combination of the column decoding circuit (e.g., the column decoding circuit 152), the read/latch circuit (e.g., the read/latch circuit 150), the row decoder (e.g., row decoder 146), the column decoder (e.g., column decoder 152), the address circuit (e.g., address circuit 142), and the input/output circuit (e.g., input/output circuit 144) to perform the steps detailed in the discussion of blocks 467 and 468. At block 467, the method may include directing the circuit to transfer data from the memory cell array to the sense amplifiers. For example, a controller (e.g., controller 140 shown in FIG. 1) may direct a circuit to transfer data from a memory cell array (e.g., memory cell array 130). In several embodiments, data can be stored in multiple sense amplifiers (for example, the multiple sense amplifiers 306-0, ..., 306-X-1 shown in FIG. 3), where X is the number of sense amplifiers.
A plurality of sense amplifiers can respectively read data through the digit lines (for example, digit lines 305-0, ..., 305-X-1 in FIG. 3). In several embodiments, the data value of each memory cell can be read by a dedicated sense amplifier that does not read data values from other memory cells. For example, the data values of multiple memory cells can be read by multiple sense amplifiers, respectively. The controller can use the multiple sense amplifiers to read and store data. At block 468, the method may include directing the circuit to organize the data to correspond to the matrix configuration based on the command from the host. For example, the controller (e.g., the controller 140 of FIG. 1) may direct the column decoding circuit (e.g., the column decoding circuit 152 of FIG. 1) or other circuits to select particular bits of the data from the sense amplifiers so that the selected bits of the data transferred from the memory cell array (e.g., memory cell array 130) to the sense amplifiers are organized to correspond to a portion of the matrix configuration (e.g., one or more rows or one or more columns of a matrix). It should be noted that in several embodiments, the controller may further direct the circuit to read data from the memory cell array in a prefetch operation, as described above in connection with block 467. In several embodiments, the controller may be coupled to the memory cell array, and both the controller and the memory cell array may be included in a memory device (e.g., memory device 120 shown in FIG. 1). Referring again to block 468, as described below in connection with FIGS. 5A, 5B, 6A, and 6B, the organization of data into corresponding matrix configurations can be performed in different ways in various embodiments. The method may further include providing the data to input/output components (e.g., the I/O circuit 144 shown in FIG. 1). For example, the controller may direct a circuit such as the column decoding circuit to provide the data to the I/O circuit. In several embodiments, the data transferred from the memory cell array to the sense amplifiers can be organized into a matrix configuration of a particular size, based on a command from the host, during the prefetch operation. In several embodiments, for example, 256 bits can be prefetched at a time. FIGS. 5A and 5B are schematic diagrams showing examples of organizing data into matrices according to several embodiments of the present disclosure. As shown in FIG. 5A, the column decoding circuit 552 can be directed by a controller coupled to the memory cell array to organize the data 569 read from the memory cell array into the matrix set 570. It should be noted that in an embodiment, a controller (for example, the controller 140 of FIG. 1) may be coupled to a memory cell array (for example, the memory cell array 130 of FIG. 1) and may be configured to direct a circuit (for example, the column decoding circuit 552) to perform certain operations. In addition, a memory device may include the controller and the memory cell array. The matrix set 570 may include a sub-matrix 570-1 having a first size (a sub-matrix is sometimes referred to herein as a matrix) and a sub-matrix 570-2 having a second size, as shown in FIG. 5A, where the matrix 570-1 has a size of 4×4 and the matrix 570-2 has a size of 2×2. In several embodiments, the size of matrix 570-1 and the size of matrix 570-2 are selected by the host associated with the memory device containing the memory cell array.
As shown in FIG. 5A, the size of matrix 570-1 is different from the size of matrix 570-2. The matrix set 570 may include matrices of alternating sizes between 4×4 and 2×2, until there are K matrices in total. Referring again to FIG. 5A, the matrix set 570 may include a third sub-matrix 570-K-1 having the same size as the first sub-matrix 570-1 (e.g., the first size) and a fourth sub-matrix 570-K having the same size as the second sub-matrix 570-2 (e.g., the second size). Alternatively, the matrix set 570 may include a third sub-matrix 570-3 having a third size, as shown in FIG. 5B. The third size may be different from the first size, and the third size may be different from the second size, as shown in the embodiment of FIG. 5B. The matrix set 570 of FIG. 5B may include matrices of three different sizes, repeating from the largest size to the smallest size until there are K matrices in total. In several embodiments, the data can be organized into diagonals corresponding to the matrix configuration. For example, in FIG. 5B, the prefetch operation involves organizing the data to correspond to the diagonals of the matrix configuration 570. Specifically, the diagonal traverses the four rows and four columns of the first matrix 570-1 from the upper left to the lower right of the matrix 570-1 of FIG. 5B. FIGS. 6A and 6B are schematic diagrams showing other examples of organizing data into matrices according to several embodiments of the present disclosure. As shown in FIG. 6A, the 256 bits read from 256 memory cells can be organized into 4×4 matrices (the 256 bits can come from 256 consecutive memory cells or from non-contiguous memory cells). FIG. 6A shows a column decoding circuit 652-1 that organizes data 669 into a matrix configuration 670 including a plurality of matrices 670-1, 670-2, ..., 670-K. The matrices 670-1, ..., 670-K have the same size, namely 4×4. A size of 4×4 refers to a matrix with four columns and four rows, with a total of 16 bits used for data storage. As shown in FIG. 6A, the size of the matrix (for example, 16 bits) is defined by the product (4*4=16) of the first number of rows (for example, 4) and the second number of columns (for example, 4). In such an example, the controller would direct the column decoding circuit to organize the data into sixteen consecutive 4×4 matrices to accommodate the 256 bits of the prefetch operation. In other embodiments, the number of matrices corresponding to a single prefetch operation varies based on the size of the matrix and the size of the prefetch operation. As further shown in FIG. 6A, the prefetch operation corresponds to organizing the data into rows corresponding to the matrix configuration 670. Specifically, the first row of the matrix 670-1 of FIG. 6A contains the value 1001, which is the first four values in the data 669 read by the prefetch. In several embodiments, the matrix configuration may be a single matrix with a size equal to the number of bits in the prefetch operation. For example, for a prefetch operation containing 256 bits, the matrix may be 16×16 (16*16=256).
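The row-wise organization of FIG. 6A can be sketched in C as follows. This is a minimal illustration under assumed conventions (one bit stored per byte for clarity, hypothetical function and macro names), not circuitry from the disclosure.

```c
#include <stdint.h>

/* Illustrative only: organize a 256-bit prefetch into sixteen 4x4
 * matrices filled row by row, as in the FIG. 6A example. */
#define PREFETCH_BITS 256
#define DIM 4
#define NMAT (PREFETCH_BITS / (DIM * DIM))   /* 16 matrices */

void organize_rows(const uint8_t in[PREFETCH_BITS],
                   uint8_t out[NMAT][DIM][DIM])
{
    int i = 0;
    for (int m = 0; m < NMAT; m++)
        for (int r = 0; r < DIM; r++)
            for (int c = 0; c < DIM; c++)
                out[m][r][c] = in[i++];  /* consecutive bits fill a row */
}
```

With this filling order, the first four prefetched bits (1001 in the FIG. 6A example) land in the first row of the first matrix.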
Referring now to FIG. 6B, the column decoding circuit 652 can organize the data 669 into a matrix configuration 670 containing matrices 670-1, 670-2, 670-3, ..., 670-K, each matrix having a size of 8×2, which means that each matrix contains eight rows and two columns. As shown in FIG. 6B, the controller directs the circuit to organize the data into a matrix set, wherein the matrix set includes a first matrix with a size equal to the size of a second matrix. As shown in the figure, the prefetch operation may correspond to organizing the data into columns corresponding to the matrix configuration. Specifically, the first four data values in the first column of the matrix 670-1 of FIG. 6B are 1001, which are the first four data values in the data 669 read by the prefetch. FIG. 7 is a flowchart of an embodiment of a method 773 for transforming data from a matrix configuration to a linear configuration according to the present disclosure. Unless explicitly stated, the elements of the methods described herein are not limited to a specific order or sequence. In addition, several method embodiments, or elements thereof, described herein may be performed at the same, or substantially the same, point in time. At block 775, the method may include directing, by a controller coupled to the memory cell array, a circuit to perform the steps detailed below in connection with the discussion of blocks 776, 777, and 778. For example, a controller (e.g., controller 140 of FIG. 1) may direct one or more components of a memory device (e.g., memory device 120 of FIG. 1) to perform the steps discussed in connection with blocks 776, 777, and 778. In several embodiments, the controller can be coupled to a memory cell array (e.g., memory cell array 130), and both the controller and the memory cell array can be included in the memory device. At block 776, the method may include receiving data corresponding to a matrix configuration from a host associated with the memory device. For example, the controller may direct input and output components (e.g., I/O circuit 144 of FIG. 1) to receive data in a matrix configuration from a host associated with the memory device. Data may be received from a processor (e.g., processor 160) of the host via a data bus (e.g., data bus 156). The processor of the host can provide data to the memory device in a matrix configuration because performing the transformation of the data from the matrix configuration into a form more suitable for writing to the memory device could place an excessive processing burden on the processor. The controller may further direct the I/O components to provide the data to one or more other components of the memory device, such as a write circuit (e.g., write circuit 148), a column decoding circuit (e.g., multiplexer 152), a row decoder (e.g., row decoder 146), and a column decoder (e.g., column decoder 152). In several embodiments, the controller may further provide address signals associated with the received data to an address circuit (e.g., address circuit 142 of FIG. 1). At block 777, the method may include transforming the data from a matrix configuration to a linear configuration. For example, the controller can direct the column decoding circuit to transform the data from a matrix configuration to a linear configuration, as discussed in more detail below in conjunction with FIGS. 8A, 8B, and 8C. The matrix configuration may contain a first number of rows and a second number of columns. At block 778, the method may include performing a write operation of the data on the memory cell array. For example, the controller may direct the write circuit of the memory device and/or another component to perform the data write operation on the memory cell array. In several embodiments, the data write operation corresponds to a first row of the matrix configuration.
The consecutive bits of a row of a matrix configuration (such as the matrix configuration 870 of FIG. 8A) can be written by the write circuit to contiguous memory cells of a row of memory cells (such as the memory cells 308-0, ..., 308-X-1 of row 319-1 of FIG. 3). Alternatively, the consecutive bits of a row of a matrix configuration (such as the matrix configuration 870 of FIG. 8A) may be written by the write circuit to non-contiguous memory cells of a row of memory cells (such as the memory cells 308-0, ..., 308-X-1 of row 319-1 of FIG. 3). FIGS. 8A to 8C are schematic diagrams showing examples of transforming data from a matrix configuration to a linear configuration according to several embodiments of the present disclosure. FIG. 8A shows an example of a controller directing the column decoding circuit to transform data from a matrix configuration to a linear configuration. As shown in FIG. 8A, the controller directs the column decoding circuit 852 to transform the matrix configuration 870 into a linear configuration 869. The matrix configuration 870 includes matrices 870-1, ..., 870-K. The column decoding circuit organizes the data to correspond to the linear configuration 869 by retrieving bits from the consecutive rows of the consecutive matrices 870-1, ..., 870-K, so that a write operation from the linear configuration 869 to the memory cell array includes the bits of at least one row of the matrix configuration 870. FIG. 8B shows another example of a controller directing the column decoding circuit to transform data from a matrix configuration to a linear configuration. As shown in FIG. 8B, the controller directs the column decoding circuit 852 to transform the matrix configuration 870 into a linear configuration 869. The matrix configuration 870 includes matrices 870-1, ..., 870-K. The column decoding circuit organizes the data to correspond to the linear configuration 869 by retrieving bits from the consecutive columns of the consecutive matrices 870-1, ..., 870-K, so that a write operation from the linear configuration 869 to the memory cell array includes the bits of at least one column of the matrix configuration 870. FIG. 8C shows another example of a controller directing the column decoding circuit to transform data from a matrix configuration to a linear configuration. As shown in FIG. 8C, the controller directs the column decoding circuit 852 to transform the matrix configuration 870 into a linear configuration 869. The matrix configuration 870 includes matrices 870-1, ..., 870-K. The column decoding circuit organizes the data to correspond to the linear configuration 869 by retrieving bits from the consecutive matrices 870-1, ..., 870-K, so that a write operation from the linear configuration 869 to the memory cell array 130 includes the bits of at least one diagonal of the matrix configuration 870 (for example, the diagonal starting at the upper left of matrix 870-1 in FIG. 8C and ending at the lower right of matrix 870-1 in FIG. 8C). The diagonal traverses the four rows and four columns of matrix 870-1 of FIG. 8C.
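The three retrieval orders of FIGS. 8A to 8C can be sketched in C as follows. The buffer layout, matrix count, and function names are assumptions for illustration, and the diagonal sketch only covers the leading diagonal explicitly described in the text.

```c
#include <stdint.h>

#define DIM  4   /* 4x4 matrices, as in FIG. 8C        */
#define NMAT 16  /* illustrative count of matrices 870 */

/* FIG. 8A: retrieve bits from consecutive rows of consecutive matrices. */
void flatten_rows(const uint8_t m[NMAT][DIM][DIM], uint8_t *out)
{
    for (int k = 0; k < NMAT; k++)
        for (int r = 0; r < DIM; r++)
            for (int c = 0; c < DIM; c++)
                *out++ = m[k][r][c];
}

/* FIG. 8B: retrieve bits from consecutive columns of consecutive matrices. */
void flatten_cols(const uint8_t m[NMAT][DIM][DIM], uint8_t *out)
{
    for (int k = 0; k < NMAT; k++)
        for (int c = 0; c < DIM; c++)
            for (int r = 0; r < DIM; r++)
                *out++ = m[k][r][c];
}

/* FIG. 8C: the upper-left-to-lower-right diagonal of each matrix leads
 * the retrieval; how the remaining elements follow is not detailed in
 * the text, so only the diagonal is shown here. */
void leading_diagonal(const uint8_t m[NMAT][DIM][DIM], uint8_t *out)
{
    for (int k = 0; k < NMAT; k++)
        for (int d = 0; d < DIM; d++)
            *out++ = m[k][d][d];
}
```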
Ordinal positioning, as used herein, is used to distinguish the relative positions of elements within a corresponding group of elements. For example, multiple sub-arrays may each contain a sequence of 1024 rows (e.g., row 0 to row 1023). In this example, row 0 of a particular sub-array (e.g., the first row of that sub-array) has a different ordinal position from any of rows 1 to 1023 (e.g., the last row) of the sub-arrays. However, ordinal terms such as "first" and "second" used herein are not intended to indicate a specific ordinal position of an element unless the context clearly dictates otherwise. For example, consider a row having an ordinal position of row 0 in a particular sub-array and a different row having an ordinal position of row 4 in a different sub-array. In this example, row 0 can be referred to as the "first" row and row 4 can be referred to as the "second" row, although row 4 does not have the second ordinal position. Alternatively, row 4 may be referred to as the "first" row and row 0 may be referred to as the "second" row. In the above detailed description of the present disclosure, reference is made to the accompanying drawings that form a part of the present disclosure, and in which is shown by way of illustration how one or more embodiments of the present disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the embodiments of the present disclosure, and it should be understood that other embodiments may be used and that process, electrical, and structural changes may be made without departing from the scope of the present disclosure. As used herein, designators such as "X", "Y", "N", "M", "K", etc., particularly with respect to reference numerals in the drawings, indicate that a number of the particular feature so designated can be included. It should also be understood that the terminology used herein is only for the purpose of describing particular embodiments and is not intended to be limiting. As used herein, the singular forms "a", "an", and "the" include both singular and plural referents, unless the context clearly dictates otherwise, as do "a number of", "at least one", and "one or more" (for example, a number of memory cell arrays can refer to one or more arrays of memory cells), while "a plurality of" is intended to refer to more than one such thing. In addition, the words "can" and "may" are used throughout this application in a permissive sense (that is, having the potential to, being able to), not in a mandatory sense (that is, must). The term "include" and its derivatives mean "including but not limited to". The terms "coupled" and "coupling" mean physically connected, directly or indirectly, as appropriate to the context for access to and/or movement (transmission) of instructions (for example, control signals, address signals, etc.) and data. The terms "data" and "data value" are used interchangeably herein and may have the same meaning, as appropriate to the context (e.g., one or more data units or "bits"). Although examples including various combinations and configurations of read/latch circuits, sense amplifiers, column decoding circuits, multiplexers, write circuits, read/latch stripes, I/O components, sub-array decoders, mode registers, and/or row decoders, and other circuits for organizing data into a matrix or transforming data from a matrix into the linear configuration shown and described herein, have been illustrated and described herein, embodiments of the present disclosure are not limited to the combinations expressly recited herein.
Other combinations and configurations of the read/latch circuit, sense amplifier, multiplexer, column decoding circuit, write circuit, read/latch stripe, I/O components, sub-array decoder, mode register, and/or row decoder, and other circuits for organizing data into a matrix or transforming data from a matrix into a linear configuration, are expressly included within the scope of the present disclosure. Although specific embodiments have been illustrated and described herein, those skilled in the art will appreciate that arrangements calculated to achieve the same results may be substituted for the specific embodiments shown. The present disclosure is intended to cover modifications or variations of one or more embodiments of the present disclosure. It should be understood that the above description has been made in an illustrative manner, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those skilled in the art upon reading the above description. The scope of one or more embodiments of the present disclosure includes other applications in which the above structures and processes are used. Therefore, the scope of one or more embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled. In the foregoing detailed description, some features are grouped together in a single embodiment for the purpose of streamlining the present disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Therefore, the appended claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate embodiment. |
A dielectric memory cell (10) comprises a substrate (12) which includes a source region (18), a drain region (20), and a channel region (22) positioned therebetween. A multilevel charge trapping dielectric (14) is positioned on the surface of the substrate (12) and a control gate (16) is positioned on the surface of the dielectric and is positioned over the channel region (22). The multilevel charge trapping dielectric (14) includes a tunneling dielectric (14a) adjacent to the substrate (12), a high dielectric constant capacitive coupling dielectric (14c) adjacent to the control gate (16), and a charge trapping dielectric layer (14b) positioned therebetween. |
ClaimsWhat is claimed is:1. A charge trapping dielectric (14) providing a non volatile storage of electrons in a dielectric memory cell (10), the charge trapping dielectric (14) comprising: a) a tunneling dielectric (14a) positioned adjacent to a channel region (22) of the dielectric memory cell (10); b) a high dielectric constant capacitive coupling dielectric (14c) adjacent to a control gate (16) of the dielectric memory cell (10); and c) a charge trapping dielectric layer (14b) positioned between the tunneling dielectric (14a) and the capacitive coupling dielectric (14c). 2. The charge trapping dielectric (14) of claim 1, wherein the tunneling dielectric (14a) is silicon dioxide and the charge trapping dielectric layer (14b) is a nitride compound. 3. The charge trapping dielectric (14) of claim 2, wherein the capacitive coupling dielectric (14c) is a dielectric selected from the group of an aluminum oxide compound, a hafnium oxide compound, and a zirconium oxide compound. 4. The charge trapping dielectric (14) of claim 3, wherein the capacitive coupling dielectric (14c) is a dielectric selected from the group of Al2O3, HfSixOy, HfO2, ZrO2, and ZrSixOy. 5. The charge trapping dielectric (14) of claim 4, wherein the tunneling dielectric (14a) has a thickness within a range of about 5 nm to about 15 nm. 6. The charge trapping dielectric (14) of claim 5, wherein the tunneling dielectric (14a) has a thickness within a range of about 6 nm to about 9 nm. 7. The charge trapping dielectric (14) of claim 6, wherein the tunneling dielectric (14a) has a thickness within a range of about 7 nm to about 8 nm. 8. The charge trapping dielectric (14) of claim 5, wherein the capacitive coupling dielectric (14c) has a thickness within a range of about 7 nm to 13 nm. 9. The charge trapping dielectric (14) of claim 8, wherein the capacitive coupling dielectric (14c) has a thickness within a range of about 8 nm to about 12 nm. 10. The charge trapping dielectric (14) of any of the preceding claims, wherein the dielectric memory cell (10) includes: a) a substrate (12) comprising a source region (18), a drain region (20), and a channel region (22) positioned there between; b) the charge trapping dielectric (14) positioned on the surface of the substrate (12); and c) a control gate (16) positioned on the surface of the charge trapping dielectric (14) and positioned over the channel region (22). |
NON VOLATILE MEMORY CELL STRUCTURE USING MULTILEVEL TRAPPING DIELECTRIC
Technical Field
The present invention relates generally to integrated circuit non volatile dielectric memory cell devices and, more specifically, to improvements in scalable non volatile dielectric memory cell device structure and to methods of erasing non volatile dielectric memory cell devices.
Background Art
Conventional floating gate flash memory types of EEPROMs (electrically erasable programmable read only memory) utilize a memory cell characterized by a vertical stack of a tunnel oxide (SiO2), a polysilicon floating gate over the tunnel oxide, an interlayer dielectric over the floating gate (typically an oxide, nitride, oxide stack), and a control gate over the interlayer dielectric, positioned over a crystalline silicon substrate. Within the substrate are a channel region positioned below the vertical stack and source and drain diffusions on opposing sides of the channel region. The floating gate flash memory cell is programmed by inducing hot electron injection from the channel region to the floating gate to create a non volatile negative charge on the floating gate. Hot electron injection can be achieved by applying a drain-to-source bias along with a high positive voltage on the control gate. The gate voltage inverts the channel while the drain-to-source bias accelerates electrons towards the drain. The accelerated electrons gain 5.0 to 6.0 eV of kinetic energy, which is more than sufficient to cross the 3.2 eV Si-SiO2 energy barrier between the channel region and the tunnel oxide. While the electrons are accelerated towards the drain, those electrons which collide with the crystalline lattice are re-directed towards the Si-SiO2 interface under the influence of the control gate electric field and gain sufficient energy to cross the barrier. Once programmed, the negative charge on the floating gate increases the threshold voltage of the FET characterized by the source region, drain region, channel region, and control gate. During a "read" of the memory cell, the magnitude of the current flowing between the source and drain at a predetermined control gate voltage indicates whether the flash cell is programmed. The erase function is typically performed using Fowler-Nordheim (FN) tunneling through the floating gate/tunnel oxide barrier. More specifically, a large negative voltage is applied to the control gate, a moderate positive voltage is applied to the source, and the drain is floated. Under such bias conditions, the electrons stored on the floating gate tunnel into the tunnel oxide and are swept into the source region. More recently, dielectric memory cell structures have been developed. A dielectric memory cell is characterized by a vertical stack of an insulating bottom oxide layer, a charge trapping dielectric layer, an insulating top oxide layer, and a polysilicon control gate positioned on top of a crystalline silicon substrate. Within the substrate are a channel region positioned below the vertical stack and source and drain diffusions on opposing sides of the channel region. This particular structure of a silicon channel region, bottom oxide, nitride, top oxide, and silicon control gate is often referred to as a SONOS device.
Similar to the floating gate device, a SONOS device is programmed utilizing hot electron injection. However, it should be appreciated that because the injected electrons are trapped at the nitride/bottom oxide junction, the charge remains close to the source region or the drain region from which the electrons were injected. As such, the SONOS device can be used to store two bits of data per cell. Scalability of such a memory cell is affected by the minimum feature size of the fabrication equipment and by a minimum channel length requirement, which is a function of the total thickness of the ONO stack. A SONOS device can be erased by injecting hot holes created by Band to Band (BTB) tunneling. More specifically, the source is floated and an appropriate positive voltage is applied to the drain region to create the BTB tunneling. A negative voltage is applied to the control gate to accelerate holes towards the source side charge trapping layer. A problem associated with hot hole injection is that it damages the bottom oxide and its interface with the silicon substrate. More specifically, a large portion of the injected holes are trapped in the bottom tunnel oxide, and the trapped holes generate interface states between the bottom tunnel oxide layer and the silicon channel. Another problem associated with dielectric memory cell structures is that the minimum required thicknesses of the oxide, nitride, oxide stack limit the scaling of the channel length to smaller dimensions. Therefore, there is a need in the art for a dielectric memory cell structure which does not suffer the disadvantages discussed above. More specifically, there is a need in the art for a dielectric memory cell structure which can provide for further scaling of the channel to smaller dimensions and which provides for an erase method that causes less cell damage.
Disclosure of Invention
A first aspect of the present invention is to provide a novel dielectric memory cell structure. The dielectric memory cell structure comprises a substrate with a source region, a drain region, and a channel region positioned between the source region and the drain region. A multilevel charge trapping dielectric is positioned on the surface of the substrate, and a polysilicon control gate is positioned on the surface of the multilevel charge trapping dielectric and positioned over the channel region. The multilevel charge trapping dielectric includes: a) a bottom layer adjacent to the substrate which comprises a first dielectric with a first dielectric constant; b) a top layer adjacent to the control gate comprising a second dielectric with a second dielectric constant which is higher than the first dielectric constant; and c) a charge trapping layer, positioned between the bottom layer and the top layer, of a third dielectric with charge trapping properties. The bottom layer first dielectric may be silicon dioxide and the charge trapping third dielectric may be a nitride layer. The top layer second dielectric may be a dielectric selected from the group of an aluminum oxide compound, a hafnium oxide compound, and a zirconium oxide compound. More specifically, the top layer second dielectric may be a dielectric selected from the group of Al2O3, HfSixOy, HfO2, ZrO2, and ZrSixOy. A second aspect of the present invention is to provide a tunneling erasable charge trapping dielectric for non-volatile storage of electrons in a dielectric memory cell.
The charge trapping dielectric comprises: a) a tunneling dielectric positioned adjacent to a channel region of the dielectric memory cell; b) a high dielectric constant capacitive coupling dielectric adjacent to a control gate of the dielectric memory cell; and c) a charge trapping dielectric positioned between the tunneling dielectric and the capacitive coupling dielectric. The tunneling dielectric may be silicon dioxide and the charge trapping dielectric may be a nitride compound. The capacitive coupling dielectric may be a dielectric selected from the group of an aluminum oxide compound, a hafnium oxide compound, and a zirconium oxide compound. More specifically, the capacitive coupling dielectric may be a dielectric selected from the group of Al2O3, HfSixOy, HfO2, ZrO2, and ZrSixOy.
Brief Description of Drawings
Figure 1 is a cross section diagram of a dielectric memory cell in accordance with one embodiment of this invention;
Figure 2 is a flow chart diagram representing exemplary processing steps for fabricating the dielectric memory cell of Figure 1;
Figure 3a is a cross section diagram of a processing step in the fabrication of the dielectric memory cell of Figure 1;
Figure 3b is a cross section diagram of a processing step in the fabrication of the dielectric memory cell of Figure 1;
Figure 3c is a cross section diagram of a processing step in the fabrication of the dielectric memory cell of Figure 1;
Figure 3d is a cross section diagram of a processing step in the fabrication of the dielectric memory cell of Figure 1;
Figure 3e is a cross section diagram of a processing step in the fabrication of the dielectric memory cell of Figure 1; and
Figure 3f is a cross section diagram of a processing step in the fabrication of the dielectric memory cell of Figure 1.
Mode (s) for Carrying Out the Invention
The present invention will now be described in detail with reference to the drawings. In the drawings, like reference numerals are used to refer to like elements throughout. Referring to Figure 1, a cross section view of a dielectric memory cell 10 formed on a semiconductor substrate 12 is shown. The diagram is not drawn to scale, and the dimensions of some features are intentionally drawn larger than scale for purposes of clarity. The memory cell 10 is shown as a substantially planar structure formed on the bulk substrate 12. However, it should be appreciated that the teachings of this invention may be applied to planar, fin formed, and other dielectric memory cell structures which may be formed on bulk substrates, SOI substrates, or other substrate structures. The memory cell 10 includes a multi layer charge trapping dielectric 14 positioned between the bulk substrate 12 and a polysilicon control gate 16. The bulk substrate 12 preferably comprises lightly doped p-type (or n-type) silicon and includes an n-type (or p-type) implanted source region 18 and an n-type (or p-type) drain region 20 on opposing sides of a central channel region 22 which is positioned beneath the polysilicon control gate 16. The charge trapping dielectric 14 comprises three layers.
The bottom layer, or tunneling layer, 14 (a) comprises a first dielectric material; the top layer, or capacitive coupling layer, 14 (c) comprises a second dielectric material with a dielectric constant higher than that of the first dielectric material; and the middle charge trapping layer 14 (b) comprises a third dielectric material which is capable of electron trapping. In the exemplary embodiment, the first dielectric material comprising the tunneling layer 14 (a) is silicon dioxide and its thickness is within a range of about 50 angstroms (Å) to about 150 angstroms (Å) (about 5 to about 15 nm). An embodiment with a narrower bracket includes a tunneling layer 14 (a) thickness within a range of about 60 angstroms (Å) to about 90 angstroms (Å) (about 6 to about 9 nm), and narrower yet, a tunneling layer 14 (a) thickness of about 70 angstroms (Å) to about 80 angstroms (Å) (about 7 to about 8 nm). The third dielectric material comprising the charge trapping layer 14 (b) may be silicon nitride and its thickness is within a range of about 20 angstroms (Å) to about 80 angstroms (Å) (about 2 to about 8 nm). An embodiment with a narrower bracket includes a charge trapping layer 14 (b) thickness within a range of about 30 angstroms (Å) to about 70 angstroms (Å) (about 3 to about 7 nm), and narrower yet, a charge trapping layer 14 (b) thickness of about 50 angstroms (Å) to about 60 angstroms (Å) (about 5 to about 6 nm). The second dielectric material comprising the capacitive coupling layer 14 (c) includes a material with a high dielectric constant such as Al2O3 and has a thickness within a range of about 70 angstroms (Å) to 130 angstroms (Å) (7 to 13 nm). An embodiment with a narrower bracket includes a capacitive coupling top layer 14 (c) thickness within a range of about 80 angstroms (Å) to about 120 angstroms (Å) (about 8 to about 12 nm), and narrower yet, a capacitive coupling top layer 14 (c) thickness of about 90 angstroms (Å) to about 100 angstroms (Å) (about 9 to about 10 nm). The second dielectric material may alternatively comprise a material with a high dielectric constant selected from the group of HfSixOy, HfO2, ZrO2, and other materials with similarly high dielectric constants.
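As a rough illustration of why the high-k top layer strengthens the capacitive coupling between the control gate and the charge trapping layer, the equivalent oxide thickness (EOT) of the stack can be estimated. The following back-of-the-envelope C sketch is not from the disclosure: it uses the layer-thickness midpoints above and typical literature values for the dielectric constants (SiO2 ≈ 3.9, Si3N4 ≈ 7.5, Al2O3 ≈ 9).

```c
#include <stdio.h>

/* EOT = sum over layers of t_i * k_SiO2 / k_i; constants and the
 * resulting figure are illustrative, not from the disclosure. */
int main(void)
{
    const double K_SIO2 = 3.9, K_SIN = 7.5, K_ALO = 9.0;
    const double t_tunnel = 7.5;  /* nm, midpoint of the 7-8 nm range  */
    const double t_trap   = 5.5;  /* nm, midpoint of the 5-6 nm range  */
    const double t_couple = 9.5;  /* nm, midpoint of the 9-10 nm range */

    double eot = t_tunnel                    /* already SiO2 */
               + t_trap   * K_SIO2 / K_SIN
               + t_couple * K_SIO2 / K_ALO;

    /* A higher-k top layer shrinks its contribution to the EOT, so a
     * larger share of the gate voltage drops across the tunnel oxide
     * during erase. */
    printf("stack EOT ~= %.1f nm\n", eot);   /* about 14.5 nm */
    return 0;
}
```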
The memory cell 10 is configured to store two bits of data within the cell. The first bit of data is represented by the storage of trapped electrons in a region 24 of the charge trapping layer 14 (b) adjacent to the source region 18. The second bit of data is represented by the storage of trapped electrons in a region 26 of the charge trapping layer 14 (b) adjacent to the drain region 20. The memory cell 10 is programmed utilizing a hot electron injection technique. More specifically, programming of the first bit of data comprises injecting electrons into region 24, and programming of the second bit of data comprises injecting electrons into region 26. Hot electron injection into region 24 comprises applying a source 18 to drain 20 bias while applying a high voltage to the control gate 16. In the exemplary embodiment, this may be accomplished by grounding the drain region 20 and applying approximately 6 V to the source region 18 and 10 V to the control gate 16. The control gate 16 voltage inverts the channel region 22 while the source region 18 to drain region 20 bias accelerates electrons into the channel region 22 towards the drain region 20. The 5.5 eV to 6 eV kinetic energy gain of the electrons is more than sufficient to surmount the 3.1 eV to 3.5 eV energy barrier at the channel region 22/bottom dielectric layer 14 (a) interface and, while the electrons are accelerated towards the drain region 20, the high voltage on the control gate 16 redirects the electrons towards the dielectric layer 14. Those electrons which cross the interface into the dielectric layer 14 are trapped in the charge trapping layer 14 (b) in the region 24. Similarly, the second bit of data, comprising the storage of electrons in region 26, may be programmed by grounding the source region 18 and applying approximately 6 V to the drain region 20 and 10 V to the control gate 16. Again, the drain region 20 to source region 18 bias accelerates electrons into the channel region 22 towards the source region 18, and the high voltage on the control gate 16 redirects the electrons towards the dielectric layer 14. Those electrons which cross the interface into the dielectric layer 14 are trapped in the charge trapping layer 14 (b) in the region 26. The presence of trapped electrons within regions 24 and 26 each effects depletion within the channel region 22 and as such affects the threshold voltage of a field effect transistor (FET) characterized by the control gate 16, the source region 18, and the drain region 20. Therefore, each bit may be "read", or more specifically, the presence of electrons stored within regions 24 and 26 may be detected, by operation of the FET. More specifically, the presence of electrons stored within region 24 may be detected by applying a positive voltage to the control gate 16 and a lesser positive voltage to the drain region 20 while the source region 18 is grounded. The current flow is then measured at the drain region 20. If there are electrons trapped within region 24, no current will be measured at the drain region 20. Otherwise, if region 24 is charge neutral (e.g., no trapped electrons), then there will be a measurable current flow into the drain region 20. Similarly, the presence of electrons stored within region 26 may be detected by applying a positive voltage to the control gate 16 and a lesser positive voltage to the source region 18 while the drain region 20 is grounded. The current flow is then measured at the source region 18. If there are electrons trapped within region 26, no current will be measured at the source region 18. Otherwise, if region 26 is charge neutral, then there will be a measurable current flow into the source region 18. The erasure of each bit may be accomplished by tunneling trapped electrons through the bottom tunneling dielectric layer 14 (a) towards the source region 18, drain region 20, and channel region 22. More specifically, a high negative voltage is applied to the control gate 16 while the source, drain, and substrate are grounded. Because the top dielectric layer 14 (c) comprises a material with a high dielectric constant, the strong capacitive coupling between the control gate 16 and the charge trapping layer 14 (b) induces Fowler-Nordheim tunneling of trapped electrons through the silicon dioxide bottom dielectric layer 14 (a).
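For reference, the bias conditions described above can be collected into a small table. The following C sketch simply tabulates them; only the program voltages are given numerically in the text, so the read and erase entries are left symbolic, and all names are illustrative.

```c
#include <stdio.h>

/* Tabulation of the operating conditions described above; values
 * marked "positive"/"high negative" are not quantified in the text. */
struct bias { const char *op, *gate, *source, *drain; };

static const struct bias ops[] = {
    { "program bit 1 (region 24)", "+10 V",         "+6 V",            "0 V (gnd)" },
    { "program bit 2 (region 26)", "+10 V",         "0 V (gnd)",       "+6 V" },
    { "read bit 1",                "positive",      "0 V (gnd)",       "lesser positive" },
    { "read bit 2",                "positive",      "lesser positive", "0 V (gnd)" },
    { "erase (FN tunneling)",      "high negative", "0 V (gnd)",       "0 V (gnd)" },
};

int main(void)
{
    for (unsigned i = 0; i < sizeof ops / sizeof ops[0]; i++)
        printf("%-28s gate=%-14s source=%-16s drain=%s\n",
               ops[i].op, ops[i].gate, ops[i].source, ops[i].drain);
    return 0;
}
```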
Turning to the flowchart of Figure 2 and the cross section diagrams of Figures 3 (a) to 3 (f), exemplary processing steps for fabricating the dielectric memory cell 10 of Figure 1 in a planar structure are represented. Step 30 represents growing a layer of oxide 14 (a) approximately 70 angstroms (Å) to 80 angstroms (Å) (7 to 8 nm) in thickness on the surface of the p-type bulk wafer 12, as shown in Figure 3 (a). Step 32 represents depositing a layer of nitride 14 (b) approximately 50 angstroms (Å) to 60 angstroms (Å) (5 to 6 nm) in thickness on the surface of the oxide layer 14 (a), as shown in Figure 3 (b). Step 34 represents patterning and implanting the source region 18, drain region 20, and bit lines (not shown), as set forth in Figure 3 (c). More specifically, a layer of photoresist is applied to the top of the nitride 14 (b) and patterned to expose the source region 18, drain region 20, and bit lines. The nitride is then etched to form a hard mask exposing the source region 18, drain region 20, and bit lines. Such regions are then formed in the p-type bulk wafer 12 by implanting an n-type dopant, such as phosphorus or arsenic, in the exposed regions. Step 36 represents depositing the high dielectric constant material forming the capacitive coupling layer 14 (c) on the surface of the exposed nitride layer 14 (b), as shown in Figure 3 (d). Step 38 then represents forming the gate 16 on the surface of the high dielectric constant capacitive coupling layer 14 (c). More specifically, a polysilicon layer 16 is applied to the surface of the capacitive coupling layer and patterned and etched using standard techniques. Step 40 represents forming nitride spacers 28 on the sides of the dielectric layers 14 (a), 14 (b), and 14 (c) and the gate 16, as shown in Figure 3 (f). More specifically, a layer of nitride is applied over the surface of the wafer and anisotropically etched to form the spacers. Thereafter, step 42 represents forming contacts to the source region 18, drain region 20, and control gate 16.
Industrial Applicability
It should be appreciated that the erasure of the dielectric memory cell 10 utilizing Fowler-Nordheim tunneling of electrons through the bottom dielectric layer 14 (a) provides for improved reliability of the device by avoiding the break down effects of erasure utilizing hot hole injection from the channel region 22. It should also be appreciated that FN tunneling of electrons through the bottom dielectric layer 14 (a) neutralizes any charge that may be stored between the regions 24 and 26 which, if not properly neutralized during a program/erase cycle, can cause erratic reading. It should further be appreciated that the use of a dielectric with a high dielectric constant in the top dielectric layer 14 (c) improves capacitive coupling between the control gate and the channel region 22, which permits scaling of the channel length to shorter dimensions without experiencing erratic reading results due to channel depletion adjacent to the source/channel and drain/channel junctions. Although the dielectric memory cell of this invention has been shown and described with respect to certain preferred embodiments, it is obvious that equivalents and modifications will occur to others skilled in the art upon the reading and understanding of the specification. The present invention includes all such equivalents and modifications, and is limited only by the scope of the following claims. |
Some embodiments include apparatuses and methods of operating the apparatuses. One of the apparatuses includes volatile memory cells located along a pillar that has a length extending in a direction perpendicular to a substrate of a memory device. Each of the volatile memory cells includes a capacitor and at least one transistor. The capacitor includes a capacitor plate. The capacitor plate is either formed from a portion of a semiconductor material of the pillar or formed from a conductive material separated from the pillar by a dielectric. |
What is claimed is:1. An apparatus comprising:a pillar including a length extending in a direction perpendicular to a substrate, the pillar including a first segment and a second segment, each of the first and second segments including a portion of semiconductor material of a first conductivity type contacting a portion of semiconductor material of a second conductivity type;a first volatile memory cell including:a first conductive material located along the first segment and separated from the first segment by a first dielectric; anda first additional conductive material separated from the first conductive material by a first additional dielectric; anda second volatile memory cell including:a second conductive material located along the second segment andseparated from the second segment by a second dielectric; and a second additional conductive material separated from the secondconductive material by a second additional dielectric.2. The apparatus of claim 1, wherein:the first conductive material forms part of a storage node of the first volatile memory cell; andthe second conductive material forms part of a storage node of the secondvolatile memory cell.3. The apparatus of claim 2, wherein:the portion of semiconductor material of the second conductivity type of the first segment of the pillar forms part of a channel of a transistor included in the first volatile memory cell; andthe portion of semiconductor material of the second conductivity type of the second segment of the pillar forms part of a channel of a transistor included in the second volatile memory cell.4. The apparatus of claim 3, wherein: the first additional conductive material includes a portion surrounding a sidewall of the first additional dielectric; andthe second additional conductive material includes a portion surrounding asidewall of the second additional dielectric.5. The apparatus of claim 1, wherein the first dielectric and the second dielectric are part of a dielectric region extending continuously along a sidewall of the pillar.6. The apparatus of claim 1, wherein:the first conductivity type is n-type; andthe second conductivity type is p-type.7. An apparatus comprising:a first pillar including a length extending in a direction perpendicular to asubstrate,a second pillar including a length extending in the direction perpendicular to the substrate;a first volatile memory cell including:a first conductive material including a first portion and a second portion, the first portion of the first conductive material located along a first segment of the first pillar and separated from the first segment of the first pillar by a first dielectric, and the second portion of the first conductive material contacting a conductive material of a first segment of the second pillar; and a second volatile memory cell including:a second conductive material including a first portion and a secondportion, the first portion of the second conductive material located along a second segment of the first pillar and separated from the second segment of the first pillar by a second dielectric, and the second portion of the second conductive material contacting a conductive material of a second segment of the second pillar.8. The apparatus of claim 7, wherein:the first conductive material forms part of a storage node of the first volatile memory cell; andthe second conductive material forms part of a storage node of the secondvolatile memory cell.9. 
The apparatus of claim 7, wherein:the first dielectric includes a portion surrounding a sidewall of the first segment of the first pillar;the first portion of the first conductive material surrounds a sidewall of the first dielectric;the second dielectric includes a portion surrounding a sidewall of the second segment of the first pillar; andthe first portion of the second conductive material surrounds a sidewall of the second dielectric.10. The apparatus of claim 9, wherein:the second portion of the first conductive material surrounds a sidewall of the conductive material of the first segment of the second pillar; and the second portion of the second conductive material surrounds a sidewall of the conductive material of the second segment of the second pillar.11. The apparatus of claim 7, further comprising:a third conductive material including a first portion and a second portion, the first portion of the third conductive material located along a third segment of the first pillar and separated from the third segment of the first pillar by a third dielectric, and the second portion of the third conductive material located along a third segment of the second pillar and separated from the third segment of the second pillar by a third additional dielectric;a fourth conductive material including a first portion and a second portion, the first portion of the fourth conductive material located along a fourth segment of the first pillar and separated from the fourth segment of the first pillar by a fourth dielectric, and the second portion of the fourth conductive material located along a fourth segment of the second pillar and separated from the fourth segment of the second pillar by a fourth additional dielectric.12. The apparatus of claim 11, wherein:the third conductive material forms part of a word line associated with the first volatile memory cell; andthe fourth conductive material forms part of a word line associated with the second volatile memory cell.13. The apparatus of claim 11, wherein:the third and fourth segments of the first pillar are between the first and second segments of the first pillar; andthe third and fourth segments of the second pillar are between the first and second segments of the second pillar.14. The apparatus of claim 11, wherein:the third segment of the first pillar is between the first and second segments of the first pillar, and the second segment of the first pillar is between the third and fourth segments of the first pillar; andthe third segment of the second pillar is between the first and second segments of the second pillar, and the second segment of the second pillar is between the third and fourth segments of the second pillar.15. The apparatus of claim 7, further comprising:an additional conductive material contacting a conductive material of a third segment of the second pillar, wherein the third segment of the second pillar is between the first and second volatile memory cells.16. The apparatus of claim 7, further comprising:an additional conductive material contacting a conductive material of the second pillar, wherein the first volatile memory cell is between the additional conductive material and the second volatile memory cell.17.
An apparatus comprising:a pillar including a length extending in a direction perpendicular to a substrate; a first volatile memory cell located along a first segment of the pillar, the first volatile memory cell including a first storage node included in a portion of the first segment of the pillar;a second volatile memory cell located along a second segment of the pillar, the second volatile memory cell including a second storage node included in a portion of the second segment of the pillar, each of the portion of the first segment and the portion of the second segment formed from a semiconductor material of a first conductivity type; andthe pillar including a third segment located between the first and second segments, the third segment including a portion formed from a semiconductor material of a second conductivity type.18. The apparatus of claim 17, wherein the portion of the third segment contacts each of the portion of the first segment and the portion of the second segment.19. The apparatus of claim 17, further comprising a conductive material, separated from the portion of the third segment by a dielectric.20. The apparatus of claim 18, further comprising a conductive material contacting the portion of the third segment.21. The apparatus of claim 20, wherein:the pillar includes an additional portion contacting a first side of the portion of the first segment, the additional portion having a first thickness in the direction of the length of the pillar; andthe portion of the third segment contacts a second side of the portion of the first segment, the portion of the third segment has a second thickness in the direction of the length of the pillar, and the second thickness is greater than the first thickness.22. The apparatus of claim 17, wherein:the first segment of the pillar includes an additional portion contacting the portion of the first segment; andthe second segment of the pillar includes an additional portion contacting the portion of the second segment, and each of the additional portion of the first segment and the additional portion of the second segment is formed from a semiconductor material of the second conductivity type.23. The apparatus of claim 17, wherein:the first conductivity type is n-type; andthe second conductivity type is p-type.24. An apparatus comprising:a substrate included in a volatile memory device; anda pillar included in the volatile memory device, the pillar including a length extending in a direction perpendicular to the substrate, the pillar including a first portion, a second portion contacting the first portion, a third portion contacting the second portion, a fourth portion contacting the third portion, and a fifth portion contacting the fourth portion, wherein each of the first, third, and fifth portions is formed from a semiconductor material of a first conductivity type, and each of the second and fourth portions is formed from a semiconductor material of a second conductivity type.25. The apparatus of claim 24, wherein:the first conductivity type is n-type; andthe second conductivity type is p-type.26. The apparatus of claim 24, further comprising a conductive material, separated from the fifth portion by a dielectric.27. The apparatus of claim 25, further comprising a conductive material contacting the fifth portion.28. The apparatus of claim 24, further comprising a conductive material contacting the first portion.29. The apparatus of claim 28, further comprising a conductive material, separated from the fifth portion by a dielectric.30.
30. The apparatus of claim 28, further comprising a conductive material contacting the fifth portion.

31. The apparatus of claim 24, wherein:
the second portion has a first thickness in the direction of the length of the pillar; and
the fourth portion has a second thickness in the direction of the length of the pillar, and the second thickness is greater than the first thickness.

32. The apparatus of claim 24, wherein the pillar further includes a sixth portion contacting the fifth portion, and a seventh portion contacting the sixth portion, the sixth portion including a semiconductor material of the second conductivity type, and the seventh portion including a semiconductor material of the first conductivity type.

33. The apparatus of claim 32, further comprising:
a first conductive material contacting the first portion; and
a second conductive material contacting the seventh portion.

34. An apparatus comprising:
a first data line and a second data line;
a first volatile memory cell coupled to the first data line, the first data line configured to provide information to be stored in the first volatile memory cell;
a second volatile memory cell coupled to the second data line, the second data line configured to provide information to be stored in the second volatile memory cell; and
a switch coupled between the first and second volatile memory cells, the first and second volatile memory cells coupled in series with the switch between the first and second data lines, the switch configured to turn off during storing of the information in the first volatile memory cell, and to turn off during storing of the information in the second volatile memory cell.

35. The apparatus of claim 34, wherein:
the first volatile memory cell includes a first transistor coupled in series between the first data line and the switch, and a first capacitor plate located between the first transistor and the switch; and
the second volatile memory cell includes a second transistor coupled in series between the second data line and the switch, and a second capacitor plate located between the second transistor and the switch.

36. The apparatus of claim 34, wherein the switch is configured to turn on during sensing of information stored in the first volatile memory cell and during sensing of information stored in the second volatile memory cell.

37. The apparatus of claim 34, wherein:
the first data line is configured to receive a first voltage during sensing of information from each of the first and second volatile memory cells; and
a third data line is configured to receive a second voltage during sensing of information from each of the first and second volatile memory cells.

38. An apparatus comprising:
a first data line, a second data line, and a third data line;
a first volatile memory cell including first and second transistors coupled in series between the first and second data lines, and a first capacitor plate located between the first and second transistors; and
a second volatile memory cell including third and fourth transistors coupled in series between the second and third data lines, and a second capacitor plate located between the third and fourth transistors.

39. The apparatus of claim 38, wherein:
the first data line is configured to provide information to be stored in the first volatile memory cell; and
the third data line is configured to provide information to be stored in the second volatile memory cell.

40. The apparatus of claim 38, wherein the first transistor has a longer channel length than the second transistor.
41. A method comprising:
applying a first voltage during a first stage of a read operation of a memory device to a first access line coupled to a first memory cell and to a second access line coupled to a second memory cell, the first and second memory cells coupled between a first data line and a second data line;
applying a second voltage to the first and second data lines during the first stage of the read operation;
sensing the first memory cell during a first time interval of a second stage of the read operation;
sensing the second memory cell during a second time interval of the second stage of the read operation;
applying a third voltage to the first and second access lines during a third stage of the read operation;
applying a fourth voltage to the first and second data lines during the third stage of the read operation;
storing first information in the first memory cell during a fourth stage of the read operation; and
storing second information in the second memory cell during the fourth stage of the read operation.

42. The method of claim 41, wherein:
sensing the first memory cell includes applying different voltages to the first and second access lines; and
sensing the second memory cell includes applying different voltages to the first and second access lines.

43. The method of claim 41, wherein:
sensing the first memory cell includes applying a fifth voltage to the first access line, and applying a sixth voltage to the second access line, the fifth voltage having a value less than a value of the sixth voltage; and
sensing the second memory cell includes applying a seventh voltage to the first access line, and applying an eighth voltage to the second access line, the seventh voltage having a value greater than a value of the eighth voltage.

44. The method of claim 41, wherein storing first information in the first memory cell and storing second information in the second memory cell includes:
applying a same voltage to the first and second access lines during storing of first information in the first memory cell and during storing of second information in the second memory cell.

45. The method of claim 41, wherein the first voltage has a value less than a value of the second voltage.

46. The method of claim 41, wherein the fourth voltage has a value at most equal to zero.

47. The method of claim 41, wherein the third voltage has a value greater than a value of the fourth voltage.

48. A method comprising:
applying a first voltage to a gate of a first transistor of a volatile memory cell during a read operation of a memory device;
applying a second voltage to a gate of a second transistor of the volatile memory cell during the read operation, the first and second transistors coupled between a first data line and a second data line;
sensing the volatile memory cell during the read operation; and
storing information in the volatile memory cell during the read operation after sensing the volatile memory cell.

49. The method of claim 48, wherein sensing the volatile memory cell includes:
applying a third voltage to the gate of the first transistor; and
applying a fourth voltage to the gate of the second transistor, the third and fourth voltages having different values.

50. The method of claim 48, wherein storing information in the volatile memory cell includes:
applying a third voltage to the gate of the first transistor; and
applying a fourth voltage to the first data line, the fourth voltage having a value greater than a value of the third voltage.
51. The method of claim 48, further comprising:
applying a third voltage to the first data line after sensing the volatile memory cell and before storing information in the volatile memory cell, the third voltage having a value at most equal to zero; and
applying a fourth voltage to the second data line after sensing the volatile memory cell and before storing information in the volatile memory cell, the fourth voltage having a value at most equal to zero.
VOLATILE MEMORY DEVICE INCLUDING STACKED MEMORY CELLS

Related Application

[0001] This application claims the benefit of priority to U.S. Application Serial Number 62/551,542, filed 29 August 2017, which is incorporated herein by reference in its entirety.

Background

[0002] Memory devices are widely used in computers and many other electronic items to store information. Memory devices are generally categorized into two types: volatile memory devices and non-volatile memory devices. An example of a volatile memory device is a dynamic random access memory (DRAM) device. An example of a non-volatile memory device is a flash memory device (e.g., a flash memory stick). A memory device usually has numerous memory cells. In a volatile memory device, information stored in the memory cells is lost if supply power is disconnected from the memory device. In a non-volatile memory device, information stored in the memory cells is retained even if supply power is disconnected from the memory device.

[0003] The description herein involves volatile memory devices. Most conventional volatile memory devices have a planar structure (i.e., a two-dimensional structure) in which the memory cells are formed in a single level of the device. As demand for device storage density increases, many conventional techniques provide ways to shrink the size of the memory cell in order to increase device storage density for a given device area. However, physical limitations and fabrication constraints may pose a challenge to such conventional techniques if the memory cell size is to be shrunk to a certain dimension. Unlike some conventional memory devices, the memory devices described herein include features that can overcome challenges faced by conventional techniques.

Brief Description of the Drawings

[0004] FIG. 1 shows a block diagram of an apparatus in the form of a memory device including volatile memory cells, according to some embodiments described herein.

[0005] FIG. 2A shows a schematic diagram of a portion of a memory device including a memory array, according to some embodiments described herein.

[0006] FIG. 2B shows a schematic diagram of a portion of the memory device of FIG. 2A.

[0007] FIG. 2C is a chart showing example values of voltages provided to signals of the memory device of FIG. 2B during example write and read operations, according to some embodiments described herein.

[0008] FIG. 2D shows a side view (e.g., cross-sectional view) of a structure of a portion of the memory device schematically shown in FIG. 2B, in which the memory cell structure of each memory cell can include parts from a double-pillar, according to some embodiments described herein.

[0009] FIG. 2E through FIG. 2I show different portions (e.g., partial top views) of the memory device of FIG. 2D including some elements of the memory device viewed from different sectional lines of FIG. 2D, according to some embodiments described herein.

[0010] FIG. 3A shows a schematic diagram of a portion of a memory device that can be a variation of the memory device of FIG. 2A, according to some embodiments described herein.

[0011] FIG. 3B shows a schematic diagram of a portion of the memory device of FIG. 3A.

[0012] FIG. 3C is a chart showing example values of voltages provided to signals of the memory device of FIG. 3B, during example write and read operations, according to some embodiments described herein.

[0013] FIG. 3D is a chart showing example values of voltages provided to signals of the memory device of FIG. 3B, during additional example write and read operations of the memory device, according to some embodiments described herein.
[0014] FIG. 3E shows a side view (e.g., cross-sectional view) of a structure of a portion of the memory device schematically shown in FIG. 3B, according to some embodiments described herein.

[0015] FIG. 3F shows a portion (e.g., partial top view) of the memory device of FIG. 3E, according to some embodiments described herein.

[0016] FIG. 4A shows a schematic diagram of a portion of a memory device including memory cells, in which the memory cell structure of each memory cell can include parts from a single pillar, according to some embodiments described herein.

[0017] FIG. 4B shows a side view (e.g., cross-sectional view) of a structure of a portion of the memory device schematically shown in FIG. 4A, according to some embodiments described herein.

[0018] FIG. 4C shows a portion of the memory device of FIG. 4B.

[0019] FIG. 4D through FIG. 4F show different portions (e.g., partial top views) of the memory device of FIG. 4C including some of the elements of the memory device viewed from different sectional lines of FIG. 4C, according to some embodiments described herein.

[0020] FIG. 4G shows a schematic diagram of a portion of the memory device of FIG. 4A.

[0021] FIG. 4H is a chart showing example values of voltages provided to the signals of the portion of the memory device of FIG. 4G during three different example write operations, according to some embodiments described herein.

[0022] FIG. 4I is a flow chart showing different stages of a read operation of the memory device of FIG. 4A, according to some embodiments described herein.

[0023] FIG. 4J shows a schematic diagram of a portion of the memory device of FIG. 2A.

[0024] FIG. 4K is a chart showing values of signals in FIG. 4J during a pre-sense stage based on an impact ionization (II) current mechanism.

[0025] FIG. 4K' is a chart showing values of signals in FIG. 4J during a pre-sense stage using an alternative pre-sense scheme based on a gate-induced drain-leakage (GIDL) current mechanism.

[0026] FIG. 4L shows a schematic diagram of a portion of the memory device of FIG. 4A.

[0027] FIG. 4M is a chart showing values of signals in FIG. 4L during a sense stage using a sense scheme based on threshold voltage shift.

[0028] FIG. 4M' is a chart showing values of signals in FIG. 4L during a sense stage using an alternative sense scheme based on a property (e.g., self-latching) of a built-in bipolar junction transistor (BJT).

[0029] FIG. 4N is a graph showing relationships between some signals in FIG. 4M.

[0100] FIG. 4O shows a schematic diagram of a portion of the memory device of FIG. 4A.

[0101] FIG. 4P is a chart showing values of signals in FIG. 4O during a reset stage.

[0030] FIG. 4Q shows a schematic diagram of a portion of the memory device of FIG. 4A.

[0031] FIG. 4R is a chart showing values of signals in FIG. 4Q during a restore stage.

[0032] FIG. 5A shows a schematic diagram of a portion of another memory device including memory cells having a memory cell structure from a single pillar, according to some embodiments described herein.

[0033] FIG. 5B shows a side view (e.g., cross-sectional view) of a structure of a portion of the memory device schematically shown in FIG. 5A, according to some embodiments described herein.

[0034] FIG. 5C shows a portion of the memory device of FIG. 5B.

[0035] FIG. 5D shows a schematic diagram of a portion of the memory device of FIG. 5A including two memory cells.

[0036] FIG. 5E is a chart showing example values of voltages provided to the signals of the portion of the memory device of FIG. 5D during three different example write operations, according to some embodiments described herein.
[0037] FIG. 5F is a flow chart showing different stages of a read operation of the memory device of FIG. 5A through FIG. 5C, according to some embodiments described herein.

[0038] FIG. 5G shows a schematic diagram of a portion of the memory device of FIG. 5A.

[0039] FIG. 5H is a chart showing values of signals in FIG. 5G during a pre-sense stage based on an impact ionization current mechanism.

[0040] FIG. 5H' is a chart showing values of signals in FIG. 5G during a pre-sense stage using an alternative pre-sense scheme based on a GIDL current mechanism.

[0041] FIG. 5I shows a schematic diagram of a portion of the memory device of FIG. 5A.

[0042] FIG. 5J is a chart showing values of signals in FIG. 5I during a sense stage using a sense scheme based on threshold voltage shift.

[0043] FIG. 5J' is a chart showing values of signals in FIG. 5I during a sense stage using an alternative sense scheme based on a property (e.g., self-latching) of a built-in bipolar junction transistor.

[0044] FIG. 5K shows a schematic diagram of a portion of the memory device of FIG. 5A.

[0045] FIG. 5L is a chart showing values of signals in FIG. 5K during a reset stage.

[0046] FIG. 5M shows a schematic diagram of a portion of the memory device of FIG. 5A.

[0047] FIG. 5N is a chart showing values of signals in FIG. 5M during a restore stage.

[0048] FIG. 6 shows a structure of a portion of a memory cell located along a segment of a pillar of a memory device, according to some embodiments described herein.

Detailed Description

[0049] The memory device described herein includes volatile memory cells that are arranged in a 3-D (three-dimensional) structure. In the 3-D structure, the memory cells are vertically stacked over each other in multiple levels of the memory device. Since the memory cells are vertically stacked, the storage density of the described memory device can be higher than that of a conventional volatile memory device for a given device area. The 3-D structure also allows an increase in storage density of the described memory device without aggressively reducing feature size (e.g., memory cell size). The memory device described herein can have an effective feature size of 2F² or less. Different variations of the described memory device are discussed in detail below with reference to FIG. 1 through FIG. 6.

[0050] FIG. 1 shows a block diagram of an apparatus in the form of a memory device 100 including volatile memory cells, according to some embodiments described herein. Memory device 100 includes a memory array 101, which can contain memory cells 102. Memory device 100 is a volatile memory device (e.g., a DRAM device), such that memory cells 102 are volatile memory cells. Thus, information stored in memory cells 102 may be lost (e.g., become invalid) if supply power (e.g., supply voltage VDD) is disconnected from memory device 100. Hereinafter, VDD is used to represent certain voltage levels; however, these levels are not limited to the supply voltage (e.g., VDD) of the memory device (e.g., memory device 100). For example, if the memory device (e.g., memory device 100) has an internal voltage generator (not shown in FIG. 1) that generates an internal voltage based on VDD, such an internal voltage may be used instead of VDD.
[0051] In a physical structure of memory device 100, memory cells 102 can be formed vertically (e.g., stacked over each other in different layers) in different levels over a substrate (e.g., a semiconductor substrate) of memory device 100. The structure of memory array 101 including memory cells 102 can include the structure of the memory arrays and memory cells described below with reference to FIG. 2A through FIG. 6.

[0052] As shown in FIG. 1, memory device 100 can include access lines 104 (or "word lines") and data lines (e.g., bit lines) 105. Memory device 100 can use signals (e.g., word line signals) on access lines 104 to access memory cells 102, and can use data lines 105 to provide information (e.g., data) to be stored in (e.g., written) or sensed (e.g., read) from memory cells 102.

[0053] Memory device 100 can include an address register 106 to receive address information ADDR (e.g., row address signals and column address signals) on lines (e.g., address lines) 107. Memory device 100 can include row access circuitry (e.g., an x-decoder) 108 and column access circuitry (e.g., a y-decoder) 109 that can operate to decode address information ADDR from address register 106. Based on the decoded address information, memory device 100 can determine which memory cells 102 are to be accessed during a memory operation. Memory device 100 can perform a write operation to store information in memory cells 102, and a read operation to read (e.g., sense) information (e.g., previously stored information) in memory cells 102. Memory device 100 can also perform an operation (e.g., a refresh operation) to refresh (e.g., to keep valid) the value of information stored in memory cells 102. Each of memory cells 102 can be configured to store information that can represent a binary 0 ("0") or a binary 1 ("1").

[0054] Memory device 100 can receive a supply voltage, including supply voltages VDD and Vss, on lines 130 and 132, respectively. Supply voltage Vss can operate at a ground potential (e.g., having a value of approximately zero volts). Supply voltage VDD can include an external voltage supplied to memory device 100 from an external power source such as a battery or alternating-current-to-direct-current (AC-DC) converter circuitry.

[0055] As shown in FIG. 1, memory device 100 can include a memory control unit 118 to control memory operations (e.g., read and write operations) of memory device 100 based on control signals on lines (e.g., control lines) 120. Examples of signals on lines 120 include a row access strobe signal RAS*, a column access strobe signal CAS*, a write-enable signal WE*, a chip select signal CS*, a clock signal CK, and a clock-enable signal CKE. These signals can be part of the signals provided to a dynamic random access memory (DRAM) device.

[0056] As shown in FIG. 1, memory device 100 can include lines (e.g., global data lines) 112 that can carry signals DQ0 through DQN. In a read operation, the value (e.g., logic 0 or logic 1) of information (read from memory cells 102) provided to lines 112 (in the form of signals DQ0 through DQN) can be based on the values of signals DL0 and DL0* through DLN and DLN* on data lines 105. In a write operation, the value (e.g., "0" (binary 0) or "1" (binary 1)) of the information provided to data lines 105 (to be stored in memory cells 102) can be based on the values of signals DQ0 through DQN on lines 112.
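As a rough, illustrative recap of the access path described in the preceding paragraphs (address decode by the x- and y-decoders, word line activation, and value transfer on the data lines), the following Python sketch models a simplified read access. All names, field widths, and the array layout are assumptions made for illustration only; they are not part of the described memory device.

```python
# Simplified, illustrative model of the access path described above:
# an address is split into row and column fields, the row selects a
# word line (access line 104), and the column selects data lines 105.
# All names and field widths here are assumptions for illustration only.

ROW_BITS = 3  # assumed widths for this toy model
COL_BITS = 2

def decode_address(addr):
    """Split ADDR into a row (word line) index and a column index."""
    row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)
    col = addr & ((1 << COL_BITS) - 1)
    return row, col

def read(array, addr):
    """Activate one word line, then select one data line's value."""
    row, col = decode_address(addr)
    active_word_line = array[row]   # row access circuitry (x-decoder)
    return active_word_line[col]    # column access/select circuitry (y-decoder)

# Toy memory array: 8 word lines x 4 data lines, storing bits.
memory = [[0] * (1 << COL_BITS) for _ in range(1 << ROW_BITS)]
memory[5][2] = 1
print(read(memory, (5 << COL_BITS) | 2))  # -> 1
```

For a write, the same decode path would run in the opposite direction: the value arriving on the selected data line would be stored at the decoded row and column position.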
[0057] Memory device 100 can include sensing circuitry 103, select circuitry 115, and input/output (I/O) circuitry 116. Column access circuitry 109 can selectively activate signals on lines (e.g., select lines) 114 based on address signals ADDR. Select circuitry 115 can respond to the signals on lines 114 to select signals on data lines 105. The signals on data lines 105 can represent the values of information to be stored in memory cells 102 (e.g., during a write operation) or the values of information read (e.g., sensed) from memory cells 102 (e.g., during a read operation).

[0058] I/O circuitry 116 can operate to provide information read from memory cells 102 to lines 112 (e.g., during a read operation) and to provide information from lines 112 (e.g., provided by an external device) to data lines 105 to be stored in memory cells 102 (e.g., during a write operation). Lines 112 can include nodes within memory device 100 or pins (or solder balls) on a package where memory device 100 can reside. Other devices external to memory device 100 (e.g., a memory controller or a processor) can communicate with memory device 100 through lines 107, 112, and 120.

[0059] Memory device 100 may include other components, which are not shown in order to help focus on the embodiments described herein. Memory device 100 can be configured to include at least a portion of the memory device with associated structures and operations described below with reference to FIG. 2A through FIG. 6.

[0060] One of ordinary skill in the art may recognize that memory device 100 may include other components, several of which are not shown in FIG. 1 so as not to obscure the example embodiments described herein. At least a portion of memory device 100 (e.g., a portion of memory array 101) can include structures similar to or identical to any of the memory devices described below with reference to FIG. 2A through FIG. 6.

[0061] FIG. 2A shows a schematic diagram of a portion of a memory device 200 including a memory array 201, according to some embodiments described herein. Memory device 200 can correspond to memory device 100 of FIG. 1. For example, memory array 201 can form part of memory array 101 of FIG. 1.

[0062] As shown in FIG. 2A, memory device 200 can include memory cells 210 through 217, which are volatile memory cells (e.g., DRAM cells). Each of memory cells 210 through 217 can include two transistors T1 and T2 and one capacitor 202, such that each of memory cells 210 through 217 can be called a 2T1C memory cell. For simplicity, the same labels T1 and T2 are given to the transistors of the different memory cells among memory cells 210 through 217, and the same label (i.e., 202) is given to the capacitor of the different memory cells among memory cells 210 through 217.

[0063] Memory cells 210 through 217 can be arranged in memory cell groups (e.g., strings) 201₀ and 201₁. Each of memory cell groups 201₀ and 201₁ can include the same number of memory cells. For example, memory cell group 201₀ can include four memory cells 210, 211, 212, and 213, and memory cell group 201₁ can include four memory cells 214, 215, 216, and 217. FIG. 2A shows four memory cells in each of memory cell groups 201₀ and 201₁ as an example; the number of memory cells in memory cell groups 201₀ and 201₁ can be different from four.

[0064] FIG. 2A shows directions x, y, and z that can correspond to the x, y, and z directions of the structure (physical structure) of memory device 200 shown in FIG. 2D through FIG. 2I.
As described in more detail below with reference to FIG. 2D through FIG. 2I, the memory cells in each of memory cell groups 201₀ and 201₁ can be formed vertically (e.g., stacked over each other in a vertical stack in the z-direction) over a substrate of memory device 200.

[0065] Memory device 200 (FIG. 2A) can perform a write operation to store information in memory cells 210 through 217, and a read operation to read (e.g., sense) information from memory cells 210 through 217. Each of memory cells 210 through 217 can be randomly selected during a read or write operation. During a write operation of memory device 200, information can be stored in the selected memory cell (or memory cells). During a read operation of memory device 200, information can be read from the selected memory cell (or memory cells).

[0066] As shown in FIG. 2A, memory device 200 can include decoupling components (e.g., isolation components) 281 through 286, which are not memory cells. A particular decoupling component among decoupling components 281 through 286 can stop a flow of current from going across that particular decoupling component (described in more detail below). In the physical structure of memory device 200, each of decoupling components 281 through 286 can be a component (e.g., a transistor) that is permanently turned off (e.g., always placed in a turned-off state). Alternatively, each of decoupling components 281 through 286 can be a dielectric material (e.g., silicon oxide) that can prevent a conduction of current through it.

[0067] As shown in FIG. 2A, memory device 200 can include a read data line (e.g., read bit line) 220 that can be shared by memory cell groups 201₀ and 201₁. Memory device 200 can include a common conductive line 290 coupled to memory cell groups 201₀ and 201₁. Common conductive line 290 can be coupled to ground during an operation (e.g., a read or write operation) of memory device 200.

[0068] Read data line 220 can carry a signal (e.g., read data line signal) BL_R₀. During a read operation of memory device 200, the value (e.g., current or voltage value) of signal BL_R₀ can be used to determine the value (e.g., "0" or "1") of information read (e.g., sensed) from a selected memory cell. The selected memory cell can be either from memory cell group 201₀ or memory cell group 201₁. During a read operation of memory device 200, the memory cells of memory cell group 201₀ and memory cell group 201₁ can be selected one at a time to provide information read from the selected memory cell.

[0069] Memory device 200 can include separate plate lines 250 through 257. Plate lines 250, 251, 252, and 253 can carry signals PL0₀, PL0₁, PL0₂, and PL0₃, respectively. Plate lines 254, 255, 256, and 257 can carry signals PL1₀, PL1₁, PL1₂, and PL1₃, respectively.

[0070] During a read operation of memory device 200, signals PL0₀, PL0₁, PL0₂, and PL0₃ on corresponding plate lines 250 through 253 can be provided with different voltages. Depending on the value of information stored in a selected memory cell, an amount (e.g., a predetermined amount) of current may or may not flow between read data line 220 and common conductive line 290 through memory cells 210, 211, 212, and 213. Based on the presence or absence of such an amount of current, memory device 200 can determine (e.g., by using a detection circuit (not shown in FIG. 2A)) the value (e.g., "0" or "1") of information stored in the selected memory cell.

[0071] As shown in FIG. 2A, memory device 200 can include read select lines 260 and 261 coupled to memory cell groups 201₀ and 201₁, respectively.
Read select lines 260 and 261 can carry signals (e.g., read select signals) RSL0 and RSL1, respectively. During a read operation of memory device 200, read select signals RSL0 and RSL1 can be selectively activated to couple a corresponding memory cell group (201₀ or 201₁) to read data line 220.

[0072] Memory device 200 can include select transistors 270 and 271 that can be controlled (e.g., turned on or turned off) by signals RSL0 and RSL1, respectively. Memory cell groups 201₀ and 201₁ can be selected one at a time during a read operation to read information from memory cells 210 through 217. For example, during a read operation, signal RSL0 can be activated (e.g., provided with a positive voltage) to turn on select transistor 270 and couple memory cell group 201₀ to read data line 220 if one of memory cells 210, 211, 212, and 213 is selected. In this example, signal RSL1 can be deactivated (e.g., provided with zero volts) to turn off select transistor 271 when signal RSL0 is activated, so that memory cell group 201₁ is not coupled to read data line 220. In another example, during a read operation, signal RSL1 can be activated (e.g., provided with a positive voltage) to turn on select transistor 271 and couple memory cell group 201₁ to read data line 220 if one of memory cells 214, 215, 216, and 217 is selected. In this example, signal RSL0 can be deactivated (e.g., provided with zero volts) when signal RSL1 is activated, so that memory cell group 201₀ is not coupled to read data line 220.

[0073] Memory device 200 can include write data lines (e.g., write bit lines) 231 and 232 that can be shared by memory cell groups 201₀ and 201₁. Write data lines 231 and 232 can carry signals BL_WA and BL_WB, respectively. During a write operation of memory device 200, signals BL_WA and BL_WB can be provided with voltages that can have values based on the value (e.g., "0" or "1") of information to be stored in a selected memory cell (or memory cells). Two memory cells within a group can share a write data line. For example, memory cells 210 and 211 can share write data line 231, and memory cells 212 and 213 can share write data line 232. In another example, memory cells 214 and 215 can share write data line 231, and memory cells 216 and 217 can share write data line 232.
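The group-selection and data-line-sharing rules just described can be restated as a small routing table. The sketch below assumes the example topology of FIG. 2A (two groups of four cells, with pairs of cells sharing write data lines 231 and 232); the function and dictionary names are hypothetical helpers for illustration only.

```python
# Illustrative model of read-select and shared write-data-line routing
# for the example topology of FIG. 2A (two groups of four cells).
# Names and the dictionary layout are assumptions for illustration.

GROUPS = {0: [210, 211, 212, 213], 1: [214, 215, 216, 217]}

def read_select_signals(selected_cell):
    """Activate RSL0 or RSL1 depending on which group holds the cell."""
    in_group0 = selected_cell in GROUPS[0]
    return {"RSL0": "on" if in_group0 else "off",
            "RSL1": "off" if in_group0 else "on"}

def write_data_line(selected_cell):
    """Each pair of cells in a group shares one write data line."""
    for cells in GROUPS.values():
        if selected_cell in cells:
            pair_index = cells.index(selected_cell) // 2
            return (231, 232)[pair_index]
    raise ValueError("unknown cell")

print(read_select_signals(212))  # {'RSL0': 'on', 'RSL1': 'off'}
print(write_data_line(212))      # 232
```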
[0074] Memory device 200 can include write word lines 240 through 247 (which can be part of the access lines of memory device 200). Write word lines 240, 241, 242, and 243 can carry signals WWL0₀, WWL0₁, WWL0₂, and WWL0₃, respectively. Write word lines 244, 245, 246, and 247 can carry signals WWL1₀, WWL1₁, WWL1₂, and WWL1₃, respectively.

[0075] During a write operation of memory device 200, write word lines 240, 241, 242, and 243 (associated with memory cell group 201₀) can be used to provide access to memory cells 210, 211, 212, and 213, respectively, in order to store information in the selected memory cell (or memory cells) in memory cell group 201₀.

[0076] During a write operation of memory device 200, write word lines 244, 245, 246, and 247 (associated with memory cell group 201₁) can be used to provide access to memory cells 214, 215, 216, and 217, respectively, in order to store information in the selected memory cell (or memory cells) in memory cell group 201₁.

[0077] Information stored in a particular memory cell (among memory cells 210 through 217) of memory device 200 can be based on the presence or absence of an amount (e.g., a predetermined amount) of charge in the capacitor 202 of that particular memory cell. The amount of charge placed on the capacitor 202 of a particular memory cell can be based on the values of the voltages provided to signals BL_WA and BL_WB during a write operation. During a read operation to read information from a selected memory cell, the presence or absence of an amount of current between read data line 220 and common conductive line 290 is based on the presence or absence of an amount of charge in the capacitor 202 of the selected memory cell.

[0078] FIG. 2A shows read data line 220 and write data lines 231 and 232 shared by two memory cell groups (e.g., 201₀ and 201₁) as an example. However, read data line 220 and write data lines 231 and 232 can be shared by other memory cell groups (not shown) of memory device 200 that are similar to memory cell groups 201₀ and 201₁ (e.g., memory cell groups in the y-direction).

[0079] Write word lines 240, 241, 242, and 243 can be shared by other memory cell groups (not shown) in the x-direction of memory device 200. Plate lines 250, 251, 252, and 253 can be shared by other memory cell groups (not shown) in the x-direction of memory device 200.

[0080] As shown in FIG. 2A, two memory cells (e.g., 212 and 213) of a same memory cell group (e.g., 201₀) can share a write data line (e.g., 232). Thus, the number of write data lines (e.g., two data lines in FIG. 2A) can be one half of the number of memory cells (e.g., four memory cells in FIG. 2A) in each memory cell group. For example, if each memory cell group in FIG. 2A has six memory cells, then memory device 200 can include three write data lines (similar to write data lines 231 and 232) shared by respective pairs of the six memory cells.

[0081] As shown in FIG. 2A, memory device 200 can include other elements, such as read data line 221 (and corresponding signal BL_RN), read select lines 262 and 263 (and corresponding signals RSL2 and RSL3), and select transistors 272 and 273. Such other elements are similar to those described above. Thus, for simplicity, detailed description of such other elements of memory device 200 is omitted from the description herein.

[0082] FIG. 2B shows a schematic diagram of a portion of the memory device 200 of FIG. 2A including memory cell group 201₀. As shown in FIG. 2B, the capacitor 202 can include capacitor plates (e.g., terminals) 202a and 202b. Capacitor plate 202a can form part of (or can be the same as) a storage node (e.g., a memory element) of a corresponding memory cell of memory device 200.
Capacitor plate 202a of a particular memory cell can hold a charge that can be used to represent the value (e.g., "0" or "1") of information stored in that particular memory cell. Capacitor plate 202a can be coupled to a terminal (e.g., source or drain) of transistor T2 through a conductive connection 203.

[0083] Capacitor plate 202b of capacitor 202 can also be the gate of transistor T1 of a corresponding memory cell. Thus, capacitor plate 202b of capacitor 202 and the gate of transistor T1 are the same element. The combination of capacitor 202 and transistor T1 can also be called a storage capacitor-transistor (e.g., a gain cell). During a write operation to store information in the memory cell (e.g., memory cell 213), the storage capacitor-transistor of memory device 200 can allow a relatively small amount of charge to be stored on capacitor plate 202a to represent the value (e.g., "1") of information stored in the memory cell. The relatively small amount of charge can allow the size of a memory cell of memory device 200 to be relatively small. During a read operation of reading information from the memory cell, the storage capacitor-transistor combination can operate to amplify the charge (e.g., current). Since the amount of charge is relatively small, the amplification (e.g., gain) of the charge can improve accuracy of the information read from the memory cell of memory device 200.

[0084] During a write operation of storing information in a selected memory cell (e.g., memory cell 213), charge can be provided to (or not provided to) capacitor plate 202a of the selected memory cell (e.g., memory cell 213), depending on the value of information to be stored in that selected memory cell. For example, if "0" (binary 0) is to be stored in memory cell 213 (the selected memory cell), then charge may not be provided to capacitor plate 202a. In this example, signal BL_WB on write data line 232 can be provided with zero volts (or alternatively a negative voltage), transistor T2 of memory cell 213 can be turned on, and transistor T2 of memory cell 212 can be turned off. In another example, if "1" (binary 1) is to be stored in memory cell 213 (the selected memory cell), then an amount (e.g., a predetermined amount) of charge can be provided to capacitor plate 202a of memory cell 213. In this example, signal BL_WB on write data line 232 can be provided with a positive voltage, transistor T2 of memory cell 213 can be turned on, and transistor T2 of memory cell 212 can be turned off.

[0085] During a read operation of reading (e.g., sensing) information previously stored in a selected memory cell (e.g., memory cell 212) of a memory cell group (e.g., 201₀), a voltage (e.g., V1 > 0) can be applied to the gates of transistors T1 of the unselected memory cells (e.g., memory cells 210, 211, and 213) of that memory cell group, such that the transistors T1 of the unselected memory cells are turned on regardless of the value of information stored in the unselected memory cells. Another voltage (e.g., V0 < V1) can be provided to the gate of transistor T1 of the selected memory cell. Transistor T1 of the selected memory cell may turn on or may remain turned off, depending on the value (e.g., "0" or "1") previously stored in the selected memory cell.
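The read mechanism just described amounts to a series test: current can flow between read data line 220 and common conductive line 290 only if every transistor T1 in the group conducts, and only the selected cell's T1 state depends on its stored bit. The following is a minimal sketch of that idea, assuming an idealized cell model (function and variable names are illustrative only):

```python
# Idealized model of the series read path described above. Unselected
# cells receive V1 on their T1 gates (on regardless of stored data);
# the selected cell receives V0, so its T1 conducts only as a function
# of the stored bit (per the example: a stored "0" lets T1 turn on,
# a stored "1" keeps it off).

def t1_conducts(stored_bit, is_selected):
    if not is_selected:
        return True            # V1 on the gate: on regardless of data
    return stored_bit == 0     # V0 on the gate: state depends on data

def read_current_flows(group_bits, selected_index):
    """Current flows only if every T1 in the series chain conducts."""
    return all(t1_conducts(bit, i == selected_index)
               for i, bit in enumerate(group_bits))

bits = [1, 0, 1, 0]                  # example contents of cells 210..213
print(read_current_flows(bits, 1))   # True: cell 211 stores "0", current flows
print(read_current_flows(bits, 2))   # False: cell 212 stores "1", no current
```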
[0086] During the read operation, signal BL_R₀ on read data line 220 can have different values depending on the state (e.g., turned on or turned off) of transistor T1 of the selected memory cell. Memory device 200 can detect the different values of signal BL_R₀ to determine the value of information stored in the selected memory cell. For example, in FIG. 2B, if memory cell 212 is selected to be read, then a voltage (e.g., zero volts) can be provided to signal PL0₂ (which controls the gate of transistor T1 of memory cell 212), and a voltage V1 can be applied to the gates of transistors T1 of memory cells 210, 211, and 213. In this example, depending on the value (e.g., binary 0 or binary 1) previously stored in memory cell 212, transistor T1 of memory cell 212 may turn on or may remain turned off. Memory device 200 can detect the different values of signal BL_R₀ to determine the value of information stored in memory cell 212.

[0087] FIG. 2C is a chart showing example values of voltages provided to the signals of memory device 200 of FIG. 2B during example write and read operations of memory device 200, according to some embodiments described herein. The signals in FIG. 2C (WWL0₀ through WWL0₃, PL0₀ through PL0₃, BL_WA, BL_WB, RSL0, and BL_R₀) are the same as those shown in FIG. 2B. As shown in FIG. 2C, in each of the write and read operations, the signals can be provided with voltages having specific values (in volts), depending upon which memory cell among memory cells 210, 211, 212, and 213 is selected. In FIG. 2C, memory cell 212 (shown in FIG. 2B) is assumed to be the selected (target) memory cell during a write operation and a read operation, and memory cells 210, 211, and 213 are not selected (unselected). The following description refers to FIG. 2B and FIG. 2C.

[0088] During a write operation of memory device 200 (FIG. 2C), signal WWL0₂ (associated with selected memory cell 212) can be provided with a voltage V1 (a positive voltage), such as WWL0₂ = V1, in order to turn on transistor T2 of memory cell 212. As an example, the value of voltage V1 can be greater than the supply voltage (e.g., VDD). Signals WWL0₀, WWL0₁, and WWL0₃ (associated with unselected memory cells 210, 211, and 213, respectively) can be provided with a voltage V0 (e.g., substantially equal to VDD), such as WWL0₀ = WWL0₁ = WWL0₃ = V0, in order to turn off transistors T2 of memory cells 210, 211, and 213. Information (e.g., "0" or "1") can be stored in memory cell 212 (through the turned-on transistor T2 of memory cell 212) by providing a voltage VBL_W to signal BL_WB. The value of voltage VBL_W can be based on the value of information to be stored in memory cell 212. For example, voltage VBL_W can have one value (e.g., VBL_W = 0V or VBL_W < 0V) if "0" is to be stored in memory cell 212, and voltage VBL_W can have another value (e.g., VBL_W > 0V, such as VBL_W = 1V) if "1" is to be stored in memory cell 212.

[0089] The other signals of memory device 200 during a write operation can be provided with voltages as shown in FIG. 2C. For example, each of signals PL0₀, PL0₁, PL0₂, and PL0₃ (associated with both selected and unselected memory cells) can be provided with voltage V0, and each of signals BL_WA, RSL0, and BL_R₀ can be provided with voltage V0.

[0090] The values of the voltages applied to the signals of FIG. 2C during a write operation can be used for any selected memory cell of memory cell group 201₀ (FIG. 2B). For example, if memory cell 213 is selected (memory cells 210, 211, and 212 are unselected) during a write operation, then the values of voltages provided to signals WWL0₂ and WWL0₃ in FIG. 2C
can be swapped (e.g., WWL0₂ = V0, and WWL0₃ = V1), and the other signals can remain at the values shown in FIG. 2C.

[0091] In another example, if memory cell 210 is selected (memory cells 211, 212, and 213 are unselected) during a write operation, the values of voltages provided to signals WWL0₀ and WWL0₂ in FIG. 2C can be swapped (e.g., WWL0₀ = V1, and WWL0₂ = V0), the values of voltages provided to signals BL_WA and BL_WB in FIG. 2C can be swapped (e.g., BL_WA = VBL_W, and BL_WB = V0), and the other signals can remain at the values shown in FIG. 2C.

[0092] In another example, if memory cell 211 is selected (memory cells 210, 212, and 213 are unselected) during a write operation, the values of voltages provided to signals WWL0₁ and WWL0₂ in FIG. 2C can be swapped (e.g., WWL0₁ = V1, and WWL0₂ = V0), the values of voltages provided to signals BL_WA and BL_WB in FIG. 2C can be swapped (e.g., BL_WA = VBL_W, and BL_WB = V0), and the other signals can remain at the values shown in FIG. 2C.

[0093] As shown in FIG. 2B, memory cells 210 and 211 can share write data line 231, and memory cells 212 and 213 can share write data line 232 (which is different from write data line 231). In this configuration, two memory cells associated with different write data lines can be concurrently (e.g., simultaneously) selected during the same write operation to store (e.g., concurrently store) information in the two selected memory cells. For example, in a write operation, memory cells 210 and 212 can be concurrently selected; memory cells 210 and 213 can be concurrently selected; memory cells 211 and 212 can be concurrently selected; and memory cells 211 and 213 can be concurrently selected. As an example, if memory cells 210 and 212 are selected (e.g., concurrently selected) in a write operation, then the values of the voltages can be provided such that WWL0₀ = WWL0₂ = V1 (to turn on transistors T2 of memory cells 210 and 212), WWL0₁ = WWL0₃ = V0 (to turn off transistors T2 of memory cells 211 and 213), and the other signals can remain at the values shown in FIG. 2C. In this example, the values of information to be stored in selected memory cells 210 and 212 can be the same (e.g., by providing the same voltage to signals BL_WA and BL_WB) or can be different (e.g., by providing different voltages to signals BL_WA and BL_WB).
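The write-biasing rules of the preceding paragraphs can be restated as a small table-building routine. The sketch below is a simplified model of the FIG. 2C write column, assuming the example voltages discussed above (V1 on the selected write word line, V0 elsewhere, and VBL_W derived from the data bit); the function and dictionary names, and the numeric values, are illustrative assumptions only.

```python
# Illustrative builder for the write-operation bias conditions of FIG. 2C.
# Cells 210/211 share write data line BL_WA; cells 212/213 share BL_WB.
# Voltage values are assumed for illustration (the text only requires
# V1 to be a positive voltage, e.g., greater than VDD).

V0, V1 = 0.0, 2.5
WWL = {210: "WWL0_0", 211: "WWL0_1", 212: "WWL0_2", 213: "WWL0_3"}
BLW = {210: "BL_WA", 211: "BL_WA", 212: "BL_WB", 213: "BL_WB"}

def write_biases(selection):
    """selection maps each selected cell to the bit to store, e.g. {212: 1}."""
    biases = {name: V0 for name in WWL.values()}
    biases.update({"BL_WA": V0, "BL_WB": V0, "RSL0": V0, "BL_R0": V0})
    biases.update({f"PL0_{i}": V0 for i in range(4)})
    for cell, bit in selection.items():
        biases[WWL[cell]] = V1                   # turn on T2 of the selected cell
        biases[BLW[cell]] = 1.0 if bit else 0.0  # VBL_W encodes the data bit
    return biases

print(write_biases({212: 1}))          # single-cell write
print(write_biases({210: 0, 212: 1}))  # concurrent write on different data lines
```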
[0094] The following description discusses an example read operation of memory device 200 of FIG. 2B. As assumed above, during a read operation, memory cell 212 (FIG. 2B) is the selected memory cell and memory cells 210, 211, and 213 are unselected memory cells. In the description herein, specific values for the voltages are used as an example; however, the voltages can have different values. During a read operation (FIG. 2C), signals WWL0₀, WWL0₁, WWL0₂, and WWL0₃ can be provided with a voltage V0 (e.g., WWL0₀ = WWL0₁ = WWL0₂ = WWL0₃ = V0) because transistors T2 of memory cells 210, 211, 212, and 213 can remain turned off (or may not need to be turned on) in a read operation. Signal PL0₂ (associated with selected memory cell 212) can be provided with a voltage V0. Signals PL0₀, PL0₁, and PL0₃ (associated with unselected memory cells 210, 211, and 213, respectively) can be provided with a voltage V2, such that PL0₀ = PL0₁ = PL0₃ = V2. As an example, the value of voltage V2 can be substantially equal to VDD.

[0095] The other signals of memory device 200 during a read operation can be provided with voltages as shown in FIG. 2C. For example, signal RSL0 can be provided with a voltage V2 (to turn on select transistor 270), and each of signals BL_WA and BL_WB can be provided with voltage V0.

[0096] Based on the applied voltage V2 shown in FIG. 2C, transistors T1 of memory cells 210, 211, and 213 can turn on (regardless of (e.g., independent of) the value of information stored in memory cells 210, 211, and 213). Based on the applied voltage V0, transistor T1 of memory cell 212 may turn on or may remain turned off (may not turn on). For example, transistor T1 of memory cell 212 may turn on if the information stored in memory cell 212 is "0" and turn off (or remain turned off) if the information stored in memory cell 212 is "1". If transistor T1 of memory cell 212 is turned on, an amount of current may flow on a current path between read data line 220 and common conductive line 290 (through the turned-on transistors T1 of each of memory cells 210, 211, 212, and 213). If transistor T1 of memory cell 212 remains turned off (or is turned off), an amount of current may not flow between read data line 220 and common conductive line 290 (e.g., because no conductive path may form through transistor T1 of memory cell 212, which is turned off).

[0097] In FIG. 2C, signal BL_R₀ can have a voltage VBL_R. The value of voltage VBL_R can be based on the presence or absence of current (e.g., an amount of current) flowing between read data line 220 and common conductive line 290 (the presence or absence of current is based on the value of information stored in memory cell 212). For example, the value of voltage VBL_R can be 0 < VBL_R < 1V (or VBL_R = 1V) if the information stored in memory cell 212 is "1", and the value of voltage VBL_R can be VBL_R = 0 if the information stored in memory cell 212 is "0". Based on the value of voltage VBL_R associated with signal BL_R₀, memory device 200 can determine the value of information stored in memory cell 212 during this example read operation.

[0098] The description above assumes memory cell 212 is the selected memory cell during a read operation. The values of the signals in the chart shown in FIG. 2C can be similar if other memory cells (210, 211, and 213) of memory device 200 are selected. For example, if memory cell 210 is selected, signals PL0₀, PL0₁, PL0₂, and PL0₃ can be provided with voltages V0, V2, V2, and V2, respectively; if memory cell 211 is selected, signals PL0₀, PL0₁, PL0₂, and PL0₃ can be provided with voltages V2, V0, V2, and V2, respectively; and if memory cell 213 is selected, signals PL0₀, PL0₁, PL0₂, and PL0₃ can be provided with voltages V2, V2, V2, and V0, respectively. In this example, the other signals can remain at the values shown in FIG. 2C.

[0099] The memory cells (e.g., memory cells 210, 211, 212, and 213) of memory device 200 can be randomly selected during a write operation or a read operation. Alternatively, the memory cells (e.g., memory cells 210, 211, 212, and 213) of memory device 200 can be sequentially selected during a write operation, a read operation, or both.
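Analogously to the write sketch above, the read biasing of the preceding paragraphs can be summarized in a short illustrative routine: the selected cell's plate line gets V0, all other plate lines get V2, and the sensed BL_R₀ voltage is mapped back to a data bit (VBL_R > 0 read as "1", per the example values above). All names and numeric values are assumptions for illustration.

```python
# Illustrative model of the read-operation bias conditions and the
# sense decision described above.

V0, V2 = 0.0, 1.2           # assumed values; the text sets V2 ~ VDD
PLATE_LINES = {210: "PL0_0", 211: "PL0_1", 212: "PL0_2", 213: "PL0_3"}

def read_biases(selected_cell):
    """Selected plate line at V0; unselected plate lines at V2."""
    biases = {pl: V2 for pl in PLATE_LINES.values()}
    biases[PLATE_LINES[selected_cell]] = V0
    biases.update({"RSL0": V2, "BL_WA": V0, "BL_WB": V0})
    return biases

def sense(vbl_r):
    """Map the sensed BL_R0 voltage to a stored bit (VBL_R > 0 -> "1")."""
    return 1 if vbl_r > 0 else 0

print(read_biases(212))   # PL0_2 -> 0.0, other plate lines -> 1.2
print(sense(0.8))         # 1
print(sense(0.0))         # 0
```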
[00100] FIG. 2D shows a side view (e.g., cross-sectional view) of a structure of a portion of memory device 200 schematically shown in FIG. 2B, in which the memory cell structure of each of memory cells 210, 211, 212, and 213 can include parts from a double-pillar, according to some embodiments described herein. For simplicity, cross-sectional lines (e.g., hatch lines) are omitted from most of the elements shown in the drawings described herein.

[00101] As shown in FIG. 2D, memory device 200 can include a substrate 299 over which memory cells 210, 211, 212, and 213 can be formed (e.g., formed vertically) in different levels (physical internal levels) of memory device 200 with respect to the z-direction. Substrate 299 can include monocrystalline (also referred to as single-crystal) semiconductor material. For example, substrate 299 can include monocrystalline silicon (also referred to as single-crystal silicon). The monocrystalline semiconductor material of substrate 299 can include impurities, such that substrate 299 can have a specific conductivity type (e.g., n-type or p-type). Substrate 299 can include circuitry 295 formed in substrate 299. Circuitry 295 can include sense amplifiers (which can be similar to sensing circuitry 103 of FIG. 1), decoder circuitry (which can be similar to row and column access circuitry 108 and 109 of FIG. 1), and other circuitry of a memory device (e.g., a DRAM device) such as memory device 100.

[00102] Memory device 200 can include pillars (e.g., semiconductor material pillars) 301 and 302 having lengths extending in a direction perpendicular to (e.g., outwardly from) substrate 299 in the z-direction. The z-direction can be a vertical direction of memory device 200, which is a direction between common conductive line 290 and read data line 220. As shown in FIG. 2D, pillars 301 and 302 are parallel with each other in the z-direction. As described in more detail below, each of memory cells 210, 211, 212, and 213 has a memory cell structure that includes parts of both pillars (a double-pillar) 301 and 302.

[00103] In FIG. 2D, portions labeled "n+" are n-type semiconductor material portions (n-type semiconductor material regions). The material of the n+ portions includes semiconductor material (e.g., silicon) that can be doped (e.g., implanted) with dopants (e.g., impurities), such that the n+ portions are conductively doped portions (doped regions) that can conduct current. Portions labeled "P_Si" can be semiconductor material (e.g., silicon) having a different type (e.g., conductivity type) from the n+ portions. Portions P_Si can be p-type semiconductor material (p-type semiconductor material regions). For example, portions P_Si can be p-type polysilicon portions. As described below, when a voltage is applied to a conductive element (e.g., a write word line) adjacent a particular portion P_Si, a channel (e.g., a conductive path) can be formed in that particular portion P_Si and electrically connect that particular P_Si portion with the two n+ portions adjacent that particular portion P_Si.

[00104] As shown in FIG. 2D, each of pillars 301 and 302 can include different segments, in which each of the segments can include an n+ portion, a P_Si portion, or a combination of an n+ portion and a P_Si portion. For example, as shown in FIG. 2D, pillar 301 can have a segment that includes a portion 301a (n+ portion) and a portion 301d (P_Si portion) adjacent the structure (e.g., the material) of capacitor plate 202a of memory cell 213. In another example, pillar 301 can have a segment that includes a portion 301c (n+ portion) and a portion 301e (P_Si portion) adjacent the structure (e.g., the material) of capacitor plate 202a of memory cell 212. In a further example, pillar 301 can have a segment that includes a portion 301b (n+ portion) adjacent portion 301d (P_Si portion). FIG. 2D also shows pillar 302 having portions 302a, 302b, and 302c (n+ portions) and portions 302d and 302e (P_Si portions) included in respective segments of pillar 302.
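As a compact way to see how transistors fall out of this segment layout, the following sketch models a pillar as an ordered list of doped portions and recovers each n+/P_Si/n+ triple as a transistor (source, body, drain), in the way the next two paragraphs describe. The list-of-tuples representation is an illustrative assumption, not a statement about any actual layout data.

```python
# Illustrative model: a pillar as an ordered stack of doped portions.
# Each n+ / P_Si / n+ triple yields one transistor (source, body, drain),
# mirroring how transistors T1 are described along pillar 301.

pillar_301 = [("301a", "n+"), ("301d", "P_Si"), ("301b", "n+"),
              ("301e", "P_Si"), ("301c", "n+")]

def transistors(pillar):
    """Yield (source, body, drain) triples along the pillar."""
    for i in range(len(pillar) - 2):
        (s, st), (b, bt), (d, dt) = pillar[i:i + 3]
        if (st, bt, dt) == ("n+", "P_Si", "n+"):
            yield s, b, d

for src, body, drain in transistors(pillar_301):
    print(f"transistor: source={src} body={body} drain={drain}")
# -> 301a/301d/301b for T1 of memory cell 213,
#    301b/301e/301c for T1 of memory cell 212
```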
[00105] Each of transistors T1 can include parts of a combination of a particular portion P_Si of pillar 301 and the two n+ portions of pillar 301 adjacent that particular P_Si portion. For example, portion 301d (P_Si portion) and portions 301a and 301b (n+ portions) can form parts of the body, source, and drain, respectively, of transistor T1 of memory cell 213. In another example, portion 301e (P_Si portion) and portions 301b and 301c (n+ portions) can form parts of the body, source, and drain, respectively, of transistor T1 of memory cell 212.

[00106] Each of transistors T2 can include a combination of parts of a particular portion P_Si of pillar 302 and the two n+ portions of pillar 302 adjacent that particular P_Si portion. For example, portion 302d (P_Si portion) and portions 302a and 302b (n+ portions) can form parts of the body, source, and drain, respectively, of transistor T2 of memory cell 213. In another example, portion 302e (P_Si portion) and portions 302b and 302c (n+ portions) can form parts of the body, source, and drain, respectively, of transistor T2 of memory cell 212.

[00107] As shown in FIG. 2D, the memory cell structures of memory cells 212 and 213 can include conductive materials 312 and 313, respectively. Examples of each of conductive materials 312 and 313 include polysilicon (e.g., conductively doped polysilicon), metals, or other conductive materials.

[00108] Conductive material 312 can include a portion that forms part of capacitor plate 202a of memory cell 212, a portion that contacts (e.g., is electrically connected to, i.e., directly coupled to) portion 302c (n+ portion) of pillar 302, and a portion that forms part of conductive connection 203 of memory cell 212.

[00109] Conductive material 313 can include a portion that forms part of capacitor plate 202a of memory cell 213, a portion that contacts (e.g., is electrically connected to) portion 302a (n+ portion) of pillar 302, and a portion that forms part of conductive connection 203 of memory cell 213.

[00110] The memory cell structure of each of memory cells 210 and 211 is similar to the memory cell structures of memory cells 212 and 213, as shown in FIG. 2D. For simplicity, detailed description of the memory cell structures of memory cells 210 and 211 is omitted from the description of FIG. 2D.

[00111] As shown in FIG. 2D, memory device 200 can include a dielectric (e.g., dielectric material) 304 that can extend continuously along the length and a sidewall of pillar 301. Capacitor plate 202a of each of memory cells 210, 211, 212, and 213 can be separated (e.g., electrically isolated) from pillar 301 by dielectric 304.

[00112] Memory device 200 can include dielectrics (e.g., dielectric materials) 305. Capacitor plate 202a of each of memory cells 210, 211, 212, and 213 can be separated (e.g., electrically isolated) from a respective plate line (among plate lines 250, 251, 252, and 253) by one of dielectrics 305.

[00113] Memory device 200 can include dielectrics (e.g., dielectric materials) 306 and 307 located at respective locations (adjacent respective segments) of pillar 302, as shown in FIG. 2D. Each of write word lines 240, 241, 242, and 243 can be separated (e.g., electrically isolated) from pillar 302 by a respective dielectric among dielectrics 306. Each of write data lines 231 and 232 can contact (e.g., electrically connect to) a respective n+ portion of pillar 302.
Each of plate lines 250, 251, 252, and 253 can be separated (e.g., electrically isolated) from pillar 302 by a respective dielectric among dielectrics 307.

[00114] Dielectrics 304, 305, 306, and 307 can be formed from the same dielectric material or from different dielectric materials. For example, dielectrics 304, 305, 306, and 307 can be formed from silicon dioxide. In another example, dielectrics 304, 306, and 307 can be formed from silicon dioxide, and dielectric 305 can be formed from a dielectric material having a dielectric constant greater than the dielectric constant of silicon dioxide.

[00115] As shown in FIG. 2D, each of read select line 260, write word lines 240 through 243, and plate lines 250 through 253 can have a length in the x-direction, which is perpendicular to the z-direction. Each of read data line 220 and write data lines 231 and 232 can have a length in the y-direction (not shown), which is perpendicular to the x-direction.

[00116] Common conductive line 290 can include a conductive material (e.g., a conductive region) and can be formed over a portion of substrate 299 (e.g., by depositing a conductive material over substrate 299). Alternatively, common conductive line 290 can be formed in or on a portion of substrate 299 (e.g., by doping a portion of substrate 299).

[00117] Memory device 200 can include a conductive portion 293, which can include conductively doped polysilicon, metals, or other conductive materials. Conductive portion 293 can be coupled to ground (not shown). Although common conductive line 290 can be coupled to ground, connecting pillar 301 to ground through conductive portion 293 may further improve a conductive path (e.g., current path) between read data line 220 and ground during a read operation of memory device 200.

[00118] As shown in FIG. 2D, each of decoupling components 281, 282, and 283 can include a P_Si portion of pillar 302, a portion of one of dielectrics 307, and a portion of a conductive line among conductive lines 281a, 282a, and 283a. Examples of conductive lines 281a, 282a, and 283a include conductively doped polysilicon, metals, or other conductive materials. Decoupling components 281, 282, and 283 are in a "turned-off" state (e.g., permanently turned off (always off)) during operations (e.g., write and read operations) of memory device 200.

[00119] As mentioned above with reference to FIG. 2A, each of decoupling components 281 through 286 can be permanently placed in a turned-off state. The turned-off state of each of decoupling components 281, 282, and 283 can prevent current (e.g., stop current) from flowing from one location to another location across each of decoupling components 281, 282, and 283. This can create an electrical separation between elements associated with pillar 302 where current flow between such elements is undesirable. For example, decoupling component 282 in FIG. 2D can create an electrical separation between write data lines 231 and 232. This separation prevents information intended for storing in a selected memory cell from being stored in an unselected memory cell. For example, decoupling component 282 can prevent information from write data line 231 intended to be stored in selected memory cell 211 from being stored in unselected memory cell 212, and can prevent information from write data line 232 intended to be stored in selected memory cell 212 from being stored in unselected memory cell 211.
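The role of the decoupling components can be viewed as cutting the pillar-302 conduction path into per-data-line sections. The following is a minimal connectivity sketch under a toy assumption: pillar 302 is represented as a chain of nodes with decoupling component 282 sitting between the sections served by write data lines 231 and 232. All node names are illustrative placeholders, not reference labels from the figures.

```python
# Toy connectivity model of pillar 302: decoupling component 282 is
# permanently off, splitting the pillar into one section contacted by
# write data line 231 and another contacted by write data line 232.

def sections(chain, decouplers):
    """Split the node chain wherever a permanently-off decoupler sits."""
    parts, current = [], []
    for node in chain:
        if node in decouplers:
            parts.append(current)
            current = []
        else:
            current.append(node)
    parts.append(current)
    return parts

def connected(chain, decouplers, a, b):
    """True if nodes a and b lie in the same uninterrupted section."""
    return any(a in part and b in part for part in sections(chain, decouplers))

pillar_302 = ["231_contact", "T2_of_cell_211", "282", "T2_of_cell_212", "232_contact"]

print(connected(pillar_302, {"282"}, "231_contact", "T2_of_cell_211"))  # True
print(connected(pillar_302, {"282"}, "231_contact", "T2_of_cell_212"))  # False
```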
For example, decoupling component 282 can prevent information from write data line 231 intended to be stored in selected memory cell 211 from being stored in unselected memory cell 212, and can prevent information from write data line 232 intended to be stored in selected memory cell 212 from being stored in unselected memory cell 211.

[00120] In an alternative structure of memory device 200, decoupling components 281, 282, and 283 can have structures different from the structures shown in FIG. 2D, as long as each of decoupling components 281, 282, and 283 can serve as an electrical isolation component. For example, in such an alternative structure, each of decoupling components 281, 282, and 283 can include a dielectric material in a respective portion of pillar 302. In this example, each of portions 302f, 302g, and 302h can be a dielectric portion (e.g., a silicon oxide portion).

[00121] In FIG. 2D, each of read data line 220, write data lines 231 and 232, read select line 260, write word lines 240 through 243, plate lines 250 through 253, and capacitor plates 202a can be formed from a conductive material (or a combination of conductive materials). Examples of such a conductive material include polysilicon (e.g., conductively doped polysilicon), metals, or other conductive materials.

[00122] As shown in FIG. 2D, conductive material 313, and other elements (e.g., plate lines, write word lines, and write data lines), can be located along respective segments of pillars 301 and 302. For example, conductive material 313 can include a portion (the portion that forms part of capacitor plate 202a of memory cell 213) located along the segment of pillar 301 that includes portions 301a and 301d. Conductive material 313 can also include a portion that contacts portion 302a (n+ portion) of pillar 302. In another example, conductive material 312 can include a portion (the portion that forms part of capacitor plate 202a of memory cell 212) located along the segment of pillar 301 that includes portions 301c and 301e. Conductive material 312 can also include a portion that contacts portion 302c (n+ portion) of pillar 302. The conductive materials of plate lines 250 through 253, write word lines 240 through 243, and write data lines 231 and 232 can be located along respective segments of pillars 301 and 302, as shown in FIG. 2D.

[00123] In FIG. 2D, lines 2E, 2F, 2G, 2H, and 2I are sectional lines. As described below, some portions (e.g., partial top views) of memory device 200 taken from lines 2E, 2F, 2G, 2H, and 2I are shown in FIG. 2E, FIG. 2F, FIG. 2G, FIG. 2H, and FIG. 2I, respectively.

[00124] FIG. 2E shows a portion (e.g., partial top view) of memory device 200 including some elements viewed from line 2E of FIG. 2D down to substrate 299 of FIG. 2D, according to some embodiments described herein. For simplicity, detailed description of the same elements shown in FIG. 2A through FIG. 2D (and other figures described below) is not repeated.

[00125] For purposes of illustrating relative locations of some of the elements of memory device 200 (e.g., memory cells 213 and 217), FIG. 2E shows the locations of some elements of memory device 200 that are schematically shown in FIG. 2C but not structurally shown in FIG. 2D. For example, FIG. 2E shows memory cell 217 (FIG. 2A), read select line 261 (FIG. 2C), plate line 257 (FIG. 2C), and write word line 247 (FIG. 2C), which are schematically shown in FIG. 2C but not structurally shown in FIG. 2D. In another example, FIG.
2E shows an X-decoder and a Y-decoder that are not shown in FIG. 2D. The X-decoder and the Y-decoder in FIG. 2E can be part of circuitry 295 in substrate 299 (FIG. 2D) of memory device 200. The X-decoder and the Y-decoder (FIG. 2E) can be part of respective row and column access circuitry of memory device 200.

[00126] As shown in FIG. 2E, each of read select line 260, plate line 253 (located below (underneath) read select line 260 with respect to the z-direction), and write word line 243 (located below (underneath) plate line 253 with respect to the z-direction) can have a length extending in the x-direction. FIG. 2E does not show write word lines 242, 241, and 240 (FIG. 2D), which are located below write word line 243.

[00127] Similarly, in FIG. 2E, each of read select line 261, plate line 257 (located below read select line 261 with respect to the z-direction), and write word line 247 (located below plate line 257 with respect to the z-direction) can have a length extending in the x-direction. FIG. 2E does not show write word lines 244, 245, and 246 (FIG. 2A), which are located below write word line 247.

[00128] As shown in FIG. 2E, each of read data line 220, write data line 232, and write data line 231 (located below write data line 232 in the z-direction) can have a length extending in the y-direction.

[00129] FIG. 2F shows a portion (e.g., partial top view) of memory device 200 including some elements viewed from line 2F of FIG. 2D down to substrate 299 of FIG. 2D, according to some embodiments described herein. As shown in FIG. 2F, portion 301a (a segment of pillar 301 that includes an n+ portion) can include a sidewall 301a' (e.g., circular sidewall). Dielectric 304 can include a sidewall 304' (e.g., circular sidewall). Capacitor plate 202a (formed from a portion of conductive material 313 in FIG. 2D) can include a sidewall 202a' (e.g., circular sidewall). Dielectric 305 can include a sidewall 305' (e.g., circular sidewall).

[00130] Dielectric 304 can include a portion surrounding sidewall 301a'. Capacitor plate 202a can include a portion surrounding sidewall 304' of dielectric 304. Dielectric 305 can include a portion surrounding sidewall 202a' of capacitor plate 202a. The conductive material of plate line 253 can include a portion surrounding sidewall 305' of dielectric 305.

[00131] FIG. 2G shows a portion (e.g., partial top view) of memory device 200 including some elements viewed from line 2G of FIG. 2D down to substrate 299 of FIG. 2D, according to some embodiments described herein. As shown in FIG. 2G, conductive material 313 can include a portion that forms capacitor plate 202a and a portion contacting (e.g., electrically connected to) portion 302a (n+ portion) of pillar 302. Conductive material 313 can also include a portion that forms part of conductive connection 203.

[00132] FIG. 2H shows a portion (e.g., partial top view) of memory device 200 including some elements viewed from line 2H of FIG. 2D down to substrate 299 of FIG. 2D, according to some embodiments described herein. As shown in FIG. 2H, write word line 243 (which is formed from a conductive material) can include a portion separated from portion 301b of pillar 301 by dielectric 304, and a portion separated from portion 302d of pillar 302 by a dielectric 306.

[00133] FIG. 2I shows a portion (e.g., partial top view) of memory device 200 including some elements viewed from line 2I of FIG. 2D down to substrate 299 of FIG. 2D, according to some embodiments described herein. As shown in FIG.
2I, decoupling component 281 can include a portion of conductive line 281a separated from portion 302f (P_Si portion) of pillar 302 by dielectric 307. Conductive portion 293 can contact (e.g., be electrically connected to) an n+ portion of pillar 301.

[00134] As described above with reference to FIG. 2A through FIG. 2I, memory device 200 can include memory cells (e.g., 210, 211, 212, and 213) stacked over a substrate (e.g., substrate 299). The memory cells (e.g., 210, 211, 212, and 213) can be grouped into individual memory cell groups, in which memory device 200 can include multiple (e.g., two) write data lines (e.g., 231 and 232) associated with each memory cell group to provide information to be stored in respective memory cells within each memory cell group.

[00135] In an alternative structure, memory device 200 can have more than two write data lines associated with each of memory cell groups 201₀ and 201₁. For example, in such an alternative structure, memory device 200 can include four write data lines separately coupled to memory cells 210, 211, 212, and 213, such that each of the four write data lines can be coupled to a respective memory cell among memory cells 210, 211, 212, and 213. The four write data lines can be shared between memory cell groups 201₀ and 201₁. In this alternative structure (e.g., four write data lines), memory cell groups 201₀ and 201₁ can share a read data line like read data line 220 shown in FIG. 2A.

[00136] Memory device 200 can include other variations (e.g., a single write data line associated with each memory cell group). One such variation is described in detail with reference to FIG. 3A through FIG. 3F.

[0102] FIG. 3A shows a schematic diagram of a portion of a memory device 300 that can be a variation of memory device 200 of FIG. 2A, according to some embodiments described herein. Memory device 300 can include elements that are similar to or identical to the elements of memory device 200. For simplicity, similar or identical elements between memory devices 200 and 300 are given the same reference labels.

[00137] As shown in FIG. 3A, memory device 300 includes one (e.g., only a single) write data line (e.g., write data line 330) for each of memory cell groups 201₀ and 201₁. As a comparison, memory device 200 includes more than one write data line (e.g., two write data lines 231 and 232) for each of memory cell groups 201₀ and 201₁. In FIG. 3A, write data line 330 can carry signal BL_W0. Write data line 330 can be shared by memory cell groups 201₀ and 201₁ of memory device 300.

[00138] FIG. 3B shows a schematic diagram of a portion of memory device 300 of FIG. 3A including memory cell group 201₀. As shown in FIG. 3B, memory cells 210, 211, 212, and 213 can be coupled between write data line 330 and common conductive line 290.

[00139] Memory device 300 can perform a write operation to store information in memory cells 210, 211, 212, and 213. The write operation in memory device 300 can be a sequential write operation, such that information can be sequentially stored in memory cells 210, 211, 212, and 213. For example, in the sequential write operation, memory cells 210, 211, 212, and 213 can be selected to store information one at a time in an order (e.g., a sequential order) starting at memory cell 210 and ending at memory cell 213. In this sequential order, memory cell 210 can be the first memory cell of memory cell group 201₀ selected to store information, and memory cell 213 can be the last memory cell of memory cell group 201₀ selected to store information. This means that memory device 300 may store information in memory cell 211 after (e.g., only after) information has been stored in memory cell 210, may store information in memory cell 212 after (e.g., only after) information has been stored in memory cells 210 and 211, and may store information in memory cell 213 after (e.g., only after) information has been stored in memory cells 210, 211, and 212. A sketch of this ordering constraint follows.
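To make the sequential-write constraint concrete, the following is a minimal illustrative sketch, not part of the disclosure; the class and method names are hypothetical and simply enforce the in-order selection described in paragraph [00139].

```python
# Illustrative sketch only: models the sequential write order of memory
# cell group 201_0 (cells 210 -> 211 -> 212 -> 213) described above.
# The class and method names are hypothetical.

class SequentialWriteGroup:
    ORDER = [210, 211, 212, 213]  # first-to-last write order

    def __init__(self):
        self.next_index = 0  # index of the next cell eligible to store information

    def write(self, cell, bit):
        expected = self.ORDER[self.next_index]
        if cell != expected:
            raise ValueError(
                f"cell {cell} cannot be written yet; cell {expected} is next"
            )
        self.next_index += 1
        print(f"stored {bit!r} in memory cell {cell}")

group = SequentialWriteGroup()
group.write(210, "1")    # ok: first in order
group.write(211, "0")    # ok: 210 already written
# group.write(213, "1")  # would raise: 212 has not been written yet
```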
[00140] During a write operation of memory device 300, information to be stored in a selected memory cell among memory cells 210, 211, 212, and 213 can be provided from write data line 330. The value (e.g., "0" or "1") of the information to be stored in the selected memory cell can be based on the value of the voltage provided to signal BL_W0.

[00141] Memory device 300 can perform a read operation to read (e.g., sense) information from memory cells 210, 211, 212, and 213. The read operation in memory device 300 can be similar to the read operation (e.g., a random read operation) of memory device 200 of FIG. 2A. For example, during a read operation of memory device 300, information read from a selected memory cell among memory cells 210, 211, 212, and 213 can be provided to read data line 220. Signal BL_R0 on read data line 220 can have different values depending on the value (e.g., binary 0 or binary 1) stored in the selected memory cell. Memory device 300 can detect the different values of signal BL_R0 to determine the value of information stored in the selected memory cell.

[00142] FIG. 3C is a chart showing example values of voltages provided to the signals of memory device 300 of FIG. 3B during example write and read operations of memory device 300, according to some embodiments described herein. The signals in FIG. 3C (WWL00 through WWL03, PL00 through PL03, BL_W0, RSL0, and BL_R0) are the same as those shown in FIG. 3B. In the example write and read operations in FIG. 3C, memory cell 210 is assumed to be the selected memory cell, and memory cells 211, 212, and 213 are not selected (unselected). As described above with reference to FIG. 3B, a write operation in memory device 300 can be a sequential write operation. Thus, in the example write operation associated with FIG. 3C, memory cells 211, 212, and 213 may not have information stored in them when memory cell 210 is selected to store information. The following description refers to FIG. 3B and FIG. 3C.

[00143] As shown in FIG. 3C, during a write operation, signals WWL00, WWL01, WWL02, and WWL03 (associated with memory cells 210, 211, 212, and 213, respectively) can be provided with a voltage V1 (e.g., WWL00 = WWL01 = WWL02 = WWL03 = V1). Based on the applied voltage V1, transistor T2 (FIG. 3B) of each of memory cells 210, 211, 212, and 213 can turn on. Information from write data line 330 can be stored in memory cell 210 (through the turned-on transistor T2 of memory cell 210) by providing voltage VBL_W to signal BL_W0. The value (in volts) of voltage VBL_W can be based on the value (e.g., "0" or "1") of the information to be stored in memory cell 210. Other signals of memory device 300 during the write operation can be provided with voltages as shown in FIG. 3C.
For example, each of signals PL00, PL01, PL02, and PL03 can be provided with the same voltage V0, and each of signals RSL0 and BL_R0 can also be provided with voltage V0.

[00144] During a read operation associated with FIG. 3C (memory cell 210 is the selected memory cell), signals WWL00, WWL01, WWL02, and WWL03 can be provided with a voltage V0 (e.g., WWL00 = WWL01 = WWL02 = WWL03 = V0). Signal PL00 (associated with selected memory cell 210) can be provided with a voltage V0. Signals PL01, PL02, and PL03 (associated with unselected memory cells 211, 212, and 213, respectively) can be provided with a voltage V2. Other signals of memory device 300 during the read operation can be provided with voltages as shown in FIG. 3C. For example, signal RSL0 can be provided with a voltage V2 (to turn on select transistor 270), and signal BL_W0 can be provided with voltage V0. Signal BL_R0 can have a voltage VBL_R. Based on the value of voltage VBL_R, memory device 300 can determine the value of information stored in memory cell 210 during the read operation described here.

[00145] FIG. 3D is a chart showing example values of voltages provided to the signals of memory device 300 of FIG. 3B during example write and read operations of memory device 300, according to some embodiments described herein. In the example write and read operations in FIG. 3D, memory cell 212 is assumed to be the selected memory cell, and memory cells 210, 211, and 213 are not selected (unselected). As described above with reference to FIG. 3B, a write operation in memory device 300 can be a sequential write operation. Thus, at the time that memory cell 212 is selected to store information, other information has been stored in memory cells 210 and 211, and no information has been stored in memory cell 213. The following description refers to FIG. 3B, FIG. 3C, and FIG. 3D.

[00146] During a write operation of memory device 300 (FIG. 3D), signals WWL00 and WWL01 (associated with memory cells 210 and 211, respectively) can be provided with a voltage V0 (e.g., WWL00 = WWL01 = V0). Signals WWL02 and WWL03 (associated with memory cells 212 and 213, respectively) can be provided with a voltage V1 (e.g., WWL02 = WWL03 = V1). Based on these applied voltages, transistors T2 (FIG. 3B) of memory cells 210 and 211 can turn off, and transistors T2 of memory cells 212 and 213 can turn on. Information from write data line 330 can be stored in memory cell 212 (through the turned-on transistors T2 of memory cells 212 and 213) by providing a voltage VBL_W to signal BL_W0. The value of voltage VBL_W can be based on the value of the information to be stored in memory cell 212. Other signals of memory device 300 during the write operation can be provided with voltages as shown in FIG. 3D. For example, each of signals PL00, PL01, PL02, and PL03 can be provided with voltage V0, and each of signals RSL0 and BL_R0 can be provided with voltage V0.

[00147] During a read operation associated with FIG. 3D (memory cell 212 is the selected memory cell), the signals of memory device 300 shown in FIG. 3D can be the same as those shown in FIG. 3C. For simplicity, detailed description of the read operation associated with FIG. 3D is not repeated here.
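As a compact restatement of the example bias conditions above, the following illustrative sketch (not part of the disclosure) builds the signal-to-voltage map for a sequential write in memory cell group 201₀. Voltage names follow FIG. 3C and FIG. 3D; the numeric values and the function name are hypothetical placeholders.

```python
# Illustrative sketch only: signal-to-voltage map for a write operation of
# memory device 300, following the example values of FIG. 3C and FIG. 3D.
# Word lines at or above the selected cell's position in the write order get
# V1 (turning on T2); earlier cells' word lines get V0 (turning off T2).
# Numeric voltage values are placeholders; the function name is hypothetical.

ORDER = [210, 211, 212, 213]  # sequential write order (cell 210 first)

def write_bias(selected_cell, bit, V0=0.0, V1=1.8, VBL_W_0=0.0, VBL_W_1=1.0):
    idx = ORDER.index(selected_cell)
    bias = {f"WWL0{i}": (V1 if i >= idx else V0) for i in range(4)}
    bias.update({f"PL0{i}": V0 for i in range(4)})  # plate lines held at V0
    bias["RSL0"] = V0                               # read select off
    bias["BL_R0"] = V0
    bias["BL_W0"] = VBL_W_1 if bit == "1" else VBL_W_0
    return bias

# Writing "1" to cell 212: WWL00 = WWL01 = V0, WWL02 = WWL03 = V1 (FIG. 3D).
print(write_bias(212, "1"))
```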
[0103] FIG. 3E shows a side view (e.g., cross-sectional view) of a structure of a portion of memory device 300 schematically shown in FIG. 3B, according to some embodiments described herein. The structure of memory device 300 shown in FIG. 3E includes elements that are similar to or identical to those of the structure of memory device 200 shown in FIG. 2D. For simplicity, similar or identical elements between memory devices 200 (FIG. 2D) and 300 (FIG. 3E) are given the same reference labels.

[00148] As described above with reference to FIG. 3A, differences between memory devices 200 and 300 include the number of write data lines coupled to the memory cell groups of memory device 300. As shown in FIG. 3E, memory device 300 includes a single write data line 330 associated with memory cells 210, 211, 212, and 213. Unlike memory device 200 of FIG. 2D, memory device 300 of FIG. 3E can exclude (does not include) decoupling components 282 and 283 (FIG. 2D). In FIG. 3E, line 3F is a sectional line from which a portion (e.g., a partial top view) of memory device 300 can be viewed.

[00149] FIG. 3F shows a portion (e.g., partial top view) of memory device 300 including some elements viewed from line 3F of FIG. 3E down to substrate 299 (FIG. 3E), according to some embodiments described herein. As shown in FIG. 3F, write data line 330 can have a length extending in the y-direction, which is the same as the direction of the length of read data line 220. The structures of other elements of memory device 300 shown in FIG. 3E are similar to the structures of memory device 200 shown in FIG. 2D through FIG. 2I. Thus, for simplicity, detailed description of the other elements of memory device 300 is omitted.

[00150] FIG. 4A shows a schematic diagram of a portion of a memory device 400 including memory cells, in which the memory cell structure of each of the memory cells can include parts from a single pillar, according to some embodiments described herein. The memory cell structure of the memory cells of memory device 400 is described below with reference to FIG. 4B through FIG. 4F. As shown in FIG. 4A, memory device 400 can include a memory array 401. Memory device 400 can correspond to memory device 100 of FIG. 1. For example, memory array 401 can form part of memory array 101 of FIG. 1.

[00151] As shown in FIG. 4A, memory device 400 can include memory cell groups (e.g., strings) 401A and 401B. Each of memory cell groups 401A and 401B can include the same number of memory cells. For example, memory cell group 401A can include four memory cells 410A, 411A, 412A, and 413A, and memory cell group 401B can include four memory cells 410B, 411B, 412B, and 413B. FIG. 4A shows four memory cells in each of memory cell groups 401A and 401B as an example. The memory cells in memory device 400 are volatile memory cells (e.g., DRAM cells).

[00152] FIG. 4A shows directions x, y, and z that can correspond to the x, y, and z directions of the structure (physical structure) of memory device 400 shown in FIG. 4B through FIG. 4F. As described in more detail below with reference to FIG. 4B through FIG. 4F, the memory cells in each of memory cell groups 401A and 401B can be formed vertically (e.g., stacked over each other in a vertical stack in the z-direction) over a substrate of memory device 400.

[00153] As shown in FIG. 4A, memory device 400 can include switches (e.g., transistors) N0, N1, and N2 coupled to the memory cells of each of memory cell groups 401A and 401B. Memory device 400 can include conductive lines 480a, 481a, and 482a that can carry signals CS0, CS1, and CS2, respectively.
Memory device 400 can use signals CS0, CS1, and CS2 to control (e.g., turn on or turn off) switches N0, N1, and N2, respectively, during write and read operations of memory device 400.

[00154] Memory device 400 can include data lines (bit lines) 430A, 431A, and 432A associated with memory cell group 401A. Data lines 430A, 431A, and 432A can carry signals BL0A, BL1A, and BL2A, respectively, to provide information to be stored in respective memory cells 410A, 411A, 412A, and 413A of memory cell group 401A (e.g., during a write operation) or to carry information read (e.g., sensed) from those memory cells (e.g., during a read operation).

[00155] Memory device 400 can include data lines (bit lines) 430B, 431B, and 432B associated with memory cell group 401B. Data lines 430B, 431B, and 432B can carry signals BL0B, BL1B, and BL2B, respectively, to provide information to be stored in respective memory cells 410B, 411B, 412B, and 413B of memory cell group 401B (e.g., during a write operation) or to carry information read (e.g., sensed) from those memory cells (e.g., during a read operation).

[00156] Memory device 400 can include word lines 440, 441, 442, and 443 that can be shared by memory cell groups 401A and 401B. Word lines 440, 441, 442, and 443 can carry signals WL0, WL1, WL2, and WL3, respectively. During a write operation or a read operation, memory device 400 can use word lines 440, 441, 442, and 443 to access the memory cells of memory cell groups 401A and 401B.

[00157] Memory device 400 can include plate lines 450, 451, 452, and 453 that are shared by memory cell groups 401A and 401B. Plate lines 450, 451, 452, and 453 can carry signals PL0, PL1, PL2, and PL3, respectively. Each of plate lines 450, 451, 452, and 453 can be used as a common plate (e.g., can be coupled to ground) for the capacitors (described below) of respective memory cells of memory cell groups 401A and 401B. Memory device 400 can include a common conductive line 490, which can be similar to common conductive line 290 of memory device 200 or 300 described above.

[00158] As shown in FIG. 4A, each of memory cells 410A, 411A, 412A, and 413A and each of memory cells 410B, 411B, 412B, and 413B can include a transistor T3 and one capacitor C, such that each of these memory cells can be called a 1T1C memory cell. For simplicity, the same label T3 is given to the transistors of different memory cells of memory device 400, and the same label C is given to the capacitors of different memory cells of memory device 400.

[00159] As shown in FIG. 4A, capacitor C can include a capacitor plate 402a and another capacitor plate that can be part of (e.g., electrically connected to) a respective plate line among plate lines 450, 451, 452, and 453. Capacitor plate 402a can form part of a storage node (e.g., a memory element) of a corresponding memory cell of memory device 400. Capacitor plate 402a of a particular memory cell can hold a charge that can be used to represent the value (e.g., "0" or "1") of information stored in that particular memory cell. Capacitor plate 402a in a particular memory cell can be electrically connected (e.g., directly coupled) to a terminal (e.g., source or drain) of transistor T3 of that particular memory cell.
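The 1T1C arrangement just described can be mirrored in a small behavioral model. The sketch below is illustrative only (the class name and values are hypothetical, not part of the disclosure): the charge on the capacitor plate represents the bit, and access is gated by transistor T3's word-line signal.

```python
# Illustrative sketch only: a behavioral model of a 1T1C memory cell of
# memory device 400. Charge on capacitor plate 402a represents the bit;
# transistor T3 couples the plate to a data line only while its word line
# is asserted. Class name and numeric values are hypothetical.

class OneT1C:
    def __init__(self):
        self.plate_charge = 0.0  # level held on capacitor plate 402a

    def access(self, wl_asserted, bl_voltage=None):
        """If T3 is on (word line asserted), couple the plate to the data line."""
        if not wl_asserted:
            return None          # T3 off: the storage node is isolated
        if bl_voltage is not None:
            self.plate_charge = bl_voltage  # write: data line drives the plate
        return self.plate_charge  # read: plate level appears on the data line

cell = OneT1C()
cell.access(wl_asserted=True, bl_voltage=1.0)  # write "1"
print(cell.access(wl_asserted=True))           # read back -> 1.0
print(cell.access(wl_asserted=False))          # T3 off -> None (isolated)
```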
[00160] As shown in FIG. 4A, memory device 400 can include other elements, such as memory cell 417A of a memory cell group 402A, memory cell 417B of a memory cell group 402B, plate line 457 (and associated signal PL7), and conductive line 485a (and associated signal CS5). Such other elements are similar to those described above. Thus, for simplicity, detailed description of such other elements of memory device 400 is omitted from the description herein.

[00161] FIG. 4B shows a side view (e.g., cross-sectional view) of a structure of a portion of memory device 400 that is schematically shown in FIG. 4A, in which the memory cell structure of each of the memory cells can include parts from a single pillar, according to some embodiments described herein.

[00162] As shown in FIG. 4B, memory device 400 can include a substrate 499 and pillars (e.g., semiconductor material pillars) 401A' and 401B' formed over substrate 499. Each of pillars 401A' and 401B' has a length extending in the z-direction (e.g., vertical direction) perpendicular to substrate 499. Each of pillars 401A' and 401B' can include n+ portions and P_Si portions. Memory cells 410A, 411A, 412A, and 413A can be formed (e.g., formed vertically with respect to substrate 499) along different segments of pillar 401A'. Memory cells 410B, 411B, 412B, and 413B can be formed (e.g., formed vertically with respect to substrate 499) along different segments of pillar 401B'. Memory device 400 can include circuitry 495 formed in substrate 499. Substrate 499, common conductive line 490, and circuitry 495 can be similar to substrate 299, common conductive line 290, and circuitry 295, respectively, of memory device 200 (FIG. 2D). The signals of memory device 400 shown in FIG. 4B (e.g., signals BL0B, BL1B, BL2B, WL0, WL1, WL2, WL3, PL0, PL1, PL2, PL3, CS0, CS1, and CS2) are the same as those shown in FIG. 4A.

[00163] FIG. 4C shows a portion of memory device 400 of FIG. 4B including memory cells 412A and 413A (of memory cell group 401A) and memory cells 412B and 413B (of memory cell group 401B). The following description discusses the portion of memory device 400 shown in FIG. 4C in more detail. Elements in other portions of memory device 400 (e.g., the portion that includes memory cells 410A, 410B, 411A, and 411B in FIG. 4B) have structures similar to the elements shown in FIG. 4C and are not described herein, for simplicity.

[00164] As shown in FIG. 4C, memory device 400 can include dielectrics (e.g., dielectric materials) 405 located at respective locations (adjacent respective segments) of pillar 401A'. Dielectrics 405 can include silicon oxide or other dielectric materials. Dielectrics 405 can separate (e.g., electrically isolate) pillars 401A' and 401B' from word lines 440, 441, 442, and 443, plate lines 450, 451, 452, and 453, and conductive line 482a.

[00165] Each of data lines 431A and 432A can contact (e.g., electrically connect to) a respective n+ portion of pillar 401A'. Each of data lines 431B and 432B can contact (e.g., electrically connect to) a respective n+ portion of pillar 401B'.

[00166] Capacitor plate 402a (which is part of a storage node (or memory element) of a respective memory cell) can include (e.g., can be formed from) part of an n+ portion. For example, part of n+ portion 413A' can be a storage node (e.g., a memory element) of memory cell 413A. In another example, part of n+ portion 413B' can be a storage node (e.g., a memory element) of memory cell 413B.

[00167] Transistor T3 can include transistor elements (e.g., body, source, and drain) that are parts of a combination of a P_Si portion of a particular pillar (pillar 401A' or 401B') and the two n+ portions adjacent that P_Si portion of the same pillar.
Transistor T3 can also include a gate, which is part of a respective word line. For example, part of word line 443 can be the gate of transistor T3 of memory cell 413A, parts of n+ portions 413A' and 413A'' can be the source and drain (or drain and source), respectively, of transistor T3 of memory cell 413A, and P_Si portion 413A''' can be a body (e.g., floating body) of transistor T3 of memory cell 413A (where a transistor channel can be formed in the body). In another example, part of word line 442 can be the gate of transistor T3 of memory cell 412A, parts of n+ portions 412A' and 412A'' can be the source and drain (or drain and source), respectively, of transistor T3 of memory cell 412A, and P_Si portion 412A''' can be a body (e.g., floating body) of transistor T3 of memory cell 412A (where a transistor channel can be formed in the body).

[00168] Switch N2 can operate as a transistor, such that the structure of switch N2 can include the structure of a transistor. Switch N2 can include parts of a combination of a P_Si portion of a particular pillar (pillar 401A' or 401B') and the two n+ portions adjacent that P_Si portion of the same pillar. For example, in switch N2 between memory cells 412A and 413A, part of conductive line 482a can be the gate of a transistor in switch N2, and two n+ portions of pillar 401A' (or pillar 401B') can be the source and drain, respectively, of that transistor.

[00169] Word lines 442 and 443, data lines 431A, 431B, 432A, and 432B, plate lines 452 and 453, and conductive line 482a can include conductive materials. Examples of the conductive materials include polysilicon (e.g., conductively doped polysilicon), metals, or other conductive materials.

[00170] In FIG. 4C, lines 4D, 4E, and 4F are sectional lines. As described below, some portions (e.g., partial top views) of memory device 400 taken from lines 4D, 4E, and 4F are shown in FIG. 4D, FIG. 4E, and FIG. 4F, respectively.

[00171] FIG. 4D shows a portion (e.g., partial top view) of memory device 400 including some elements viewed from line 4D of FIG. 4C down to substrate 499 (FIG. 4B), according to some embodiments described herein. For simplicity, detailed description of the same elements shown in FIG. 4A through FIG. 4C (and other figures described below) is not repeated.

[00172] For purposes of illustrating relative locations of some of the elements of memory device 400, FIG. 4D through FIG. 4F show the locations of some elements of memory device 400 that are schematically shown in FIG. 4A but are not structurally shown in FIG. 4B and FIG. 4C. For example, FIG. 4D shows memory cells 417A and 417B and word lines 447 and 443 that are schematically shown in FIG. 4A but are not structurally shown in FIG. 4B and FIG. 4C. In another example, FIG. 4D shows an X-decoder and a Y-decoder that are not shown in FIG. 4A and FIG. 4B. However, the X-decoder and the Y-decoder in FIG. 4D can be part of circuitry 495 in substrate 499 in FIG. 4B of memory device 400. The X-decoder and the Y-decoder (FIG. 4D) can be part of respective row and column access circuitry of memory device 400. As shown in FIG. 4D, each of data lines 432A and 432B can have a length extending in the y-direction. Each of word lines 443 and 447 can have a length extending in the x-direction and is located below (underneath) data lines 432A and 432B. FIG. 4D does not show other word lines of memory device 400 that are located below word lines 443 and 447, respectively.

[00173] FIG.
4E shows a portion (e.g., partial top view) of memory device 400 including some elements viewed from line 4E of FIG. 4C down to substrate 499 (FIG. 4B), according to some embodiments described herein. As shown in FIG. 4E, each of plate lines 453 and 457 can have a length extending in the x-direction. FIG. 4E does not show other plate lines of memory device 400 that are located below plate lines 453 and 457, respectively.

[00174] FIG. 4F shows a portion (e.g., partial top view) of memory device 400 including some elements viewed from line 4F of FIG. 4C down to substrate 499, according to some embodiments described herein. As shown in FIG. 4F, each of conductive lines 482a and 485a can have a length extending in the x-direction. FIG. 4F does not show other conductive lines of memory device 400 that are located below conductive lines 482a and 485a, respectively.

[00175] FIG. 4G shows a schematic diagram of a portion of memory device 400 of FIG. 4A including memory cells 412A and 413A. FIG. 4H is a chart showing example values of voltages provided to the signals of memory device 400 of FIG. 4G during three different example write operations 421, 422, and 423, according to some embodiments described herein. The following description refers to FIG. 4G and FIG. 4H.

[00176] In write operation 421, memory cell 412A is selected to store information, and memory cell 413A is unselected (e.g., not selected to store information). In write operation 422, memory cell 413A is selected to store information, and memory cell 412A is unselected. In write operation 423, both memory cells 412A and 413A are selected to store information.

[00177] As shown in FIG. 4H, signal CS2 can be provided with a voltage V3 (to turn off switch N2) during a write operation (e.g., write operation 421, 422, or 423) of memory device 400, regardless of which of memory cells 412A and 413A is selected. Voltage V3 can be 0V (e.g., ground). Each of signals PL2 and PL3 can be provided with a voltage V4 during a write operation (e.g., write operation 421, 422, or 423) of memory device 400, regardless of which of memory cells 412A and 413A is selected. Voltage V4 can be 0V (e.g., ground).

[00178] In write operation 421, signal WL3 (associated with unselected memory cell 413A) can be provided with a voltage V5 (to turn off transistor T3 of unselected memory cell 413A). Voltage V5 can be 0V (e.g., ground). Signal WL2 (associated with selected memory cell 412A) can be provided with a voltage V6 (to turn on transistor T3 of selected memory cell 412A). The value of voltage V6 is greater than the value of voltage V5 (V6 > V5). The value of voltage V6 can be greater than a supply voltage (e.g., VDD) of memory device 400 (e.g., V6 > VDD). Signal BL2A (associated with unselected memory cell 413A) can be provided with a voltage Vx, which can be 0V (e.g., Vx = V3 or Vx = V4), or voltage Vx can be some voltage (e.g., an optimal voltage) between 0V and VDD (e.g., one-half VDD), depending on the memory cell leakage characteristics. Signal BL1A (associated with selected memory cell 412A) can be provided with a voltage VBL1. The value of voltage VBL1 can be based on the value of the information to be stored in memory cell 412A. For example, voltage VBL1 can have one value (e.g., VBL1 = 0V or VBL1 < 0V) if the information to be stored in memory cell 412A has one value (e.g., "0"), and another value (e.g., VBL1 > 0V (e.g., VBL1 = 1V)) if the information to be stored in memory cell 412A has another value (e.g., "1").
As mentioned above, VDD is used here to represent certain voltage levels; however, those levels are not limited to the supply voltage (e.g., VDD) of the memory device (e.g., memory device 400). For example, if an internal voltage generator of the memory device (e.g., memory device 400) generates an internal voltage less than VDD and uses that internal voltage as the memory array voltage, then VBL1 (FIG. 4H) can be less than VDD but greater than 0V, depending on the memory array voltage.

[00179] In write operation 422, the voltages provided to signals WL2 (associated with unselected memory cell 412A) and WL3 (associated with selected memory cell 413A) can be swapped, such that WL2 = V5 and WL3 = V6. Signal BL1A (associated with unselected memory cell 412A) can be provided with a voltage Vx. Signal BL2A (associated with selected memory cell 413A) can be provided with a voltage VBL2. The value of voltage VBL2 can be based on the value of the information to be stored in memory cell 413A. For example, voltage VBL2 can have one value (e.g., VBL2 = 0V or VBL2 < 0V) if the information to be stored in memory cell 413A has one value (e.g., "0"), and another value (e.g., VBL2 > 0V (e.g., VBL2 = 1V, VDD, or another value greater than 0V)) if the information to be stored in memory cell 413A has another value (e.g., "1").

[00180] In write operation 423, both memory cells 412A and 413A are selected to store information. Thus, the voltages provided to the signals associated with memory cells 412A and 413A can be the same as those provided in write operations 421 and 422 for a selected memory cell, such as WL2 = WL3 = V6, BL1A = VBL1, and BL2A = VBL2. These bias combinations are summarized in the sketch after this paragraph.
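The following illustrative sketch (not part of the disclosure) restates the FIG. 4H bias combinations for write operations 421, 422, and 423; the numeric voltage values are placeholders, and the function name is hypothetical.

```python
# Illustrative sketch only: FIG. 4H bias combinations for write operations
# 421, 422, and 423 of memory device 400. Numeric values are placeholders
# (V3 = V4 = V5 = 0V; V6 > VDD); the function name is hypothetical.

def write_bias_400(select_412A, select_413A, bit_412A=None, bit_413A=None,
                   V0=0.0, V6=2.4, Vx=0.6, VBL_one=1.0):
    def bl(selected, bit):
        if not selected:
            return Vx                      # unselected cell's data line
        return VBL_one if bit == "1" else V0
    return {
        "CS2": V0,                         # V3: switch N2 off
        "PL2": V0, "PL3": V0,              # V4: plate lines grounded
        "WL2": V6 if select_412A else V0,  # V6 turns T3 on; V5 (= 0V) keeps it off
        "WL3": V6 if select_413A else V0,
        "BL1A": bl(select_412A, bit_412A),
        "BL2A": bl(select_413A, bit_413A),
    }

print(write_bias_400(True, False, bit_412A="1"))  # write operation 421
print(write_bias_400(False, True, bit_413A="0"))  # write operation 422
print(write_bias_400(True, True, "1", "0"))       # write operation 423
```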
[00181] FIG. 4I is a flow chart showing different stages of a read operation 460 of memory device 400 of FIG. 4A through FIG. 4F, according to some embodiments described herein. As shown in FIG. 4I, read operation 460 (to read information from a selected memory cell) can include different stages, such as a pre-sense (e.g., pre-read) stage 461, a sense (or read) stage 462, a reset stage 463, and a restore stage 464. These stages (461, 462, 463, and 464) can be performed one stage after another in the order shown in FIG. 4I, starting from pre-sense stage 461. In FIG. 4I, sense stage 462 (to determine the value of information stored in a selected memory cell) can be performed using two different sense schemes. One sense scheme (e.g., shown in FIG. 4M) is based on the threshold voltage (Vt) shift of a transistor (e.g., transistor T3) coupled to the selected memory cell. An alternative sense scheme (e.g., FIG. 4M') is based on a property (e.g., self-latching) of a bipolar junction transistor that is intrinsically built into a transistor (e.g., transistor T3) of the selected memory cell. The stages (461, 462, 463, and 464) of read operation 460 are described in detail with reference to FIG. 4J through FIG. 4R.

[00182] FIG. 4J shows a schematic diagram of a portion of memory device 400 of FIG. 4A including memory cells 412A and 413A. FIG. 4K is a chart showing values of signals in FIG. 4J during pre-sense stage 461 of read operation 460 of FIG. 4I. The following description refers to FIG. 4J and FIG. 4K. Memory cell 413A is assumed to be a selected memory cell (to be read in this example), and memory cell 412A is assumed to be an unselected memory cell (not to be read in this example).

[00183] Pre-sense stage 461 can be performed to store (e.g., temporarily store) information in the body of transistor T3 of memory cell 413A and information in the body of transistor T3 of memory cell 412A. The bodies of transistors T3 of memory cells 413A and 412A are included in P_Si portions 413A''' and 412A''', respectively, in FIG. 4C. Referring to FIG. 4J and FIG. 4K, the value of the information stored in the body of transistor T3 of memory cell 413A is based on the value of the information stored in capacitor plate 402a of memory cell 413A. The value of the information stored in the body of transistor T3 of memory cell 412A is based on the value of the information stored in capacitor plate 402a (FIG. 4C and FIG. 4J) of memory cell 412A.

[00184] Reading information from a selected memory cell (e.g., memory cell 413A in this example) involves detection of current (e.g., an amount of current) on a conductive path (e.g., current path) between a data line associated with the selected memory cell and a data line associated with an adjacent unselected memory cell (e.g., memory cell 412A in this example). For example, in FIG. 4K, reading information from memory cell 413A can involve detection of current on a conductive path between data lines 432A and 431A.

[00185] Information stored in capacitor plate 402a of the selected memory cell and information stored in capacitor plate 402a of the unselected memory cell may be lost after information is read from the selected memory cell. In pre-sense stage 461 (FIG. 4K), temporarily storing information in the body of transistor T3 of each of memory cells 412A and 413A allows information to be restored (written back) to the selected memory cell and the unselected memory cell after the selected memory cell is read (e.g., sensed). Thus, in a read operation of a selected memory cell (e.g., memory cell 413A), the body of transistor T3 of the selected memory cell and the body of transistor T3 of an adjacent unselected memory cell (e.g., memory cell 412A) can be used as temporary storage locations.

[00186] The voltages shown in FIG. 4K can allow information to be stored in the body of transistor T3 of the selected memory cell and of the unselected memory cell. The information temporarily stored in the body of transistor T3 can be in the form of holes. Holes in the body of transistor T3, as described here, refer to an extra amount of holes that may be generated in the material (e.g., P_Si material) that forms part of the body of transistor T3.

[00187] As shown in FIG. 4K, in pre-sense stage 461, signal CS2 can be provided with a voltage VL (e.g., 0V) to turn off switch N2. Each of signals PL2 and PL3 can be provided with a voltage VPL (e.g., 0V). Each of signals BL1A and BL2A can be provided with a voltage VBL_H (e.g., VBL_H = VDD). Each of signals WL2 and WL3 can be provided with a voltage VWL. The value of voltage VWL can be selected (e.g., 0 < VWL < VBL_H) to slightly turn on transistor T3 of each of memory cells 412A and 413A. This may allow impact ionization (II) current at the drain of transistor T3 of memory cell 413A and II current at the drain of transistor T3 of memory cell 412A. The II currents allow generation of holes in the body of transistor T3 of memory cell 413A and holes in the body of transistor T3 of memory cell 412A. The presence or absence of holes in the body of transistor T3 of memory cell 413A represents the value ("0" or "1") of information stored in capacitor plate 402a of memory cell 413A. Similarly, the presence or absence of holes in the body of transistor T3 of memory cell 412A represents the value ("0" or "1") of information stored in capacitor plate 402a of memory cell 412A.
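The II-based pre-sense bias of FIG. 4K can be restated compactly. The sketch below is illustrative only; the numeric values are placeholders, and the dictionary simply mirrors the voltages described in the preceding paragraph.

```python
# Illustrative sketch only: the II-based pre-sense bias of FIG. 4K.
# Numeric values are placeholders (VL = VPL = 0V; VBL_H = VDD; 0 < VWL < VBL_H).

VDD = 1.0
PRESENSE_II_BIAS = {
    "CS2": 0.0,                 # VL: switch N2 off
    "PL2": 0.0, "PL3": 0.0,     # VPL: plate lines at 0V
    "BL1A": VDD, "BL2A": VDD,   # VBL_H on both data lines
    "WL2": 0.4, "WL3": 0.4,     # VWL: slightly turns on T3 (II current at drain)
}

# With this bias, holes are generated in a cell's T3 body only if "0" is
# stored on its capacitor plate 402a, temporarily encoding the bit as the
# presence or absence of body holes.
print(PRESENSE_II_BIAS)
```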
[00188] Pre-sense stage 461 in FIG. 4K may or may not generate holes in the body of transistor T3 of memory cell 413A, depending upon the value of information stored in memory cell 413A. For example, holes may be generated (e.g., accumulated) in the body of transistor T3 of memory cell 413A if "0" is stored in capacitor plate 402a of memory cell 413A. In another example, holes may not be generated (e.g., not accumulated) in the body of transistor T3 of memory cell 413A if "1" is stored in capacitor plate 402a of memory cell 413A. Similarly, holes may be generated (e.g., accumulated) in the body of transistor T3 of memory cell 412A if "0" is stored in capacitor plate 402a of memory cell 412A, and holes may not be generated (e.g., not accumulated) in the body of transistor T3 of memory cell 412A if "1" is stored in capacitor plate 402a of memory cell 412A.

[00189] The presence or absence of holes in the body of transistor T3 of memory cell 413A can cause a change (e.g., shift) in the threshold voltage of transistor T3 of memory cell 413A. This change (e.g., temporary change) in the threshold voltage of transistor T3 allows a sense voltage to be provided to the gate of transistor T3 of a particular memory cell (e.g., memory cell 412A or 413A) in sense stage 462 (described in more detail below) in order to determine the value of information that was stored (e.g., in capacitor plate 402a) in that particular memory cell.

[00190] As shown in FIG. 4K', in an alternative pre-sense stage 461, signal CS2 can be provided with a voltage VL (e.g., 0V) to turn off switch N2. Each of signals PL2 and PL3 can be provided with a voltage VPL (e.g., 0V). Each of signals BL1A and BL2A can be provided with a voltage VBL_L (e.g., VBL_L = 0V). Each of signals WL2 and WL3 can be provided with a voltage VWL. The value of voltage VWL can be selected (e.g., VWL < 0) to initiate band-to-band tunneling current conduction in transistor T3 of each of memory cells 412A and 413A. This may allow gate-induced drain leakage (GIDL) current at the drain of transistor T3 of memory cell 413A and GIDL current at the drain of transistor T3 of memory cell 412A. The GIDL currents allow generation of holes in the body of transistor T3 of memory cell 413A and holes in the body of transistor T3 of memory cell 412A. The presence or absence of holes in the body of transistor T3 of memory cell 413A represents the value ("1" or "0") of information stored in capacitor plate 402a of memory cell 413A. Similarly, the presence or absence of holes in the body of transistor T3 of memory cell 412A represents the value ("1" or "0") of information stored in capacitor plate 402a of memory cell 412A.

[00191] Pre-sense stage 461 in FIG. 4K' may or may not generate holes in the body of transistor T3 of memory cell 413A, depending upon the value of information stored in memory cell 413A. For example, holes may be generated (e.g., accumulated) in the body of transistor T3 of memory cell 413A if "1" is stored in capacitor plate 402a of memory cell 413A. In another example, holes may not be generated (e.g., not accumulated) in the body of transistor T3 of memory cell 413A if "0" is stored in capacitor plate 402a of memory cell 413A. Similarly, holes may be generated (e.g., accumulated) in the body of transistor T3 of memory cell 412A if "1" is stored in capacitor plate 402a of memory cell 412A, and holes may not be generated (e.g., not accumulated) in the body of transistor T3 of memory cell 412A if "0" is stored in capacitor plate 402a of memory cell 412A.
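The two pre-sense schemes encode the stored bit with opposite polarity. The following illustrative sketch (hypothetical names, not part of the disclosure) captures that mapping, which also explains the inversion applied later during the restore stage when the GIDL scheme is used.

```python
# Illustrative sketch only: bit-to-holes polarity for the two pre-sense
# schemes described above. II pre-sense (FIG. 4K) generates body holes when
# "0" is stored; GIDL pre-sense (FIG. 4K') generates holes when "1" is stored.

def holes_after_presense(stored_bit, scheme):
    if scheme == "II":
        return stored_bit == "0"
    if scheme == "GIDL":
        return stored_bit == "1"
    raise ValueError("scheme must be 'II' or 'GIDL'")

for scheme in ("II", "GIDL"):
    for bit in ("0", "1"):
        print(scheme, bit, "-> holes present:", holes_after_presense(bit, scheme))
```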
[00192] FIG. 4L shows a schematic diagram of a portion of memory device 400 of FIG. 4A including memory cells 412A and 413A. FIG. 4M is a chart showing values of signals in FIG. 4L during sense stage 462 using a scheme based on threshold voltage shift. Sense stage 462 is performed after pre-sense stage 461 (FIG. 4K). FIG. 4N is a graph showing relationships among the cell current (an amount of current) flowing through a memory cell (e.g., 412A or 413A), the value (e.g., "0" or "1") of information stored in the memory cell (e.g., 412A or 413A), and voltages VSENSE and VPASS (which can be applied to the gate of transistor T3 of memory cell 412A or 413A). The following description refers to FIG. 4L, FIG. 4M, and FIG. 4N.

[00193] As shown in FIG. 4M, sense stage 462 can include a sense interval 462.1 (which can occur from time T1 to time T2) and a sense interval 462.2 (which can occur from time T3 to time T4). Sense interval 462.2 occurs after sense interval 462.1 (e.g., times T3 and T4 occur after times T1 and T2). During sense interval 462.1, memory cell 413A is sensed to determine the value of information stored in memory cell 413A. During sense interval 462.2 (after memory cell 413A is sensed), memory cell 412A is sensed to determine the value of information stored in memory cell 412A. Thus, in sense stage 462, memory cells 413A and 412A are sensed in a sequential fashion (one cell after another). FIG. 4M shows sensing of memory cell 413A (during sense interval 462.1) performed before sensing of memory cell 412A (during sense interval 462.2) as an example. Alternatively, a reversed order can be used, such that sensing of memory cell 412A can be performed before sensing of memory cell 413A.

[00194] As mentioned above, information stored in both memory cells 413A and 412A may be lost after sensing of one or both of memory cells 413A and 412A. Thus, although only memory cell 413A is assumed to be a selected memory cell, sensing both memory cells 413A and 412A during sense stage 462 allows the value (e.g., "0" or "1") of information stored in each of memory cells 413A and 412A to be obtained during sense stage 462. The obtained values (sensed values) can be stored (e.g., in storage circuitry (e.g., data buffers, latches, or other storage elements, not shown)) and can subsequently be used as the values of the information to be restored (e.g., written back) to both memory cells 413A and 412A during restore stage 464 (described below with reference to FIG. 4R). Sensing of memory cells 413A and 412A during sense stage 462 can be performed using the voltages shown in FIG. 4M.

[00195] As shown in FIG. 4M, some signals can be provided with the same voltages in sense intervals 462.1 and 462.2. For example, signal CS2 can be provided with a voltage VH (VH > 0V, e.g., VH = VDD) to turn on switch N2 (FIG. 4L). Each of signals PL2 and PL3 can be provided with a voltage VPL (the same as the voltage in pre-sense stage 461 in FIG. 4K). Signal BL2A can be provided with a voltage VBL_H. Signal BL1A can be provided with a voltage VBL_L. The value of voltage VBL_L (e.g., VBL_L = 0V) is less than the value of voltage VBL_H.

[00196] Signals WL2 and WL3 can be provided with voltages VPASS and VSENSE, respectively, during sense interval 462.1 (when memory cell 413A is sensed), or with voltages VSENSE and VPASS, respectively, during sense interval 462.2 (when memory cell 412A is sensed).
The value of voltage VPASS is greater than the value of voltage VSENSE.

[00197] Voltage VPASS can have a value such that transistor T3 of the memory cell not being sensed (e.g., memory cell 412A during sense interval 462.1) is turned on (e.g., becomes conductive) regardless of whether or not holes are present in the body of transistor T3 of the memory cell not being sensed (regardless of the value (e.g., "0" or "1") of information stored in capacitor plate 402a of the memory cell not being sensed). For example, during sense interval 462.1, transistor T3 of memory cell 412A is turned on regardless of whether or not holes are present in the body of transistor T3 of memory cell 412A. This also means that transistor T3 of memory cell 412A is turned on regardless of the value (e.g., "0" or "1") of information that was stored in capacitor plate 402a of memory cell 412A, because the presence or absence of holes in the body of transistor T3 of memory cell 412A during sense stage 462 depends upon the value of information stored in capacitor plate 402a of memory cell 412A before sense stage 462, as described above for pre-sense stage 461.

[00198] In FIG. 4M, voltage VSENSE can have a value such that transistor T3 of the memory cell being sensed (e.g., memory cell 413A during sense interval 462.1) is turned on or turned off depending on whether or not holes are present in the body of transistor T3 of the memory cell being sensed. For example, during sense interval 462.1, transistor T3 of memory cell 413A is turned on (e.g., becomes conductive) if holes are present in the body of transistor T3 of memory cell 413A. This also means that transistor T3 of memory cell 413A is turned on if "0" (in the case of II pre-sensing; "1" in the case of GIDL pre-sensing) was stored in capacitor plate 402a of memory cell 413A before pre-sense stage 461 (which is before sense stage 462) was performed. In another example, during sense interval 462.1, transistor T3 of memory cell 413A is turned off (e.g., does not become conductive) if holes are absent from the body of transistor T3 of memory cell 413A. This also means that transistor T3 of memory cell 413A is turned off if "1" was stored in capacitor plate 402a of memory cell 413A before pre-sense stage 461 (which is before sense stage 462) was performed.

[00199] The values of voltages VSENSE and VPASS can be based on the current-voltage relationship shown in FIG. 4N for the case of a pre-sense stage based on the II current mechanism (FIG. 4K). Curve 410 indicates that current (cell current) may flow through a particular memory cell (e.g., through transistor T3 of that particular memory cell) if voltage VSENSE is provided to the signal (e.g., WL2 or WL3) at the gate of transistor T3 of that particular memory cell and "0" is stored in capacitor plate 402a of that particular memory cell. As described above, holes may be generated in the body of transistor T3 of that particular memory cell if "0" is stored in capacitor plate 402a of that particular memory cell.

[00200] However, no current (or a negligible (e.g., undetectable) amount of current) may flow through a particular memory cell if voltage VSENSE is provided to the signal (e.g., WL2 or WL3) at the gate of transistor T3 of that particular memory cell and "1" is stored in that particular memory cell.
As described above, holes may not be generated in the body of transistor T3 of that particular memory cell if "1" is stored in capacitor plate 402a of that particular memory cell.

[00201] Curve 411 shows that current (cell current) may flow through a particular memory cell (e.g., through transistor T3 of that particular memory cell) if voltage VPASS is provided to the signal (e.g., WL2 or WL3) at the gate of transistor T3 of that particular memory cell, regardless of the value (e.g., "0" or "1") of information stored in that particular memory cell. In the case of a pre-sense stage based on the GIDL current mechanism (FIG. 4K'), curve 410 of FIG. 4N can represent the case where holes may be generated in the body of transistor T3 of that particular memory cell if "1" is stored in capacitor plate 402a of that particular memory cell, and curve 411 can represent the case where no holes may be generated in the body of transistor T3 of that particular memory cell if "0" is stored in capacitor plate 402a of that particular memory cell.

[00202] Thus, during sense interval 462.1 (to sense memory cell 413A), if transistor T3 of memory cell 413A is turned on (e.g., if holes are present in the body of transistor T3 of memory cell 413A (generated during pre-sense stage 461 of FIG. 4K)), then current may flow between data lines 431A and 432A (FIG. 4L) through transistor T3 of memory cell 413A, switch N2 (which is turned on), and transistor T3 (which is turned on) of memory cell 412A. During sense interval 462.1, if transistor T3 of memory cell 413A is turned off (e.g., if holes are absent from the body of transistor T3 of memory cell 413A (not generated during pre-sense stage 461 of FIG. 4K)), then current may not flow between data lines 431A and 432A (FIG. 4L), because transistor T3 of memory cell 413A is turned off (although switch N2 and transistor T3 of memory cell 412A are turned on).

[00203] Similarly, during sense interval 462.2 (to sense memory cell 412A), if transistor T3 of memory cell 412A is turned on (e.g., if holes are present in the body of transistor T3 of memory cell 412A (generated during pre-sense stage 461 of FIG. 4K)), then current may flow between data lines 431A and 432A (FIG. 4L) through transistor T3 of memory cell 413A (which is turned on), switch N2 (which is turned on), and transistor T3 of memory cell 412A. During sense interval 462.2, if transistor T3 of memory cell 412A is turned off (e.g., if holes are absent from the body of transistor T3 of memory cell 412A (not generated during pre-sense stage 461 of FIG. 4K)), then current may not flow between data lines 431A and 432A (FIG. 4L), because transistor T3 of memory cell 412A is turned off (although switch N2 and transistor T3 of memory cell 413A are turned on).

[00204] Memory device 400 can include a detection circuit (not shown) that can be coupled to data line 432A or data line 431A. Memory device 400 can use the detection circuit to determine the value (e.g., "0" or "1") of information stored in the memory cell being sensed based on the presence or absence of current between data lines 432A and 431A during sense intervals 462.1 and 462.2. For example, during sense interval 462.1, memory device 400 can determine that "0" was stored in memory cell 413A if current is detected, and that "1" was stored in memory cell 413A if no current (or a negligible amount of current) is detected. In another example, during sense interval 462.2, memory device 400 can determine that "0" was stored in memory cell 412A if current is detected, and that "1" was stored in memory cell 412A if no current (or a negligible amount of current) is detected. Memory device 400 can include storage circuitry (e.g., data buffers, latches, or other storage elements) to store the values (e.g., "0" or "1") of information sensed from memory cells 412A and 413A during sense stage 462. Memory device 400 can use these stored values as the values of the information to be written back to memory cells 412A and 413A in restore stage 464 (described below).
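A compact way to view the two sense intervals is as a current test with the roles of VSENSE and VPASS swapped between the two word lines. The sketch below is illustrative only (the function names are hypothetical); it assumes the II pre-sense polarity, where body holes, and hence cell current at VSENSE, indicate that "0" was stored.

```python
# Illustrative sketch only: sequential sensing of memory cells 413A (interval
# 462.1) and 412A (interval 462.2), assuming II pre-sense polarity (holes in
# the T3 body -> T3 conducts at VSENSE -> the stored bit was "0").

def sense_interval(holes_in_sensed_cell):
    """Return the bit inferred from current detection between 431A and 432A."""
    current_detected = holes_in_sensed_cell  # T3 conducts at VSENSE only with holes
    return "0" if current_detected else "1"

def sense_stage(holes_413A, holes_412A):
    # Interval 462.1: WL3 = VSENSE (cell 413A sensed), WL2 = VPASS, CS2 = VH.
    bit_413A = sense_interval(holes_413A)
    # Interval 462.2: WL2 = VSENSE (cell 412A sensed), WL3 = VPASS.
    bit_412A = sense_interval(holes_412A)
    return {"413A": bit_413A, "412A": bit_412A}  # latched for restore stage 464

print(sense_stage(holes_413A=True, holes_412A=False))  # {'413A': '0', '412A': '1'}
```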
[00205] FIG. 4M' is a chart showing values of signals in FIG. 4L during sense stage 462 using an alternative sense scheme based on a property (e.g., self-latching) of a built-in bipolar junction transistor. The voltage values of FIG. 4M' can be the same as those shown in FIG. 4M, except that in FIG. 4M' signal WL3 can be provided with voltage VG (instead of VSENSE) when memory cell 413A is sensed, and signal WL2 can be provided with voltage VG (instead of VSENSE) when memory cell 412A is sensed. As shown in FIG. 4M', sense stage 462 can include a sense interval 462.1' (which can occur from time T1' to time T2') and a sense interval 462.2' (which can occur from time T3' to time T4'). Sense interval 462.2' (when memory cell 412A is sensed) occurs after sense interval 462.1' (when memory cell 413A is sensed). Voltage VG can be less than zero volts, such as a slightly negative voltage (e.g., VG < 0V). Applying voltage VG of less than zero volts can induce a phenomenon such as impact ionization current (near the data line associated with memory cell 413A) and a subsequent BJT latch. Memory device 400 can include a detection circuit (not shown) to determine the value (e.g., "0" or "1") of information stored in memory cell 412A (when it is sensed) or memory cell 413A (when it is sensed) in ways similar to the current detection described above with reference to FIG. 4M.

[00206] FIG. 4O shows a schematic diagram of a portion of memory device 400 of FIG. 4A including memory cells 412A and 413A. FIG. 4P is a chart showing values of signals in FIG. 4O during reset stage 463, which is performed after sense stage 462 (FIG. 4M).

[00207] Reset stage 463 can be performed to clear holes from the body of transistor T3 of each of memory cells 412A and 413A that may have been generated during pre-sense stage 461 (FIG. 4K). Clearing holes in reset stage 463 may reset the threshold voltage of transistor T3 of each of memory cells 412A and 413A. Reset stage 463 may help maintain the relationships (e.g., FIG. 4N) among the cell current flowing through memory cells 412A and 413A, the value (e.g., "0" or "1") of information stored in memory cells 412A and 413A, and voltages VSENSE and VPASS. The following description refers to FIG. 4O and FIG. 4P.

[00208] As shown in FIG. 4P, signal CS2 can be provided with either voltage VL or voltage VH. Each of signals PL2 and PL3 can be provided with a voltage VPL. Each of signals BL1A and BL2A can be provided with a voltage VBL_X. Each of signals WL2 and WL3 can be provided with a voltage VWLy. Voltage VWLy can have a value such that transistor T3 of each of memory cells 412A and 413A can be turned on. For example, the value of voltage VWLy can be greater than 0V (e.g., greater than ground) and equal to or less than the supply voltage (e.g., VDD) of memory device 400. With the values of the signals shown in FIG. 4P, holes (e.g., generated during pre-sense stage 461 in FIG.
The value of voltage VBL_x can be zero volts (e.g., VBL_x = 0V) or, alternatively, less than zero volts, such as a slightly negative voltage (e.g., VBL_x < 0V).

[00209] In a particular reset stage of a different read operation, memory cells (not shown in FIG. 4O) adjacent to memory cells 412A and 413A may be reset during that particular reset stage (e.g., similar to reset stage 463 in FIG. 4P) while memory cells 412A and 413A are unselected (or unused) in that read operation. In that particular reset stage (to reset the adjacent memory cells, not shown), the value of the voltages on signals WL2, WL3, and CS2 (FIG. 4O) can be less than zero volts (e.g., slightly less than zero volts, such as WL2 = WL3 = Vn (e.g., Vn = -0.3V)) if voltages of less than zero volts are provided to signals BL1A and BL2A during that particular reset stage. However, in order to avoid transistor leakage that may be caused by GIDL current, the value of the voltages on signals WL2, WL3, and CS2 (FIG. 4O) can be slightly less than zero volts, such as WL2 = WL3 = Vn, but not too much less than Vn (e.g., -1V < WL2 = WL3 < -0.3V).

[00210] FIG. 4Q shows a schematic diagram of a portion of memory device 400 of FIG. 4A including memory cells 412A and 413A. FIG. 4R is a chart showing values of signals in FIG. 4Q during restore stage 464, which is performed after reset stage 463 (FIG. 4P). As described above, restore stage 464 can be performed to restore (e.g., write back) information to memory cells 412A and 413A after memory cells 412A and 413A were sensed (e.g., based on either the sense scheme shown in FIG. 4M or the sense scheme shown in FIG. 4M'). The following description refers to FIG. 4Q and FIG. 4R.

[00211] As shown in FIG. 4R, signal CS2 can be provided with voltage VL. Each of signals PL2 and PL3 can be provided with a voltage VPL. Each of signals WL2 and WL3 can be provided with a voltage V6 (e.g., V6 > VDD) such that transistor T3 of each of memory cells 412A and 413A can be turned on.

[00212] Signal BL2A (associated with memory cell 413A) can be provided with a voltage VBL2. The value of voltage VBL2 can be based on the value of information (e.g., "0" or "1") to be stored (e.g., rewritten) in memory cell 413A. The value of information to be stored in memory cell 413A during restore stage 464 is the same as the value of information read (sensed) from memory cell 413A during sense stage 462. In FIG. 4R, voltage VBL2 can have one value (e.g., VBL2 = 0V or VBL2 < 0V) if information to be stored in memory cell 413A has one value (e.g., "0"), and another value (e.g., VBL2 > 0V (e.g., VBL2 = 1V)) if information to be stored in memory cell 413A has another value (e.g., "1"). Based on the voltages in FIG. 4R, information (which was sensed in sense stage 462) can be restored in capacitor plate 402a of memory cell 413A.

[00213] Similarly, signal BL1A (associated with memory cell 412A) can be provided with a voltage VBL1. The value of voltage VBL1 can be based on the value of information (e.g., "0" or "1") to be stored (e.g., rewritten) in memory cell 412A. The value of information to be stored in memory cell 412A during restore stage 464 is the same as the value of information read (sensed) from memory cell 412A during sense stage 462 if the information is pre-sensed using the II pre-sense stage (associated with FIG. 4K). However, if the information is pre-sensed using the GIDL pre-sense stage (associated with FIG. 4K'),
then the value of information read (sensed) from memory cell 412A during sense stage 462 can be inverted. In FIG. 4R, voltage VBL1 can have one value (e.g., VBL1 = 0V or VBL1 < 0V) if information to be stored in memory cell 412A has one value (e.g., "0"), and another value (e.g., VBL1 > 0V (e.g., VBL1 = 1V)) if information to be stored in memory cell 412A has another value (e.g., "1"). Based on the voltages in FIG. 4R, information (which was sensed in sense stage 462) can be restored (e.g., in capacitor plate 402a of memory cell 412A).

[00214] In the above example read operation (FIG. 4J through FIG. 4R), only memory cell 413A is assumed to be a selected memory cell. However, both memory cells 413A and 412A can be selected in a read operation. In such a read operation (where both memory cells 413A and 412A are selected), sense stage 462 (FIG. 4M) can also be performed in the way described above (e.g., the same way as where only memory cell 413A is selected) because both memory cells 413A and 412A can be sensed in a sequential fashion to determine the values of information stored in memory cells 413A and 412A.

[00215] FIG. 5A shows a schematic diagram of a portion of a memory device 500 including memory cells having a memory cell structure formed from a single pillar, according to some embodiments described herein. Memory device 500 can include a memory array 501. Memory device 500 can correspond to memory device 100 of FIG. 1. For example, memory array 501 can form part of memory array 101 of FIG. 1. Memory device 500 can be a variation of memory device 400 of FIG. 4A. Thus, for simplicity, detailed description of similar or the same elements (which are given the same labels in FIG. 4A and FIG. 5A) of memory devices 400 and 500 is not repeated. Differences in structure between memory devices 400 and 500 are described below.

[00216] As shown in FIG. 5A, memory device 500 can include memory cell groups (e.g., strings) 501A and 501B. Each of memory cell groups 501A and 501B can include the same number of memory cells. For example, memory cell group 501A can include memory cells 510A, 511A, 512A, and 513A, and memory cell group 501B can include memory cells 510B, 511B, 512B, and 513B. FIG. 5A shows four memory cells in each of memory cell groups 501A and 501B as an example. The memory cells in memory device 500 are volatile memory cells (e.g., DRAM cells).

[00217] FIG. 5A shows directions x, y, and z that can correspond to the x, y, and z directions of the structure (physical structure) of memory device 500 shown in FIG. 5B through FIG. 5H. Memory cells in each of memory cell groups 501A and 501B can be formed vertically (e.g., stacked over each other in a vertical stack in the z-direction) over a substrate of memory device 500.

[00218] Memory device 500 can omit switches (e.g., transistors) N1 and N2 of memory device 400. However, as shown in FIG. 5A, memory device 500 can include a transistor T4 in each memory cell in each of memory cell groups 501A and 501B. Memory device 500 also includes conductive lines 580, 581, 582, and 583 that can carry signals RSL0, RSL1, RSL2, and RSL3, respectively. Memory device 500 can use signals RSL0, RSL1, RSL2, and RSL3 to control (e.g., turn on or turn off) transistor T4 of respective memory cells of memory cell groups 501A and 501B. The description herein uses the term "conductive lines" (referring to lines 580, 581, 582, and 583) for ease of describing different elements of memory device 500.
However, conductive lines 580, 581, 582, and 583 can be word lines of memory device 500 similar to word lines 440, 441, 442, and 443.

[00219] Memory device 500 can include data lines (bit lines) 520A and 521A (in addition to data lines 430A, 431A, and 432A) associated with memory cell group 501A. Data lines 520A and 521A can carry signals BLR0A and BLR1A, respectively, to access (e.g., during a read operation) respective memory cells 510A, 511A, 512A, and 513A of memory cell group 501A.

[00220] Memory device 500 can include data lines (bit lines) 520B and 521B (in addition to data lines 430B, 431B, and 432B) associated with memory cell group 501B. Data lines 520B and 521B can carry signals BLR0B and BLR1B, respectively, to access (e.g., during a read operation) respective memory cells 510B, 511B, 512B, and 513B of memory cell group 501B.

[00221] As shown in FIG. 5A, each of memory cells 510A, 511A, 512A, and 513A and each of memory cells 510B, 511B, 512B, and 513B can include transistors T3 and T4 and one capacitor C, such that each of these memory cells can be called a 2T1C memory cell. As a comparison, each memory cell (e.g., memory cell 413A) of memory device 400 is a 1T1C memory cell.

[00222] As shown in FIG. 5A, memory device 500 can include other elements, such as memory cell 517A of a memory cell group 502A, memory cell 517B of a memory cell group 502B, and plate line 457 (and associated signal PL7). Such other elements are similar to those described above. Thus, for simplicity, detailed description of such other elements of memory device 500 is omitted from the description herein.

[00223] FIG. 5B shows a side view (e.g., cross-sectional view) of a structure of a portion of memory device 500 that is schematically shown in FIG. 5A, according to some embodiments described herein. The structure of memory device 500 is similar to the structure of memory device 400 in FIG. 4B. Thus, for simplicity, detailed description of similar or the same elements (which are given the same labels in FIG. 4B and FIG. 5B) of memory devices 400 and 500 is not repeated.

[00224] As shown in FIG. 5B, conductive lines 580, 581, 582, and 583 can be similar to (or identical to) word lines 440, 441, 442, and 443, respectively. For example, each of conductive lines 580, 581, 582, and 583 can have a length extending in the x-direction and can be shared by respective memory cells of memory cell groups 501A and 501B. Each of conductive lines 580, 581, 582, and 583 can also have a structure similar to (or identical to) the structures of word lines 440, 441, 442, and 443, such as the structure of word line 443 shown in FIG. 4D.

[00225] Data lines 520A and 520B can be similar to (or identical to) data lines 430A and 430B, respectively. Data lines 521A and 521B can be similar to (or identical to) data lines 432A and 432B, respectively. For example, each of data lines 520A, 520B, 521A, and 521B can have a length extending in the y-direction perpendicular to the x-direction. Each of data lines 520A, 520B, 521A, and 521B can have a structure similar to (or identical to) the structure of data line 432A or 432B shown in FIG. 4D.

[00226] FIG. 5C shows a portion of memory device 500 of FIG. 5B including memory cells 512A, 513A, 512B, and 513B. Some of the elements shown in FIG. 5C are similar to some of the elements of memory device 400 of FIG. 4C; such similar (or the same) elements are given the same labels and are not described herein for simplicity. As shown in FIG.
5C, the structures and locations of transistor T3 and capacitor plate 402a are the same as those of memory device 400 (FIG. 4B and FIG. 4C). Transistor T4 in FIG. 5C can include elements similar to those of transistor T3. For example, transistor T4 can include transistor elements (e.g., body, source, and drain) that are parts of a combination of a portion P Si and two n+ portions adjacent to the portion P Si of the same pillar (pillar 501A' or 501B'), and a transistor element (e.g., gate) that is part of a respective conductive line (one of conductive lines 582 and 583).

[00227] FIG. 5D shows a schematic diagram of a portion of memory device 500 of FIG. 5A including memory cells 512A and 513A. FIG. 5E is a chart showing example values of voltages provided to the signals of memory device 500 of FIG. 5D during three different example write operations 521, 522, and 523, according to some embodiments described herein. The following description refers to FIG. 5D and FIG. 5E.

[00228] In write operation 521, memory cell 512A is selected to store information, and memory cell 513A is unselected (e.g., not selected to store information). In write operation 522, memory cell 513A is selected to store information, and memory cell 512A is unselected. In write operation 523, both memory cells 512A and 513A are selected to store information.

[00229] As shown in FIG. 5E, each of signals PL2 and PL3 can be provided with a voltage V4 during a write operation (e.g., any of write operations 521, 522, and 523) of memory device 500 regardless of which of memory cells 512A and 513A is selected. Each of signals RSL2 and RSL3 can be provided with a voltage Va (e.g., Va = 0V) in write operations 521, 522, and 523. Signal BLR1A can be provided with a voltage Vb (e.g., Vb = 0V) in write operations 521, 522, and 523.

[00230] In write operation 521, signal WL3 (associated with unselected memory cell 513A) can be provided with a voltage V5 (to turn off transistor T3 of unselected memory cell 513A). Signal WL2 (associated with selected memory cell 512A) can be provided with a voltage V6 (to turn on transistor T3 of selected memory cell 512A). The value of voltage V6 can be greater than a supply voltage (e.g., VDD) of memory device 500 (e.g., V6 > VDD). Signal BL2A (associated with unselected memory cell 513A) can be provided with a voltage Vx (e.g., Vx = V4). Signal BL1A (associated with selected memory cell 512A) can be provided with a voltage VBL1. The value of voltage VBL1 can be based on the value of information to be stored in memory cell 512A. For example, voltage VBL1 can have one value (e.g., VBL1 = 0V or VBL1 < 0V) if information to be stored in memory cell 512A has one value (e.g., "0"), and another value (e.g., VBL1 > 0V (e.g., VBL1 = 1V)) if information to be stored in memory cell 512A has another value (e.g., "1").

[00231] In write operation 522, the voltages provided to signals WL2 (associated with unselected memory cell 512A) and WL3 (associated with selected memory cell 513A) can be swapped, such that WL2 = V5 and WL3 = V6. Signal BL1A (associated with unselected memory cell 512A) can be provided with a voltage Vx. Signal BLR1A (associated with selected memory cell 513A) can be provided with a voltage Vb. Signal BL2A (associated with selected memory cell 513A) can be provided with a voltage VBL2. The value of voltage VBL2 can be based on the value of information to be stored in memory cell 513A. For example, voltage VBL2 can have one value (e.g., VBL2 = 0V or VBL2 < 0V) if information to be stored in memory cell 513A has one value (e.g., "0"), and another value (e.g., VBL2 > 0V (e.g., VBL2 = 1V)) if information to be stored in memory cell 513A has another value (e.g., "1").
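The mapping from a data value to the data-line voltage used in write operations 521 and 522 (and, analogously, in the restore stages described elsewhere herein) can be summarized by the following minimal sketch (illustrative only; the function name and the specific 0V/1V levels follow the examples above and are not limiting):

    def data_line_voltage(bit, v_low=0.0, v_high=1.0):
        # "0" is written with a low (e.g., 0V or slightly negative) data-line
        # voltage; "1" is written with a positive voltage (e.g., 1V).
        return v_low if bit == "0" else v_high

For example, data_line_voltage("1") returns 1.0 (volts) for signal BL1A or BL2A, consistent with VBL1 = VBL2 = 1V in the examples above.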
[00232] In write operation 523, both memory cells 512A and 513A are selected to store information. Thus, the voltages provided to each of signals WL2 and WL3 can be the same as those in write operations 521 and 522 for a selected memory cell, such as WL2 = WL3 = V6, BL1A = VBL1, and BL2A = VBL2.

[00233] FIG. 5F is a flow chart showing different stages of a read operation 560 of memory device 500 of FIG. 5A through FIG. 5C, according to some embodiments described herein. As shown in FIG. 5F, read operation 560 (to read information from a selected memory cell) can include different stages, such as a pre-sense stage 561, a sense (or read) stage 562, a reset stage 563, and a restore stage 564. These stages (561, 562, 563, and 564) can be performed one stage after another in the order shown in FIG. 5F, starting from pre-sense stage 561. In FIG. 5F, sense stage 562 (to determine the value of information stored in a selected memory cell) can be performed using two different sense schemes. One sense scheme (e.g., FIG. 5J) is based on the threshold voltage (Vt) shift of a transistor (e.g., transistor T3) coupled to the selected memory cell. An alternative sense scheme (e.g., FIG. 5J') is based on a property (e.g., self-latching) of a bipolar junction transistor, which is intrinsically built into a transistor (e.g., transistor T4) of the selected memory cell.

[00234] The stages (561, 562, 563, and 564) of read operation 560 are described in detail with reference to FIG. 5G through FIG. 5N.

[00235] FIG. 5G shows a schematic diagram of a portion of memory device 500 of FIG. 5A including memory cells 512A and 513A. FIG. 5H is a chart showing values of signals in FIG. 5G during pre-sense stage 561 of a read operation associated with FIG. 5F. The following description refers to FIG. 5H (impact ionization pre-sense stage) and FIG. 5G. Memory cell 512A is assumed to be a selected memory cell (to be read in this example), and memory cell 513A is assumed to be an unselected memory cell (not to be read in this example). In pre-sense stage 561, each of signals PL2 and PL3 can be provided with a voltage VPL (e.g., 0V). Signal BL2A can be provided with a voltage Vc (e.g., Vc = 0V). Signal WL3 can be provided with a voltage VL (e.g., VL = 0V) to turn off transistor T3 of memory cell 513A (the unselected memory cell). Signal RSL3 can be provided with a voltage VL (VL = 0V). Signals BLR1A and BL1A can be provided with a voltage VBL_H. Signal WL2 can be provided with a voltage VWL (0 < VWL < VBL_H), and RSL2 can be provided with a voltage VL (VL < VBL_H). Similar to pre-sense stage 461 of FIG. 4K, pre-sense stage 561 of FIG. 5H can store the information in the body of transistor T3 of memory cell 512A in the form of holes. The presence or absence of holes in the body of transistor T3 of memory cell 512A depends upon the value ("0" or "1") of information stored in capacitor plate 402a of memory cell 512A.

[00236] The following description refers to FIG. 5H' (GIDL pre-sense stage) and FIG. 5G. Memory cell 512A is assumed to be a selected memory cell (to be read in this example), and memory cell 513A is assumed to be an unselected memory cell (not to be read in this example). In pre-sense stage 561 in FIG.
5H', each of signals PL2 and PL3 can be provided with a voltage VPL (e.g., 0V). Signal BL2A can be provided with a voltage Vc (e.g., Vc = 0V). Signal WL3 can be provided with a voltage VL (e.g., VL = 0V) to turn off transistor T3 of memory cell 513A (the unselected memory cell). Signal RSL3 can be provided with a voltage VL (VL = 0V). Signals BLR1A and BL1A can be provided with a voltage VL. Signal WL2 can be provided with a voltage VWL (VWL < 0). Signal RSL2 can be provided with a voltage VL (VL = 0V). Similar to pre-sense stage 461 of FIG. 4K', pre-sense stage 561 of FIG. 5H' can store the information in the body of transistor T3 of memory cell 512A in the form of holes. The presence or absence of holes in the body of transistor T3 of memory cell 512A depends upon the value ("0" or "1") of information stored in capacitor plate 402a of memory cell 512A.

[00237] FIG. 5I shows a schematic diagram of a portion of memory device 500 of FIG. 5A including memory cells 512A and 513A. FIG. 5J is a chart showing values of signals in FIG. 5I during sense stage 562 using a sense scheme based on threshold voltage shift. Sense stage 562 is performed after pre-sense stage 561 (FIG. 5H). The following description refers to FIG. 5I and FIG. 5J. The voltage values of FIG. 5J can be the same as those shown in FIG. 5H, except for signals BLR1A, RSL2, WL2, and BL1A, which can be provided with voltages VBL_H, VPASS, VSENSE, and VBL_L, respectively.

[00238] Memory device 500 can include a detection circuit (not shown) that can be coupled to data line 521A or data line 431A. Memory device 500 can use the detection circuit to determine the value (e.g., "0" or "1") of information stored in memory cell 512A based on the presence or absence of current between data lines 521A and 431A during sense stage 562. For example, during sense stage 562, memory device 500 can determine that "0" was stored in memory cell 512A if current is detected, and "1" was stored in memory cell 512A if no current (or a negligible amount of current) is detected. The values of "0" and "1" mentioned here may be applicable to the case of the impact ionization pre-sense stage. In the case of the GIDL pre-sense stage, the logic may be reversed. Memory device 500 can include storage circuitry to store the values (e.g., "0" or "1") of information sensed from memory cell 512A during sense stage 562. Memory device 500 can use the stored value (e.g., stored in the storage circuitry) as the value for information to be written back to memory cell 512A in restore stage 564 (described below). In an alternative sense stage for FIG. 5J, the voltages provided to signals BLR1A and BL1A can be swapped, such that BLR1A = VBL_L and BL1A = VBL_H.

[00239] FIG. 5J' is a chart showing values of signals in FIG. 5I during a sense stage using an alternative sense scheme based on a property (e.g., self-latching) of a built-in bipolar junction transistor. The voltage values of FIG. 5J' can be the same as those shown in FIG. 5J, except for signals BLR1A, WL2, and BL1A in FIG. 5J', which can be provided with voltages VBL_L, VG, and VBL_H, respectively. Voltage VG can be less than zero volts, such as a slightly negative voltage (e.g., VG < 0V). Applying voltage VG of less than zero volts can induce a phenomenon such as impact ionization current (near data line 521A) and a subsequent BJT latch. Memory device 500 can include a detection circuit (not shown) to determine the value (e.g., "0" or "1") of information stored in memory cell 512A in ways similar to the current detection described above with reference to FIG. 5J.
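Before turning to the reset and restore stages, the overall sequencing of read operation 560 (FIG. 5F) can be summarized by the following minimal sketch (illustrative only; the function names, the dictionary cell model, and the hole-polarity assumptions are editorial simplifications of the signal values described in this section, not part of the disclosed embodiments):

    def pre_sense(cell, scheme):
        # Stage 561: convert the stored value into holes (or their absence) in
        # the transistor body. II pre-sense (FIG. 5H) generates holes for "0";
        # GIDL pre-sense (FIG. 5H') generates holes for "1" (assumed polarity).
        cell["holes"] = (cell["bit"] == "0") if scheme == "II" else (cell["bit"] == "1")

    def sense(cell, scheme):
        # Stage 562: holes lower Vt, so the cell conducts at VSENSE (FIG. 5J)
        # or triggers the BJT latch (FIG. 5J'). GIDL reverses the logic.
        value = "0" if cell["holes"] else "1"
        return value if scheme == "II" else ("1" if value == "0" else "0")

    def reset(cell):
        # Stage 563: clear holes and reset the Vt of transistor T3.
        cell["holes"] = False

    def restore(cell, value):
        # Stage 564: write the sensed value back.
        cell["bit"] = value

    def read_operation_560(cell, scheme="II"):
        # Stages 561 through 564 are performed one after another (FIG. 5F).
        pre_sense(cell, scheme)
        value = sense(cell, scheme)
        reset(cell)
        restore(cell, value)
        return value

For example, read_operation_560({"bit": "1"}, scheme="GIDL") returns "1", since the GIDL scheme's reversed hole polarity is undone by the reversed sense interpretation.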
[00240] FIG. 5K shows a schematic diagram of a portion of memory device 500 of FIG. 5A including memory cells 512A and 513A. FIG. 5L is a chart showing values of signals in FIG. 5K during reset stage 563, which is performed after sense stage 562 (FIG. 5J). The following description refers to FIG. 5K and FIG. 5L. The voltage values of FIG. 5L can be the same as those shown in FIG. 5J, except for signals BLR1A and BL1A, which can be provided with voltage VBL_x, and signals RSL2 and WL2, which can be provided with voltage VWLy. The value of voltage VBL_x can be zero volts (e.g., VBL_x = 0V). Alternatively, the value of voltage VBL_x can be less than zero volts, such as a slightly negative voltage (e.g., VBL_x = -0.3V).

[00241] In a particular reset stage of a different read operation, memory cells (both shown and not shown in FIG. 5K) adjacent to memory cell 513A may be reset during that particular reset stage (e.g., similar to reset stage 563 in FIG. 5L) while memory cell 513A is unselected (or unused) in that read operation. In that particular reset stage (to reset the adjacent memory cells, both shown and not shown), the value of the voltage on signal RSL3 (FIG. 5K) can be less than zero volts (e.g., slightly less than zero volts, such as RSL3 = Vn (e.g., Vn = -0.3V)) if voltages of less than zero volts are provided to signals BLR1A and BL1A during that particular reset stage. However, in order to avoid transistor leakage that may be caused by GIDL current, the value of the voltage on signal RSL3 (FIG. 5K) can be slightly less than zero volts, such as RSL3 = Vn, but not too much less than Vn (e.g., -1V < RSL3 < -0.3V).

[00242] FIG. 5M shows a schematic diagram of a portion of memory device 500 of FIG. 5A including memory cells 512A and 513A. FIG. 5N is a chart showing values of signals in FIG. 5M during restore stage 564, which is performed after reset stage 563 (FIG. 5K). As described above, restore stage 564 can be performed to restore (e.g., write back) information to memory cells 512A and 513A after memory cells 512A and 513A were sensed (e.g., based on either the sense scheme shown in FIG. 5J or the sense scheme shown in FIG. 5J'). The following description refers to FIG. 5M and FIG. 5N. As shown in FIG. 5N, signal BL2A can be provided with voltage Vx, each of signals WL3, RSL2, and RSL3 can be provided with voltage VL (e.g., VL = 0V), signal BLR1A can be provided with voltage Vc (e.g., Vc = 0V), signal WL2 can be provided with voltage V6 (e.g., V6 > VDD), and signal BL1A can be provided with voltage VBL1. Voltage VBL1 can have one value (e.g., VBL1 = 0V or VBL1 < 0V) if information to be restored in memory cell 512A has one value (e.g., "0"), and another value (e.g., VBL1 = 1V) if information to be restored in memory cell 512A has another value (e.g., "1"). Based on the voltages in FIG. 5N, information can be stored (e.g., restored) in capacitor plate 402a of memory cell 512A.

[00243] FIG. 6 shows a structure of a portion of a memory cell 613 located along a segment of a pillar 601 of a memory device 600, according to some embodiments described herein. Memory device 600 can include a plate line 653, a word line 643, and a data line 631 that can correspond to one of the plate lines, one of the word lines, and one of the data lines of memory device 400 (FIG.
4B) or memory device 500 (FIG. 5B).

[00244] As shown in FIG. 6, pillar 601 can include n+ portions and a P Si portion. Pillar 601 can be similar to one of the pillars (e.g., pillar 401A' in FIG. 4B) of memory device 400 (FIG. 4B) or one of the pillars (e.g., pillar 501A' in FIG. 5B) of memory device 500 (FIG. 5B). Portion P Si is separated from word line 643 by a dielectric (e.g., silicon dioxide) 605.

[00245] As shown in FIG. 6, memory cell 613 can include a capacitor C and a transistor T3'. Capacitor C can include a capacitor plate 602a (which is part of an n+ portion), conductive portion 613', conductive contacts 613", and a part of plate line 653. Conductive portion 613' can be formed from a relatively low resistance material (e.g., a material, such as metal, that has a resistance lower than conductively doped polysilicon). Conductive contacts 613" can also be formed from a relatively low resistance material that can be similar to the material of conductive portion 613'. Dielectrics 613k and 613o can be different dielectric materials that have different dielectric constants. Dielectric 613k can have a dielectric constant greater than the dielectric constant of dielectric 613o. For example, dielectric 613o can be silicon dioxide, and dielectric 613k can be a high-k dielectric, which is a dielectric material having a dielectric constant greater than the dielectric constant of silicon dioxide.

[00246] The structure of memory cell 613 can be substituted for the structure of each of the memory cells (e.g., memory cell 413A in FIG. 4B) of memory device 400 (FIG. 4B) or the structure of each of the memory cells (e.g., memory cell 513A in FIG. 5B) of memory device 500 (FIG. 5B). For example, the structure of capacitor C can be substituted for the structure of capacitor C in each of the memory cells of memory device 400 (FIG. 4B) or memory device 500 (FIG. 5B).

[00247] The illustrations of apparatuses (e.g., memory devices 100, 200, 400, 500, and 600) and methods (e.g., operations of memory devices 100, 200, 400, 500, and 600) are intended to provide a general understanding of the structure of various embodiments and are not intended to provide a complete description of all the elements and features of apparatuses that might make use of the structures described herein. An apparatus herein refers to, for example, either a device (e.g., any of memory devices 100, 200, 400, 500, and 600) or a system (e.g., an electronic item that can include any of memory devices 100, 200, 400, 500, and 600).

[00248] Any of the components described above with reference to FIG. 1 through FIG. 6 can be implemented in a number of ways, including simulation via software. Thus, apparatuses, e.g., memory devices 100, 200, 400, 500, and 600, or part of each of these memory devices described above, may all be characterized as "modules" (or a "module") herein. Such modules may include hardware circuitry, single- and/or multi-processor circuits, memory circuits, software program modules and objects and/or firmware, and combinations thereof, as desired and/or as appropriate for particular implementations of various embodiments.
For example, such modules may be included in a system operation simulation package, such as a software electrical signal simulation package, a power usage and ranges simulation package, a capacitance-inductance simulation package, a power/heat dissipation simulation package, a signal transmission-reception simulation package, and/or a combination of software and hardware used to operate or simulate the operation of various potential embodiments.

[00249] Memory devices 100, 200, 400, 500, and 600 may be included in apparatuses (e.g., electronic circuitry) such as high-speed computers, communication and signal processing circuitry, single- or multi-processor modules, single or multiple embedded processors, multicore processors, message information switches, and application-specific modules including multilayer, multichip modules. Such apparatuses may further be included as subcomponents within a variety of other apparatuses (e.g., electronic systems), such as televisions, cellular telephones, personal computers (e.g., laptop computers, desktop computers, handheld computers, tablet computers, etc.), workstations, radios, video players, audio players (e.g., MP3 (Motion Picture Experts Group, Audio Layer 3) players), vehicles, medical devices (e.g., heart monitors, blood pressure monitors, etc.), set top boxes, and others.

[00250] The embodiments described above with reference to FIG. 1 through FIG. 6 include apparatuses, and methods of operations performed by the apparatuses. One of the apparatuses includes volatile memory cells located along a pillar that has a length extending in a direction perpendicular to a substrate of a memory device. Each of the volatile memory cells includes a capacitor and at least one transistor. The capacitor includes a capacitor plate. The capacitor plate is either formed from a portion of a semiconductor material of the pillar or formed from a conductive material separated from the pillar by a dielectric. Other embodiments including additional apparatuses and methods are described.

[00251] In the detailed description and the claims, a list of items joined by the term "at least one of" can mean any combination of the listed items. For example, if items A and B are listed, then the phrase "at least one of A and B" means A only; B only; or A and B. In another example, if items A, B, and C are listed, then the phrase "at least one of A, B and C" means A only; B only; C only; A and B (excluding C); A and C (excluding B); B and C (excluding A); or all of A, B, and C. Item A can include a single element or multiple elements. Item B can include a single element or multiple elements. Item C can include a single element or multiple elements.

[00252] In the detailed description and the claims, a list of items joined by the term "one of" can mean only one of the listed items. For example, if items A and B are listed, then the phrase "one of A and B" means A only (excluding B), or B only (excluding A). In another example, if items A, B, and C are listed, then the phrase "one of A, B and C" means A only; B only; or C only. Item A can include a single element or multiple elements. Item B can include a single element or multiple elements. Item C can include a single element or multiple elements.

[00253] The above description and the drawings illustrate some embodiments of the inventive subject matter to enable those skilled in the art to practice the embodiments of the inventive subject matter.
Other embodiments may incorporate structural, logical, electrical, process, and other changes. Examples merely typify possible variations. Portions and features of some embodiments may be included in, or substituted for, those of others. Many other embodiments will be apparent to those of skill in the art upon reading and understanding the above description. |
Examples include techniques to mirror a command/address or interpret command/address logic at a memory device. A memory device located on a dual in-line memory module (DIMM) may include circuitry having logic capable of receiving a command/address signal and mirroring a command/address or interpreting command/address logic indicated in the command/address signal based on one or more strap pins for the memory device. |
CLAIMS
What is claimed is:
1. An apparatus comprising:
circuitry for a memory device on a first side of a dual in-line memory module (DIMM), the circuitry including logic, at least a portion of which comprises hardware, the logic to:
receive a command/address signal that indicates a first command/address to the memory device;
determine based on a strap pin of the memory device that the first command/address indicated in the command/address signal is to be mirrored; and
mirror the first command/address to the memory device such that the first command/address indicated in the command/address signal is a mirror of a second command/address to a memory device on a second side of the DIMM.
2. The apparatus of claim 1, the logic to mirror the first command/address to the memory device comprises the logic to swap respective even numbered command/addresses to the memory device with respective next higher odd numbered command/addresses to the memory device.
3. The apparatus of claim 1, the logic to determine based on the strap pin that the first command/address indicated in the command/address signal is the mirror of the second command/address comprises the logic to determine that the strap pin is connected to a power pin of the memory device.
4. The apparatus of claim 3, the power pin comprises an output storage drain power voltage (VDDQ) pin.
5. The apparatus of claim 1, the DIMM comprises a registered DIMM (RDIMM), a low power DIMM (LPDIMM), a load reduced DIMM (LRDIMM), a fully-buffered DIMM (FB-DIMM), an unbuffered DIMM (UDIMM) or a small outline DIMM (SODIMM).
6. The apparatus of claim 1, comprising the memory device to include non-volatile memory or volatile memory, wherein the volatile memory includes dynamic random access memory (DRAM) and the non-volatile memory includes 3-dimensional cross-point memory, memory that uses chalcogenide phase change material, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level phase change memory (PCM), resistive memory, ovonic memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), magnetoresistive random access memory (MRAM) memory that incorporates memristor technology, or spin transfer torque MRAM (STT-MRAM).
7. A method comprising:
receiving, by circuitry at a target memory device on a first side of a dual in-line memory module (DIMM), a command/address signal indicating a first command/address to the target memory device;
determining based on a strap pin of the target memory device that the first command/address indicated in the command/address signal is to be mirrored; and
mirroring the first command/address to the target memory device such that the first command/address indicated in the command/address signal is a mirror of a second command/address to a non-target memory device on a second side of the DIMM.
8. The method of claim 7, mirroring the first command/address to the target memory device comprises swapping respective even numbered command/addresses to the target memory device with respective next higher odd numbered command/addresses to the target memory device.
9. The method of claim 7, determining based on the strap pin that the first command/address indicated in the command/address signal is the mirror of the second command/address comprises the strap pin being connected to a power pin of the target memory device.
10. The method of claim 9, the power pin comprises an output storage drain power voltage (VDDQ) pin.
11.
An apparatus comprising means for performing the methods of any one of claims 7 to 10.
12. An apparatus comprising:
circuitry for a memory device on a first side of a dual in-line memory module (DIMM), the circuitry including logic, at least a portion of which comprises hardware, the logic to:
receive a command/address signal;
determine based on a strap pin of the memory device whether command/address logic indicated by the command/address signal has been inverted; and
interpret the command/address logic indicated by the command/address signal based on the determination.
13. The apparatus of claim 12, comprising the logic to determine that the command/address signal indicates that the command/address logic has been inverted based on the strap pin being connected to a power pin of the memory device, wherein the power pin is an output storage drain power voltage (VDDQ) pin and the command/address logic indicated by the command/address signal was inverted by circuitry for a register buffer of the DIMM.
14. The apparatus of claim 13, the DIMM comprises a registered DIMM (RDIMM), a low power DIMM (LPDIMM), a load reduced DIMM (LRDIMM), a fully-buffered DIMM (FB-DIMM), an unbuffered DIMM (UDIMM) or a small outline DIMM (SODIMM).
15. The apparatus of claim 12, comprising the memory device to include non-volatile memory or volatile memory, the volatile memory including dynamic random access memory (DRAM), the non-volatile memory including 3-dimensional cross-point memory, memory that uses chalcogenide phase change material, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level phase change memory (PCM), resistive memory, ovonic memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), magnetoresistive random access memory (MRAM) memory that incorporates memristor technology, or spin transfer torque MRAM (STT-MRAM).
16. A method comprising:
receiving, by circuitry at a target memory device on a dual in-line memory module (DIMM), a command/address signal;
determining based on a strap pin of the target memory device whether command/address logic indicated by the command/address signal has been inverted; and
interpreting the command/address logic indicated by the command/address signal based on the determination.
17. The method of claim 16, comprising determining that the command/address signal indicates that the command/address logic has been inverted based on the strap pin being connected to a power pin of the target memory device, wherein the power pin is an output storage drain power voltage (VDDQ) pin and the command/address logic indicated by the command/address signal was inverted by circuitry for a register buffer of the DIMM.
18. An apparatus comprising means for performing the methods of any one of claims 16 to 17.
19.
A system comprising:
a dual in-line memory module (DIMM) including one or more first memory devices on a first side and one or more second memory devices on a second side; and
a memory device from among the one or more first memory devices, the memory device having a first strap pin and including logic, at least a portion of which comprises hardware, the logic to:
receive a first command/address signal that indicates a first command/address targeted to the memory device;
determine whether the first strap pin is connected to a power pin; and
mirror the first command/address targeted to the memory device based on the determination such that the first command/address indicated in the first command/address signal is a mirror of a second command/address to a memory device from among the one or more second memory devices on the second side of the DIMM.
20. The system of claim 19, the logic to mirror the first command/address to the memory device from among the one or more first memory devices comprises the logic to swap respective even numbered command/addresses to the memory device from among the one or more first memory devices with respective next higher odd numbered command/addresses to the memory device from among the one or more first memory devices.
21. The system of claim 19, the power pin comprises an output storage drain power voltage (VDDQ) pin.
22. The system of claim 19, comprising the memory device from among the one or more first memory devices having a second strap pin and further including logic to:
receive a second command/address signal; and
interpret a command/address logic indicated by the second command/address signal based on the second strap pin being connected to a same or different power pin than what the first strap pin is connected to such that the command/address logic indicated by the second command/address signal is interpreted as being inverted, wherein the same or different power pin than what the first strap pin is connected to comprises a same or different output storage drain power voltage (VDDQ) pin.
23. The system of claim 22, comprising the command/address logic indicated by the second command/address signal having been inverted by circuitry for a register buffer of the DIMM.
24. The system of claim 19, the DIMM comprises a registered DIMM (RDIMM), a low power DIMM (LPDIMM), a load reduced DIMM (LRDIMM), a fully-buffered DIMM (FB-DIMM), an unbuffered DIMM (UDIMM) or a small outline DIMM (SODIMM).
25. The system of claim 19, comprising the memory device to include non-volatile memory or volatile memory, the volatile memory including dynamic random access memory (DRAM), the non-volatile memory including 3-dimensional cross-point memory, memory that uses chalcogenide phase change material, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level phase change memory (PCM), resistive memory, ovonic memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), magnetoresistive random access memory (MRAM) memory that incorporates memristor technology, or spin transfer torque MRAM (STT-MRAM). |
TECHNIQUES TO MIRROR A COMMAND/ADDRESS OR INTERPRET COMMAND/ADDRESS LOGIC AT A MEMORY DEVICE

RELATED CASE
This application claims priority under 35 U.S.C. § 365(c) to US Application No. 15/266,991 filed on September 15, 2016, entitled TECHNIQUES TO MIRROR A COMMAND/ADDRESS OR INTERPRET COMMAND/ADDRESS LOGIC AT A MEMORY DEVICE, which in turn claims the benefit of priority of US Provisional Application No. 62/304,212 filed on March 5, 2016, entitled TECHNIQUES TO MIRROR A COMMAND/ADDRESS OR INTERPRET COMMAND/ADDRESS LOGIC AT A MEMORY DEVICE. The entire disclosures of these documents are incorporated by reference herein for all purposes.

TECHNICAL FIELD
Examples described herein are generally related to memory devices on a dual in-line memory module (DIMM).

BACKGROUND
Memory modules coupled with computing platforms or systems such as those configured as a server may include dual in-line memory modules (DIMMs). DIMMs may include various types of memory, including volatile or non-volatile types of memory. As memory technologies have advanced to include memory cells having higher and higher densities, memory capacities for DIMMs have also substantially increased. Also, advances in data rates for accessing data to be written to or read from memory included in a DIMM enable large amounts of data to flow between a requestor needing access and memory devices included in the DIMM. Higher data rates may result in increased frequencies for signals transmitted to/from memory included at the DIMM.

BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates an example system.
FIG. 2 illustrates an example first portion of a dual in-line memory module (DIMM).
FIG. 3 illustrates an example second portion of a DIMM.
FIG. 4 illustrates an example pin diagram.
FIG. 5 illustrates an example memory device logic.
FIG. 6 illustrates an example apparatus.
FIG. 7 illustrates an example first logic flow.
FIG. 8 illustrates an example second logic flow.
FIG. 9 illustrates an example storage medium.
FIG. 10 illustrates an example computing platform.

DETAILED DESCRIPTION
As contemplated by the present disclosure, higher data rates for accessing data to be written to or read from memory or memory devices at a DIMM may result in increased frequencies for signals transmitted to/from memory devices at the DIMM. Techniques to improve signal integrity as well as save power, including command/address signal mirroring or inversion, may be implemented.

In some examples, memory buses transmitting data via increased frequencies may perform best when an interconnection stub between memory devices on opposite sides of a DIMM is minimized or made as short as possible. Some existing DIMMs may use a special "mirror" package or endure a long stub and the associated suboptimal signal routing. Other DIMMs may handle this by not using a different mirrored package. Rather, these other DIMMs may perform mirroring of command/addresses for pins of a memory device that can be swapped without changing functionality, for example, pins that may be purely for address bits. Pins for command bits, for instance, may not be swapped. The same limitation may apply when this type of swapping is used for inversion of command/address signals. This may substantially limit the number of pins available for mirroring.

Also, in some examples of how current computing systems implement inversion with memory devices at DIMMs, a memory controller may use multiple command cycles during initialization. A first cycle may be issued normally, and a second cycle may issue a copy of the same command with the logic inverted. This may place very complex requirements on the host memory controller to flip or invert bits.
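A minimal sketch of this host-side, two-cycle approach follows (illustrative only; the function names and the representation of command/address bits as a list of 0/1 integers are assumptions introduced here, not part of any DIMM specification):

    def issue_init_command(send_cycle, ca_bits):
        # Cycle 1: issue the command/address bits normally.
        send_cycle(ca_bits)
        # Cycle 2: issue the same command/address with every bit inverted,
        # which is the burden placed on the host memory controller.
        send_cycle([b ^ 1 for b in ca_bits])

Here send_cycle is a placeholder for whatever mechanism drives the command/address bus for one cycle; the strap-pin techniques described below avoid imposing this per-bit inversion work on the host.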
FIG. 1 illustrates a system 100. In some examples, as shown in FIG. 1, system 100 includes a host 110 coupled to DIMMs 120-1 to 120-n, where "n" is any positive whole integer with a value greater than 2. For these examples, DIMMs 120-1 to 120-n may be coupled to host 110 via one or more channels 140-1 to 140-n. As shown in FIG. 1, host 110 may include an operating system (OS) 114, one or more applications (App(s)) 116, and circuitry 112. Circuitry 112 may include one or more processing element(s) 111 (e.g., processors or processor cores) coupled with a memory controller 113. Host 110 may include, but is not limited to, a personal computer, a desktop computer, a laptop computer, a tablet, a server, a server array or server farm, a web server, a network server, an Internet server, a work station, a mini-computer, a main frame computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, multiprocessor systems, processor-based systems, or combination thereof.

In some examples, as shown in FIG. 1, DIMMs 120-1 to 120-n may include respective memory dies or devices 122-1 to 122-n. Memory devices 122-1 to 122-n may include various types of volatile and/or non-volatile memory. Volatile memory may include, but is not limited to, random-access memory (RAM), Dynamic RAM (D-RAM), double data rate synchronous dynamic RAM (DDR SDRAM), static random-access memory (SRAM), Thyristor RAM (T-RAM) or zero-capacitor RAM (Z-RAM). Non-volatile memory may include, but is not limited to, non-volatile types of memory such as 3-Dimensional (3-D) cross-point memory that are byte or block addressable. These block addressable or byte addressable non-volatile types of memory for memory devices 122-1 to 122-n may include, but are not limited to, memory that uses chalcogenide phase change material (e.g., chalcogenide glass), multi-threshold level NAND flash memory, NOR flash memory, single or multi-level phase change memory (PCM), resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), magnetoresistive random access memory (MRAM) memory that incorporates memristor technology, or spin transfer torque MRAM (STT-MRAM), or a combination of any of the above, or other non-volatile memory types.

According to some examples, memory devices 122-1 to 122-n including volatile and/or non-volatile types of memory may operate in accordance with a number of memory technologies, such as new technologies associated with DIMMs being developed that include, but are not limited to, DDR5 (DDR version 5, currently in discussion by JEDEC), LPDDR5 (LPDDR version 5, currently in discussion by JEDEC), HBM2 (HBM version 2, currently in discussion by JEDEC), and/or other new technologies based on derivatives or extensions of such specifications.
Memory devices 122-1 to 122-n may also operate in accordance with other memory technologies such as, but not limited to, DDR4 (double data rate (DDR) version 4, initial specification published in September 2012 by JEDEC), LPDDR4 (LOW POWER DOUBLE DATA RATE (LPDDR) version 4, JESD209-4, originally published by JEDEC in August 2014), WIO2 (Wide I/O 2 (WideIO2), JESD229-2, originally published by JEDEC in August 2014), HBM (HIGH BANDWIDTH MEMORY DRAM, JESD235, originally published by JEDEC in October 2013), and/or other technologies based on derivatives or extensions of these specifications.

According to some examples, DIMMs 120-1 to 120-n may be designed to function as a registered DIMM (RDIMM), a load reduced DIMM (LRDIMM), a low power DIMM (LPDIMM), a fully-buffered DIMM (FB-DIMM), an unbuffered DIMM (UDIMM) or a small outline DIMM (SODIMM). Examples are not limited to only these DIMM designs. In some examples, memory devices 122-1 to 122-n at DIMMs 120-1 to 120-n may include all or combinations of types of volatile or non-volatile memory. For example, memory devices 122-1 at DIMM 120-1 may include volatile memory (e.g., DRAM) on a front or first side and may include non-volatile memory (e.g., 3D cross-point memory) on a back or second side. In other examples, a hybrid DIMM may include combinations of non-volatile and volatile types of memory for memory devices 122-1 on either side of DIMM 120-1. In other examples, all memory devices 122-1 may be either volatile types of memory or non-volatile types of memory. In some examples, multiple channels may be coupled with memory devices maintained on a DIMM and, in some examples, separate channels may be routed to different non-volatile/volatile types and/or groups of memory devices. For example, a first channel may be routed to memory devices including non-volatile memory and a second channel to memory devices including volatile memory. In other examples, a first channel may be routed to memory devices on a first side of a DIMM and a second channel to memory devices on a second side of the DIMM. Examples are not limited to the above examples of how multiple channels may be routed to memory devices included on a single DIMM.

FIG. 2 illustrates an example DIMM portion 200. In some examples, DIMM portion 200 shows how a double sided memory module assembly may have memory devices or dies 201 and 202 on opposite sides of a printed circuit board (PCB) 203 and share common address buses for command/address buses A and B. For these examples, pins 212 and 214 on memory device 201 become a mirror image of pins 222 and 224 on memory device 202 for the common command/address buses A and B.

In some examples, a stub resulting from connections between mirrored or identical pins on either side of PCB 203, depicted by letters A and B in FIG. 2, may consume PCB routing resources and may impact bus frequency scaling. As described more below, techniques to implement mirroring may reduce the length of this stub. However, DIMM portion 200 shows an example of when mirroring is not implemented.

FIG. 3 illustrates an example DIMM portion 300. In some examples, command/address signals may be swapped at a target memory device such that the command/address signals may be coincident between memory devices on opposite sides of PCB 303. As a result, a common via through PCB 303 may be shared as shown in FIG. 3.
A command/address signal such as command/address A may now be connected to pin 322 of memory device 320 and may also be connected to pin 312 of memory device 310 to form a shortest path or stub between these memory devices that is routed through PCB 303. As described more below, a strap pin may be utilized on a given memory device to indicate that a given command/address pin has been mirrored. For example, a first command/address to memory device 320 indicated in a command/address signal received via command/address A at pin 322 may be a mirror of a second command/address to memory device 310 at pin 312, or vice versa.

According to some examples, a DIMM may use circuitry or logic at a register buffer (not shown) to produce additional copies of the command/address bus to reduce bus loading. For these examples, logic and/or circuitry at the register buffer may cause multiple bus segments routed from the register buffer to memory devices on the DIMM to propagate command/address signals. The propagated command/address signals may indicate respective command/address logic having logic levels inverted with respect to each other. Inversion of logic levels indicated in these propagated command/address signals may improve power efficiency and signal integrity. However, circuitry and/or logic at a memory device and/or at the register buffer needs to be aware that command/address logic indicated in command/address signals has been inverted. In some examples, another strap pin or bit may be utilized such that the memory device and/or logic at the register buffer can un-invert the command/address logic indicated in command/address signals for correct command/address logic interpretation.

FIG. 4 illustrates an example pin diagram 400. In some examples, pin diagram 400 may be for a memory device having DRAM included on a DIMM. For these examples, strap pins indicated in pin diagram 400 in boxes F2 (MIRROR) and G2 (CAI) may indicate whether the memory device should mirror command/addresses indicated in command/address signals and/or interpret command/address logic indicated in received command/address signals as being inverted.

According to some examples, a MIRROR pin (F2) of a targeted memory device designed according to pin diagram 400 may be connected to a power pin such as an output storage drain power voltage (VDDQ) pin (e.g., H1). For these examples, the targeted memory device may internally swap even numbered command/addresses (CAs) with the next higher respective odd numbered CAs in order to mirror a given CA to a targeted memory device. Example swapping pairs to mirror the given CA according to pin diagram 400 may include swapping CA2 with CA3 (not CA1), CA4 with CA5 (not CA3), CA6 with CA7 (not CA5), etc. In some examples, the MIRROR pin may be tied or connected to a ground pin such as a VSSQ pin (e.g., G1) if no CA swap is required or needed.

In some examples, with the CAI (Command Address Inversion) pin connected to a VDDQ pin (e.g., H1), a memory device designed to use a pin diagram such as pin diagram 400 may internally invert the command/address logic level indicated in received command/address signals (e.g., routed from a register buffer). According to some examples, the CAI pin may be connected or tied to a ground pin such as a VSSQ pin (e.g., G1) if the command/address logic is not to be interpreted as being inverted. The two independent strap pins of MIRROR and CAI may allow for four different combinations that may include [no mirror, no inversion], [no mirror, inversion], [mirror, no inversion], or [mirror, inversion].
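The even/odd pair-swap rule for the MIRROR strap pin can be summarized by the following minimal sketch (illustrative only; the function name is an assumption introduced here):

    def mirrored_ca_number(ca):
        # Each even numbered CA maps to the next higher odd numbered CA and
        # vice versa: CA2 <-> CA3 (not CA1), CA4 <-> CA5 (not CA3), CA6 <-> CA7, etc.
        return ca + 1 if ca % 2 == 0 else ca - 1

For example, mirrored_ca_number(2) returns 3 and mirrored_ca_number(5) returns 4, matching the swapping pairs listed above.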
FIG. 5 illustrates an example memory device logic 500. In some examples, as shown in FIG. 5, circuitry of memory device logic 500 may be activated based on whether one or both of a strap pin 501 for MIRROR or a strap pin 502 for CAI have been connected to a power/VDDQ pin (results in 1) or connected to a ground/VSSQ pin (results in 0). As shown in FIG. 5, if a logic 1 is produced from strap pin 501, a memory device including memory device logic 500 may flip command/address signals received through CMD/ADD pins 510 (for command/addresses CA0 to CA13) via use of multiplexers 530. Also, if a logic 1 is produced from strap pin 502, a memory device including memory device logic 500 may invert command/address logic indicated in command/address signals received through CMD/ADD pins 510 (for command/addresses CA0 to CA13) via use of XOR gates 520.
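A bit-level model of this datapath may look like the following sketch (illustrative only; it models one XOR gate per CA pin and one two-input multiplexer per even/odd pin pair, and the ordering of the XOR and multiplexer stages here is an assumption rather than something taken from FIG. 5):

    def memory_device_logic_500(ca_pins, mirror=0, cai=0):
        # XOR gates 520: each received CA bit is XORed with the CAI strap level,
        # so a strap level of 1 un-inverts previously inverted logic.
        bits = [b ^ cai for b in ca_pins]
        # Multiplexers 530: when the MIRROR strap level is 1, each even/odd
        # pin pair (CA0/CA1, CA2/CA3, ...) is swapped; otherwise passed through.
        if mirror:
            for i in range(0, len(bits) - 1, 2):
                bits[i], bits[i + 1] = bits[i + 1], bits[i]
        return bits

For example, memory_device_logic_500([1] + [0] * 13, mirror=1) presents the value received on the CA0 pin at the internal CA1 position, consistent with the pair-swap behavior described above, while cai=1 flips every bit.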
Mirror logic 622-1 may be executed by circuitry 620 to receive a first command/address signal indicating a first command/address to a target memory device that may include apparatus 600. The target memory device may be located on a first side of a DIMM. The command/address signal may be included in CMD/ADDs to mirror 605. Mirror logic 622-1 may then mirror the first command/address such that the first command/address indicated in the command/address signal is a mirror of a second command/address to a memory device on a second side of the DIMM. The mirrored command/address may be included in mirrored CMD/ADDs 630.

In some examples, apparatus 600 may also include an invert logic 622-2. Invert logic 622-2 may be executed by circuitry 620 to receive a command/address signal at the memory device that includes apparatus 600. Invert logic 622-2 may determine based on a strap pin of the memory device whether command/address logic indicated by the command/address signal has been inverted and then interpret the command/address logic indicated by the command/address signal based on the determination. The inverted command/address logic may be included in CMD/ADD signals 610 and the interpreted command/address logic may be included in interpreted CMD/ADD logic 635.

FIG. 7 illustrates an example logic flow 700. Logic flow 700 may be representative of some or all of the operations executed by one or more logic, features, or devices described herein, such as apparatus 600. More particularly, logic flow 700 may be implemented by mirror logic 622-1.

According to some examples, logic flow 700 at block 702 may receive a command/address signal indicating a first command/address to a target memory device on a first side of a DIMM. For these examples, mirror logic 622-1 may receive the command/address signal.

In some examples, logic flow 700 at block 704 may determine based on a strap pin of the target memory device that the first command/address indicated in the command/address signal is to be mirrored. For these examples, mirror logic 622-1 may make this determination.

According to some examples, logic flow 700 at block 706 may mirror the first command/address to the target memory device such that the first command/address indicated in the command/address signal is a mirror of a second command/address to a non-target memory device on a second side of the DIMM. For these examples, mirror logic 622-1 may mirror the first command/address to the target memory device.

FIG. 8 illustrates an example logic flow 800. Logic flow 800 may be representative of some or all of the operations executed by one or more logic, features, or devices described herein, such as apparatus 600. More particularly, logic flow 800 may be implemented by invert logic 622-2.

According to some examples, logic flow 800 at block 802 may receive a command/address signal at a memory device on a DIMM. For these examples, invert logic 622-2 may receive the command/address signal.

In some examples, logic flow 800 at block 804 may determine based on a strap pin of the memory device whether command/address logic indicated by the command/address signal has been inverted.
For these examples, invert logic 622-2 may determine whether the command/address logic has been inverted.

According to some examples, logic flow 800 at block 806 may interpret the command/address logic indicated by the command/address signal based on the determination of whether the command/address logic indicated in the command/address signal has been inverted. For these examples, invert logic 622-2 may interpret the command/address logic based on the determination.

FIG. 9 illustrates an example storage medium 900. The storage medium 900 may comprise an article of manufacture. In some examples, storage medium 900 may include any non-transitory computer readable medium or machine readable medium, such as an optical, magnetic or semiconductor storage medium. Storage medium 900 may store various types of computer executable instructions, such as instructions to implement logic flow 700 or 800. Examples of a computer readable or machine readable storage medium may include any tangible media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. Examples of computer executable instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, object-oriented code, visual code, and the like. The examples are not limited in this context.

FIG. 10 illustrates an example computing platform 1000. In some examples, as shown in FIG. 10, computing platform 1000 may include a memory system 1030, a processing component 1040, other platform components 1050 or a communications interface 1060. According to some examples, computing platform 1000 may be implemented in a computing device.

According to some examples, memory system 1030 may include a controller 1032 and memory device(s) 1034. For these examples, logic and/or features resident at or located at controller 1032 may execute at least some processing operations or logic for apparatus 600 and may include storage media that includes storage medium 900. Also, memory device(s) 1034 may include similar types of volatile or non-volatile memory (not shown) that are described above for memory devices 122, 210, 220, 310 or 320 shown in FIGS. 1-3. In some examples, controller 1032 may be part of a same die with memory device(s) 1034. In other examples, controller 1032 and memory device(s) 1034 may be located on a same die or integrated circuit with a processor (e.g., included in processing component 1040). In yet other examples, controller 1032 may be in a separate die or integrated circuit coupled with or on memory device(s) 1034.

According to some examples, processing component 1040 may include various hardware elements, software elements, or a combination of both. Examples of hardware elements may include devices, logic devices, components, processors, microprocessors, circuits, processor circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, ASICs, programmable logic devices (PLDs), digital signal processors (DSPs), FPGAs/programmable logic, memory units, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth.
Examples of software elements may include software components, programs, applications, computer programs, application programs, system programs, software development programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, APIs, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an example is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints, as desired for a given example.

In some examples, other platform components 1050 may include common computing elements, such as one or more processors, multi-core processors, co-processors, memory units, chipsets, controllers, peripherals, interfaces, oscillators, timing devices, video cards, audio cards, multimedia I/O components (e.g., digital displays), power supplies, and so forth. Examples of memory units associated with either other platform components 1050 or memory system 1030 may include, without limitation, various types of computer readable and machine readable storage media in the form of one or more higher speed memory units, such as read-only memory (ROM), RAM, DRAM, DDR DRAM, synchronous DRAM (SDRAM), DDR SDRAM, SRAM, programmable ROM (PROM), EPROM, EEPROM, flash memory, ferroelectric memory, SONOS memory, polymer memory such as ferroelectric polymer memory, nanowire, FeTRAM or FeRAM, ovonic memory, phase change memory, memristors, STT-MRAM, magnetic or optical cards, and any other type of storage media suitable for storing information.

In some examples, communications interface 1060 may include logic and/or features to support a communication interface. For these examples, communications interface 1060 may include one or more communication interfaces that operate according to various communication protocols or standards to communicate over direct or network communication links. Direct communications may occur through a direct interface via use of communication protocols or standards described in one or more industry standards (including progenies and variants) such as those associated with the SMBus specification, the PCIe specification, the NVMe specification, the SATA specification, the SAS specification or the USB specification. Network communications may occur through a network interface via use of communication protocols or standards such as those described in one or more Ethernet standards promulgated by the IEEE.
For example, one such Ethernet standard may include IEEE 802.3-2012, Carrier Sense Multiple Access with Collision Detection (CSMA/CD) Access Method and Physical Layer Specifications, published in December 2012 (hereinafter "IEEE 802.3").

Computing platform 1000 may be part of a computing device that may be, for example, user equipment, a computer, a personal computer (PC), a desktop computer, a laptop computer, a notebook computer, a netbook computer, a tablet, a smart phone, embedded electronics, a gaming console, a server, a server array or server farm, a web server, a network server, an Internet server, a work station, a mini-computer, a mainframe computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, a multiprocessor system, a processor-based system, or a combination thereof. Accordingly, functions and/or specific configurations of computing platform 1000 described herein may be included or omitted in various embodiments of computing platform 1000, as suitably desired.

The components and features of computing platform 1000 may be implemented using any combination of discrete circuitry, ASICs, logic gates and/or single chip architectures. Further, the features of computing platform 1000 may be implemented using microcontrollers, programmable logic arrays and/or microprocessors, or any combination of the foregoing where suitably appropriate. It is noted that hardware, firmware and/or software elements may be collectively or individually referred to herein as "logic", "circuit" or "circuitry."

One or more aspects of at least one example may be implemented by representative instructions stored on at least one machine-readable medium which represents various logic within the processor, which when read by a machine, computing device or system causes the machine, computing device or system to fabricate logic to perform the techniques described herein. Such representations may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Various examples may be implemented using hardware elements, software elements, or a combination of both. In some examples, hardware elements may include devices, components, processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, ASICs, PLDs, DSPs, FPGAs, memory units, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. In some examples, software elements may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, APIs, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an example is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints, as desired for a given implementation.

Some examples may include an article of manufacture or at least one computer-readable medium.
A computer-readable medium may include a non-transitory storage medium to store logic. In some examples, the non-transitory storage medium may include one or more types of computer-readable storage media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. In some examples, the logic may include various software elements, such as software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, APIs, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof.

According to some examples, a computer-readable medium may include a non-transitory storage medium to store or maintain instructions that when executed by a machine, computing device or system, cause the machine, computing device or system to perform methods and/or operations in accordance with the described examples. The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. The instructions may be implemented according to a predefined computer language, manner or syntax, for instructing a machine, computing device or system to perform a certain function. The instructions may be implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language.

Some examples may be described using the expression "in one example" or "an example" along with their derivatives. These terms mean that a particular feature, structure, or characteristic described in connection with the example is included in at least one example. The appearances of the phrase "in one example" in various places in the specification are not necessarily all referring to the same example.

Some examples may be described using the expressions "coupled" and "connected" along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, descriptions using the terms "connected" and/or "coupled" may indicate that two or more elements are in direct physical or electrical contact with each other. The term "coupled," however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.

The following examples pertain to additional examples of technologies disclosed herein.

Example 1. An example apparatus may include circuitry for a memory device on a first side of a DIMM. The circuitry may include logic, at least a portion of which is hardware. The logic may receive a command/address signal that indicates a first command/address to the target memory device. The logic may also determine based on a strap pin of the memory device that the first command/address indicated in the command/address signal is to be mirrored. The logic may also mirror the first command/address to the memory device such that the first command/address indicated in the command/address signal is a mirror of a second command/address to a memory device on a second side of the DIMM.

Example 2.
The apparatus of example 1, the logic to mirror the first command/address to the target memory device may include the logic to swap respective even numbered command/addresses to the target memory device with the respective next higher odd numbered command/addresses to the target memory device.

Example 3. The apparatus of example 1, the logic to determine based on the strap pin that the first command/address indicated in the command/address signal is the mirror of the second command/address includes the logic to determine that the strap pin is connected to a power pin of the target memory device.

Example 4. The apparatus of example 3, the power pin includes a VDDQ pin.

Example 5. The apparatus of example 1, the DIMM may be an RDIMM, an LPDIMM, an LRDIMM, an FB-DIMM, a UDIMM or a SODIMM.

Example 6. The apparatus of example 1, the memory device may include non-volatile memory or volatile memory.

Example 7. The apparatus of example 6, the volatile memory may be DRAM.

Example 8. The apparatus of example 6, the non-volatile memory may be 3-dimensional cross-point memory, memory that uses chalcogenide phase change material, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level PCM, resistive memory, ovonic memory, nanowire memory, FeTRAM, MRAM memory that incorporates memristor technology, or STT-MRAM.

Example 9. An example method may include receiving, by circuitry at a target memory device on a first side of a DIMM, a command/address signal indicating a first command/address to the target memory device. The method may also include determining based on a strap pin of the target memory device that the first command/address indicated in the command/address signal is to be mirrored. The method may also include mirroring the first command/address to the target memory device such that the first command/address indicated in the command/address signal is a mirror of a second command/address to a non-target memory device on a second side of the DIMM.

Example 10. The method of example 9, mirroring the first command/address to the target memory device may include swapping respective even numbered command/addresses to the target memory device with the respective next higher odd numbered command/addresses to the target memory device.

Example 11. The method of example 9, determining based on the strap pin that the first command/address indicated in the command/address signal is the mirror of the second command/address may include determining that the strap pin is connected to a power pin of the target memory device.

Example 12. The method of example 11, the power pin may be a VDDQ pin.

Example 13. The method of example 9, the DIMM may be an RDIMM, an LPDIMM, an LRDIMM, an FB-DIMM, a UDIMM or a SODIMM.

Example 14. The method of example 9, the memory device may include non-volatile memory or volatile memory.

Example 15. The method of example 14, the volatile memory may be DRAM.

Example 16. The method of example 14, the non-volatile memory may be 3-dimensional cross-point memory, memory that uses chalcogenide phase change material, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level PCM, resistive memory, ovonic memory, nanowire memory, FeTRAM, MRAM memory that incorporates memristor technology, or STT-MRAM.

Example 17. An example at least one machine readable medium may include a plurality of instructions that in response to being executed by a system may cause the system to carry out a method according to any one of examples 9 to 16.

Example 18.
An example apparatus may include means for performing the methods of any one of examples 9 to 16.

Example 19. An example apparatus may include circuitry for a memory device on a first side of a DIMM, the circuitry including logic, at least a portion of which may be hardware. The logic may receive a command/address signal. The logic may also determine based on a strap pin of the memory device whether command/address logic indicated by the command/address signal has been inverted. The logic may also interpret the command/address logic indicated by the command/address signal based on the determination.

Example 20. The apparatus of example 19, the logic may determine that the command/address signal indicates that the command/address logic has been inverted based on the strap pin being connected to a power pin of the target memory device.

Example 21. The apparatus of example 20, the power pin may be a VDDQ pin.

Example 22. The apparatus of example 19, the command/address logic indicated by the command/address signal may be inverted by circuitry for a register buffer of the DIMM.

Example 23. The apparatus of example 19, the DIMM may be an RDIMM, an LPDIMM, an LRDIMM, an FB-DIMM, a UDIMM or a SODIMM.

Example 24. The apparatus of example 19, the memory device may include non-volatile memory or volatile memory.

Example 25. The apparatus of example 24, the volatile memory may be DRAM.

Example 26. The apparatus of example 24, the non-volatile memory may be 3-dimensional cross-point memory, memory that uses chalcogenide phase change material, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level PCM, resistive memory, ovonic memory, nanowire memory, FeTRAM, MRAM memory that incorporates memristor technology, or STT-MRAM.

Example 27. An example method may include receiving, by circuitry at a target memory device on a DIMM, a command/address signal. The method may also include determining based on a strap pin of the memory device whether command/address logic indicated by the command/address signal has been inverted. The method may also include interpreting the command/address logic indicated by the command/address signal based on the determination.

Example 28. The method of example 27 may also include determining that the command/address signal indicates that the command/address logic has been inverted based on the strap pin being connected to a power pin of the target memory device.

Example 29. The method of example 28, the power pin may be a VDDQ pin.

Example 30. The method of example 27, the command/address logic indicated by the command/address signal may have been inverted by circuitry for a register buffer of the DIMM.

Example 31. The method of example 27, the DIMM may be an RDIMM, an LPDIMM, an LRDIMM, an FB-DIMM, a UDIMM or a SODIMM.

Example 32. The method of example 27, the memory device may include non-volatile memory or volatile memory.

Example 33. The method of example 32, the volatile memory may be DRAM.

Example 34. The method of example 32, the non-volatile memory may be 3-dimensional cross-point memory, memory that uses chalcogenide phase change material, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level PCM, resistive memory, ovonic memory, nanowire memory, FeTRAM, MRAM memory that incorporates memristor technology, or STT-MRAM.

Example 35.
An example at least one machine readable medium may include a plurality of instructions that in response to being executed by a system may cause the system to carry out a method according to any one of examples 27 to 34.

Example 36. An example apparatus may include means for performing the methods of any one of examples 27 to 34.

Example 37. An example system may include a DIMM including one or more first memory devices on a first side and one or more second memory devices on a second side. The system may also include a memory device from among the one or more first memory devices, the memory device having a first strap pin and including logic, at least a portion of which may be hardware. For these examples, the logic may receive a first command/address signal that indicates a first command/address targeted to the memory device. The logic may also determine whether the first strap pin is connected to a power pin. The logic may also mirror the first command/address targeted to the memory device based on the determination such that the first command/address indicated in the first command/address signal is a mirror of a second command/address to a memory device from among the one or more second memory devices on the second side of the DIMM.

Example 38. The system of example 37, the logic to mirror the first command/address to the memory device from among the first one or more memory devices may include the logic to swap respective even numbered command/addresses to the memory device from among the first one or more memory devices with the respective next higher odd numbered command/addresses to the memory device from among the first one or more memory devices.

Example 39. The system of example 37, the power pin may be a VDDQ pin.

Example 40. The system of example 37, the memory device from among the one or more first memory devices may have a second strap pin. For these examples, the memory device may further include logic that may receive a second command/address signal and interpret a command/address logic indicated by the second command/address signal based on the second strap pin being connected to a same or different power pin than what the first strap pin is connected to, such that the command/address logic indicated by the second command/address signal is interpreted as being inverted.

Example 41. The system of example 40, the same or different power pin than what the first strap pin is connected to may be a same or different VDDQ pin.

Example 42. The system of example 40, the command/address logic indicated by the second command/address signal may have been inverted by circuitry for a register buffer of the DIMM.

Example 43. The system of example 37, the DIMM may be an RDIMM, an LPDIMM, an LRDIMM, an FB-DIMM, a UDIMM or a SODIMM.

Example 44. The system of example 37, the memory device may include non-volatile memory or volatile memory.

Example 45. The system of example 44, the volatile memory may be DRAM.

Example 46. The system of example 44, the non-volatile memory may be 3-dimensional cross-point memory, memory that uses chalcogenide phase change material, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level PCM, resistive memory, ovonic memory, nanowire memory, FeTRAM, MRAM memory that incorporates memristor technology, or STT-MRAM.

It is emphasized that the Abstract of the Disclosure is provided to comply with 37 C.F.R. Section 1.72(b), requiring an abstract that will allow the reader to quickly ascertain the nature of the technical disclosure.
It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the examples. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single example for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed examples require more features than are expressly recited in each example. Rather, as the following examples reflect, inventive subject matter lies in less than all features of a single disclosed example. Thus the following examples are hereby incorporated into the Detailed Description, with each example standing on its own as a separate example. In the appended examples, the terms "including" and "in which" are used as the plain-English equivalents of the terms "comprising" and "wherein," respectively. Moreover, the terms "first," "second," "third," and so forth, are used merely as labels, and are not intended to impose numerical requirements on their objects.

Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended examples is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
The disclosure relates to a decoding device, comprising: a receiver configured to provide a sequence of information bits comprising context redundancy information, wherein the sequence of information bits is encoded based on a predefined channel code; a trellis generation logic configured to generate a plurality of trellis states based on the sequence of information bits and the channel code; a trellis reduction logic configured to reduce the plurality of trellis states by at least one trellis state based on the context redundancy information; and a decoder configured to decode the sequence of information bits by using a metric based on the reduced number of trellis states. |
1. A decoding device (700), comprising:
a unit (701) configured to provide a sequence of information bits (702) comprising context redundancy information within a content of the information bits, wherein the context redundancy information is a bit field in which at least one bit combination is invalid, wherein the sequence of information bits (702) is encoded by a predefined channel code;
a trellis generation logic (703) configured to generate a plurality of trellis states (704) based on the sequence of information bits (702) and the channel code;
a trellis reduction logic (705) configured to reduce the plurality of trellis states (704) by at least one trellis state based on the context redundancy information; and
a decoder (707) configured to decode the sequence of information bits (702) by using a metric based on the reduced trellis states (706),
wherein the context redundancy information comprises cross-context redundancy information,
wherein the cross-context redundancy information indicates a correlation between a time division duplex uplink downlink, TDD-ULDL, configuration field (402) and a side link synchronization sub-frame, SLSS, (404), such that for an SLSS (404) in TDD mode the TDD-ULDL configuration field (402) is restricted to indicate an uplink sub-frame.

2. The decoding device (700) of claim 1, wherein the context redundancy information is provided at predefined positions of the sequence of information bits (702).

3. The decoding device (700) of one of the preceding claims, wherein the sequence of information bits (702) is correlated by the context redundancy information before being encoded by the channel code.

4. The decoding device (700) of one of the preceding claims, wherein the trellis reduction logic (705) is configured to remove trellis states which correspond to invalid bit allocations in the sequence of information bits (702).

5. The decoding device (700) of one of the preceding claims, wherein the trellis reduction logic (705) is configured to remove trellis states which correspond to invalid field combinations in the sequence of information bits (702).

6. The decoding device (700) of one of the preceding claims, wherein the trellis reduction logic (705) is configured to remove trellis states which can only be reached by invalid paths.

7. The decoding device (700) of one of the preceding claims, wherein the trellis reduction logic (705) is configured to provide the reduced trellis states (706) during an offline operation of the decoding device (700).

8. The decoding device (700) of claim 6, wherein the trellis reduction logic (705) is configured to provide the reduced trellis states (706) during an online operation of the decoding device (700).

9. The decoding device (700) of claim 8, wherein the trellis reduction logic (705) is configured to provide the reduced trellis states (706) based on trace back for decoding the context redundancy information.

10. The decoding device (700) of one of the preceding claims, wherein the context redundancy information comprises at least one of self-context redundancy information and cross-context redundancy information, wherein the self-context redundancy information of a bit field (402, 404) indicates a restriction for specific bit values in the bit field (402, 404), and wherein the cross-context redundancy information between two bit fields (402, 404) indicates a restriction for specific field combinations for the two bit fields (402, 404).

11. The decoding device (700) of claim 10, wherein the self-context redundancy information comprises self-context redundancy of a bit field (404)
indicating a side link bandwidth within a sidelink master information block (SL-MIB) for device-to-device (D2D) communication.

12. The decoding device (700) of claim 11, wherein the cross-context redundancy information comprises cross-context redundancy between a bit field (402) indicating a time division duplex uplink downlink (TDD-ULDL) configuration and the bit field (404) indicating the side link bandwidth within the sidelink master information block (SL-MIB) for device-to-device (D2D) communication.

13. A decoding method (200), comprising:
providing (201) a sequence of information bits comprising context redundancy information, wherein the context redundancy information is a bit field in which at least one bit combination is invalid, wherein the sequence of information bits is encoded based on a predefined channel code;
generating (202) a plurality of trellis states based on the sequence of information bits and the channel code;
reducing (203) the plurality of trellis states by at least one trellis state based on the context redundancy information; and
decoding (204) the sequence of information bits by using a metric based on the reduced number of trellis states,
wherein the context redundancy information comprises cross-context redundancy information,
wherein the cross-context redundancy information indicates a correlation between a time division duplex uplink downlink, TDD-ULDL, configuration field and a side link synchronization sub-frame, SLSS, (404), such that for an SLSS (404) in TDD mode the TDD-ULDL configuration field (402) is restricted to indicate an uplink sub-frame.
FIELD

The disclosure relates to a decoding device and a decoding method that exploit context redundancy to reduce a number of trellis states. In particular, the disclosure relates to techniques for increasing decoding sensitivity by exploiting context redundancy in mobile communication devices, in particular devices applying LTE (Long Term Evolution) D2D (device-to-device) side link communication.

BACKGROUND

In a digital communications system 100 as illustrated in Fig. 1, a stream of information bits 112 is transferred from one point to another through a communication channel 130 and is therefore susceptible to noise 132. Forward Error Correction (FEC) techniques improve the channel capacity by carefully adding redundant information to the data being transmitted 110 through the channel 130. In a FEC system, the transmitted data is encoded in such a way that the receiver 120 can correct, as well as detect, errors caused by channel noise 132. Convolutional encoding 111 with Viterbi decoding 121 is a FEC technique that is well suited for such communication systems. The convolutional encoder 111 inserts redundant information bits into the data stream 112 so that the decoder 121 can reduce and correct errors caused by the channel 130. In today's communication networks there is a steady need for further improvement of the decoding sensitivity.

The patent application US 2009/175388 A1 discloses a Viterbi decoder and decoding method in which the number of trellis states is reduced at some of the trellis stages based on knowledge of specific bits in data frames, for example in a frame control header (FCH) Downlink Frame Prefix (DLFP) according to IEEE 802.16e. A Hypothesis Engine makes multiple hypotheses when there are only a limited number of valid bit combinations for a field.

The patent application US 2009/041166 A1 discloses a method of Viterbi decoding the 24 bits of an FCH message compliant with IEEE 802.16e which are encoded with a tail-biting constraint length 7 convolutional code. A-priori knowledge of some bits allows the number of trellis states in a Viterbi decoder to be reduced from 64 to 4 at the beginning and end of the data block, and to also be reduced within the 24 stages. For example, reserved bits are pre-determined and a 3-bit code indication field can take two values, namely "010" and "000".

SUMMARY

The object to be solved is to provide further improvement of the decoding sensitivity. This object is achieved by a decoding device and a decoding method having the features of the independent claims.

Various embodiments provide a decoding device. The decoding device comprises a unit configured to provide a sequence of information bits comprising context redundancy information within a content of the information bits, wherein the context redundancy information is a bit field in which at least one bit combination is invalid, wherein the sequence of information bits is encoded by a predefined channel code. The decoding device further comprises a trellis generation logic configured to generate a plurality of trellis states based on the sequence of information bits and the channel code. The decoding device further comprises a trellis reduction logic configured to reduce the plurality of trellis states by at least one trellis state based on the context redundancy information.
The decoding device further comprises a decoder configured to decode the sequence of information bits by using a metric based on the reduced trellis states, wherein the context redundancy information comprises cross-context redundancy information, wherein the cross-context redundancy information indicates a correlation between a time division duplex uplink downlink (TDD-ULDL) configuration field and a side link synchronization sub-frame (SLSS), such that for an SLSS in TDD mode the TDD-ULDL configuration field is restricted to indicate an uplink sub-frame. The decoding device mentioned in this paragraph provides a first example.

The context redundancy information may be provided at predefined positions of the sequence of information bits. The features mentioned in this paragraph in combination with the first example provide a second example.

The sequence of information bits may be correlated by the context redundancy information before being encoded based on the channel code. The features mentioned in this paragraph in combination with any one of the first example to the second example provide a third example.

The trellis reduction logic may be configured to remove trellis states which correspond to invalid bit allocations in the sequence of information bits. The features mentioned in this paragraph in combination with any one of the first example to the third example provide a fourth example.

The trellis reduction logic may be configured to remove trellis states which correspond to invalid field combinations in the sequence of information bits. The features mentioned in this paragraph in combination with any one of the first example to the fourth example provide a fifth example.

The trellis reduction logic may be configured to remove trellis states which can only be reached by invalid paths. The features mentioned in this paragraph in combination with any one of the first example to the fifth example provide a sixth example.

The trellis reduction logic may be configured to provide the reduced number of trellis states during an offline operation of the decoding device. The features mentioned in this paragraph in combination with any one of the first example to the sixth example provide a seventh example.

The trellis reduction logic may be configured to provide the reduced number of trellis states during an online operation of the decoding device. The features mentioned in this paragraph in combination with any one of the first example to the seventh example provide an eighth example.

The trellis reduction logic may be configured to provide the reduced number of trellis states based on trace back for decoding the context redundancy information. The features mentioned in this paragraph in combination with the eighth example provide a ninth example.

The trellis reduction logic may be configured to use the decoded context redundancy information to restrict the plurality of trellis states. The features mentioned in this paragraph in combination with the ninth example provide a tenth example.

The trellis reduction logic may be configured to provide the reduced number of trellis states based on evaluating probabilities for different possible context redundancy information. The features mentioned in this paragraph in combination with the eighth example provide an eleventh example.

The trellis reduction logic may be configured to provide the reduced number of trellis states based on evaluating hypotheses of different possible context redundancy information.
The features mentioned in this paragraph in combination with the eleventh example provide a twelfth example.

The trellis reduction logic may be configured to evaluate the hypotheses based on a cyclic redundancy check. The features mentioned in this paragraph in combination with the twelfth example provide a thirteenth example.

The context redundancy information may comprise at least one of self-context redundancy information and cross-context redundancy information. The features mentioned in this paragraph in combination with any one of the first example to the thirteenth example provide a fourteenth example.

The self-context redundancy information may comprise self-context redundancy of a bit field indicating a side link bandwidth within a side link master information block (SL-MIB) for device-to-device (D2D) communication. The features mentioned in this paragraph in combination with the fourteenth example provide a fifteenth example.

The cross-context redundancy information may comprise cross-context redundancy between a bit field indicating a time division duplex uplink downlink (TDD-ULDL) configuration and the bit field indicating the side link bandwidth within the side link master information block (SL-MIB) for device-to-device (D2D) communication. The features mentioned in this paragraph in combination with the fifteenth example provide a sixteenth example.

The decoder may be configured to decode the sequence of information bits based on Viterbi decoding. The features mentioned in this paragraph in combination with any one of the first example to the sixteenth example provide a seventeenth example.

Various embodiments provide a decoding method. The decoding method comprises providing a sequence of information bits comprising context redundancy information, wherein the context redundancy information is a bit field in which at least one bit combination is invalid, wherein the sequence of information bits is encoded based on a predefined channel code. The decoding method further comprises generating a plurality of trellis states based on the sequence of information bits and the channel code. The decoding method further comprises reducing the plurality of trellis states by at least one trellis state based on the context redundancy information. The decoding method further comprises decoding the sequence of information bits by using a metric based on the reduced number of trellis states, wherein the context redundancy information comprises cross-context redundancy information, wherein the cross-context redundancy information indicates a correlation between a time division duplex uplink downlink (TDD-ULDL) configuration field and a side link synchronization sub-frame (SLSS), such that for an SLSS in TDD mode the TDD-ULDL configuration field is restricted to indicate an uplink sub-frame. The decoding method mentioned in this paragraph provides an eighteenth example.

The context redundancy information may be provided at predefined positions of the sequence of information bits. The features mentioned in this paragraph in combination with the eighteenth example provide a nineteenth example.

The sequence of information bits may be correlated by the context redundancy information before being encoded by the channel code. The features mentioned in this paragraph in combination with any of the eighteenth example to the nineteenth example provide a twentieth example.

The decoding method may comprise removing trellis states which correspond to invalid bit allocations in the sequence of information bits.
The features mentioned in this paragraph in combination with any of the eighteenth example to the twentieth example provide a twenty-first example.

The decoding method may comprise removing trellis states which correspond to invalid field combinations in the sequence of information bits. The features mentioned in this paragraph in combination with any of the eighteenth example to the twenty-first example provide a twenty-second example.

The decoding method may comprise removing trellis states which can only be reached by invalid paths. The features mentioned in this paragraph in combination with any of the eighteenth example to the twenty-second example provide a twenty-third example.

The decoding method may comprise providing the reduced number of trellis states during an offline processing. The features mentioned in this paragraph in combination with any of the eighteenth example to the twenty-third example provide a twenty-fourth example.

The decoding method may comprise providing the reduced number of trellis states during an online processing. The features mentioned in this paragraph in combination with any of the eighteenth example to the twenty-fourth example provide a twenty-fifth example.

The decoding method may comprise providing the reduced number of trellis states based on trace back for decoding the context redundancy information. The features mentioned in this paragraph in combination with the twenty-fifth example provide a twenty-sixth example.

The decoding method may comprise using the decoded context redundancy information to restrict the plurality of trellis states. The features mentioned in this paragraph in combination with the twenty-sixth example provide a twenty-seventh example.

The decoding method may comprise providing the reduced number of trellis states based on evaluating probabilities for different possible context redundancy information. The features mentioned in this paragraph in combination with the twenty-fifth example provide a twenty-eighth example.

The decoding method may comprise providing the reduced number of trellis states based on evaluating hypotheses of different possible context redundancy information. The features mentioned in this paragraph in combination with the twenty-eighth example provide a twenty-ninth example.

The decoding method may comprise evaluating the hypotheses based on a cyclic redundancy check. The features mentioned in this paragraph in combination with the twenty-ninth example provide a thirtieth example.

The context redundancy information may comprise at least one of self-context redundancy information and cross-context redundancy information. The features mentioned in this paragraph in combination with any of the eighteenth example to the thirtieth example provide a thirty-first example.

The self-context redundancy information may comprise self-context redundancy of a bit field indicating a side link bandwidth within a side link master information block (SL-MIB) for device-to-device (D2D) communication. The features mentioned in this paragraph in combination with the thirty-first example provide a thirty-second example.

The cross-context redundancy information may comprise cross-context redundancy between a bit field indicating a time division duplex uplink downlink (TDD-ULDL) configuration and a bit field indicating a sub-frame number (SLSS).
The features mentioned in this paragraph in combination with any of the thirty-first example to the thirty-second example provide a thirty-third example.

The decoding method may comprise decoding the sequence of information bits based on Viterbi decoding. The features mentioned in this paragraph in combination with any of the eighteenth example to the thirty-third example provide a thirty-fourth example.

BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings are included to provide a further understanding of embodiments and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments and together with the description serve to explain principles of embodiments. Other embodiments and many of the intended advantages of embodiments will be readily appreciated as they become better understood by reference to the following detailed description.

Fig. 1 is a high-level block diagram illustrating the architecture of a digital communications system 100.
Fig. 2 schematically illustrates an exemplary decoding method 200 according to the disclosure.
Fig. 3 is an exemplary section of a trellis diagram illustrating an exemplary state propagation process according to the disclosure.
Fig. 4 is an example of an invalid field combination in an exemplary side link information field according to the disclosure.
Fig. 5 is an example of processing a side link information field by exploiting context redundancy according to the disclosure.
Fig. 6 schematically illustrates an exemplary decoding method 600 applying a hypothesis-based approach according to the disclosure.
Fig. 7 schematically illustrates an exemplary decoding device 700 according to the disclosure.
Fig. 8 is a performance diagram illustrating the decoding sensitivity of a decoding method according to the disclosure.

DETAILED DESCRIPTION

The following terms, abbreviations and notations will be used herein:

3GPP: 3rd Generation Partnership Project
LTE: Long Term Evolution
BS: base station, eNodeB
RF: radio frequency
UE: user equipment
UL: uplink
DL: downlink
TDD-ULDL: time division duplex uplink downlink configuration
OFDM: orthogonal frequency division multiplex
MIMO: multiple input multiple output
TDD: time division duplex
D2D: device to device
PSBCH: physical side link broadcast channel
MIB: master information block
MIB-SL: side link master information block
SLSS: side link synchronization sub-frame
SINR: signal to interference and noise ratio
CRC: cyclic redundancy check

The methods and devices described herein may be implemented in wireless communication networks, in particular communication networks based on mobile communication standards such as LTE, in particular LTE-A and/or OFDM. The methods and devices described below may be implemented in mobile devices (or mobile stations or User Equipments (UE)), in particular in radio receivers of such mobile devices. The described devices may include integrated circuits and/or passives and may be manufactured according to various technologies. For example, the circuits may be designed as logic integrated circuits, analog integrated circuits, mixed signal integrated circuits, optical circuits, memory circuits and/or integrated passives.

The methods and devices described herein may be configured to transmit and/or receive radio signals. Radio signals may be or may include radio frequency signals radiated by a radio transmitting device (or radio transmitter or sender) with a radio frequency lying in a range of about 3 Hz to 300 GHz.
The frequency range may correspond to frequencies of alternating current electrical signals used to produce and detect radio waves.

The methods and devices described hereinafter may be designed in accordance with mobile communication standards such as e.g. the Long Term Evolution (LTE) standard or the advanced version LTE-A thereof. LTE (Long Term Evolution), marketed as 4G and 5G LTE, is a standard for wireless communication of high-speed data for mobile phones and data terminals.

The methods and devices described hereinafter may be applied in OFDM systems. OFDM is a scheme for encoding digital data on multiple carrier frequencies. A large number of closely spaced orthogonal sub-carrier signals may be used to carry data. Due to the orthogonality of the sub-carriers, crosstalk between sub-carriers may be suppressed.

The methods and devices described hereinafter may be applied in LTE TDD mode systems, e.g. LTE systems having a type 2 LTE frame structure. The type 2 LTE frame has an overall length of 10 milliseconds. The 10 ms frame comprises two half frames, each 5 ms long. The LTE half-frames are further split into five subframes, each 1 millisecond long.

The methods and devices described herein may be applied for LTE Device to Device (D2D) communication. LTE D2D communication, also named LTE side link, has been introduced since 3GPP release 12 [3GPP TS 36.211 chapter 9]. D2D works in in-coverage and (partial) out-of-coverage scenarios. For the (partial) out-of-coverage scenario the D2D system information is transmitted through the side link broadcast channel (PSBCH) from a D2D transmitter. The PSBCH carries the side link MIB (MIB-SL) information (standardized according to 3GPP TS 36.331 section 6.5.2) and is embedded within the side link synchronization sub-frame (SLSS). It is coded by a convolutional code and the information bits of the PSBCH are the following:

1. System bandwidth of the D2D transmitted signal (N6, N15, N25, N50, N75 and N100).
2. Frame (0-2^10-1) and sub-frame (0-9) numbers of the SLSS sub-frame.
3. In case of TDD mode, the TDD UL/DL configuration (for out-of-coverage UEs the UL/DL configuration cannot be received via eNodeB SIB, but only via SLSS from another D2D transmitter).
4. A Boolean flag indicating whether the UE is within or outside of eNodeB coverage.

For the (partial) out-of-coverage scenario, a successful decoding of the PSBCH is a pre-condition of setting up the communications between two devices.

The methods and devices described herein may be applied in the field of channel coding. Usually, for channel coding, it is assumed that the information bits are uncorrelated before being encoded. Then the redundancy is only added by the coding technique itself. In this disclosure the redundancy within the context of the information bits is further exploited, which is additional redundancy on top of the channel codes. By doing this the decoding sensitivity can be further improved.

Methods and devices according to the disclosure are designed based on the concept that certain unrealistic and/or invalid trellis states in a Viterbi decoder can be forbidden or removed based on the self-context redundancies or cross-context redundancies embedded inside the information bits context, for example the D2D PSBCH.
Removing those unrealistic states completely blocks the error propagation along the unrealistic paths, and therefore improves the efficiency of the valid maximum-likelihood path search in a Viterbi decoder, especially in low SINR conditions.

For the scenario of the PSBCH in D2D, two types of context redundancies within the D2D PSBCH can be used, which are the self-context redundancy and the cross-context redundancy. With respect to self-context redundancy: the SLSS sub-frame index field occupies 4 bits but the sub-frame index only ranges from 0 to 9. So values 10 to 15 are invalid and the corresponding trellis states can be forbidden in the PSBCH Viterbi decoder. In parallel, the system bandwidth field of the PSBCH occupies 3 bits, but there are only 6 valid system bandwidth alternatives (N6 to N100). This means there are 2^3 - 6 = 2 invalid values, and therefore the trellis states which correspond to the invalid fields in the PSBCH decoder can be removed or set to be forbidden. With respect to cross-context redundancy: the SLSS sub-frame index field and the TDD UL/DL configuration field are correlated: in case of TDD, the SLSS sub-frame index must be an UL sub-frame (DL sub-frames are only for eNodeB reception). This restricts the possibilities of the UL/DL configuration field, and vice versa. Such correlation is used to jointly optimize the trellis structure in the Viterbi decoder (offline or during run-time) as described hereinafter.

The motivation is to improve the decoding sensitivity of the Viterbi decoder for the D2D PSBCH, so that, by making use of the same amount of SLSS sub-frames, a UE device can successfully decode the PSBCH at a lower signal to interference and noise ratio (SINR) level.

Fig. 2 schematically illustrates an exemplary decoding method 200 according to the disclosure. The decoding method 200 includes providing 201 a sequence of information bits comprising context redundancy information, wherein the sequence of information bits is encoded based on a predefined channel code. The decoding method 200 includes generating 202 a plurality of trellis states based on the sequence of information bits and the channel code. The decoding method 200 includes reducing 203 the plurality of trellis states by at least one trellis state based on the context redundancy information. The decoding method 200 includes decoding 204 the sequence of information bits by using a metric based on the reduced number of trellis states.

The decoding method 200 may be used in a Viterbi decoder 121 of a receiver 120 as described above with respect to Fig. 1, where the sequence of information bits corresponds to the information bits 112 which are encoded by the convolutional encoder 111 in the transmitter 110 with a predefined channel code.

The context redundancy information may be provided at predefined positions of the sequence of information bits, e.g. at bit3-bit5 and/or bit16-bit19 as described below with respect to Fig. 4. The context redundancy information may be provided as a bit field comprising at least one invalid bit combination, e.g. as the TDD-ULDL-config bit field 402 and/or the SLSS sub-frame number bit field 404 as described below with respect to Fig. 4. The sequence of information bits may be correlated by the context redundancy information before being encoded by the channel code.

The decoding method 200 may further include removing trellis states which correspond to invalid bit allocations in the sequence of information bits. Invalid bit allocations are bit allocations that are not defined or that cannot occur.
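The self-context redundancy exploited here can be made concrete with a small sketch that enumerates the invalid values of a bit field. This is an illustration only; the helper name and the assumption that the 6 valid bandwidth alternatives map to the field values 0 to 5 are choices of this sketch, not of the disclosure.

```python
def invalid_field_values(field_width_bits, valid_values):
    """Return the bit-field values that can never occur, i.e. the values whose
    trellis states may be removed based on self-context redundancy."""
    return sorted(set(range(2 ** field_width_bits)) - set(valid_values))

# SLSS sub-frame index: a 4-bit field, but only indices 0..9 are valid
print(invalid_field_values(4, range(10)))  # -> [10, 11, 12, 13, 14, 15]

# System bandwidth: a 3-bit field with only 6 valid alternatives (N6..N100),
# here assumed to be encoded as the values 0..5
print(invalid_field_values(3, range(6)))   # -> [6, 7]
```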
The decoding method 200 may further include removing trellis states which correspond to invalid field combinations in the sequence of information bits. Invalid (bit) field combinations may be undefined bit field combinations or bit field combinations that cannot occur.

The decoding method 200 may further include removing trellis states which can only be reached by invalid paths. The decoding method 200 may further include providing the reduced number of trellis states during an offline processing, e.g. as described below. The decoding method 200 may further include providing the reduced number of trellis states during an online processing, e.g. as described below.

The decoding method 200 may further include providing the reduced number of trellis states based on trace back for decoding the context redundancy information, e.g. as described below with respect to Fig. 3. The decoding method 200 may further include using the decoded context redundancy information to restrict the plurality of trellis states, e.g. as described below with respect to Fig. 3. The decoding method 200 may further include providing the reduced number of trellis states based on evaluating probabilities for different possible context redundancy information, e.g. as described below with respect to Fig. 3. The decoding method 200 may further include providing the reduced number of trellis states based on evaluating hypotheses of different possible context redundancy information, e.g. as described below. The decoding method 200 may include evaluating the hypotheses based on a cyclic redundancy check, e.g. as described below with respect to Fig. 6.

The context redundancy information may include self-context redundancy information and/or cross-context redundancy information, e.g. as described below and with respect to Fig. 4. The self-context redundancy information may include self-context redundancy of a bit field indicating a side link bandwidth within a side link master information block (SL-MIB) for device-to-device (D2D) communication, e.g. as described below. The cross-context redundancy information may include cross-context redundancy between a bit field indicating a time division duplex uplink downlink (TDD-ULDL) configuration and a bit field indicating a sub-frame number (SLSS), e.g. as described below. The decoding method 200 may further include decoding the sequence of information bits based on Viterbi decoding, e.g. by a Viterbi decoder 121 as described above with respect to Fig. 1.

In the following sections specific implementations of the method 200 are described. The above described method 200 may be specifically realized by the following two methods, denoted as "method 1" and "method 2" hereinafter, each one including respective modifications, denoted as "sub-method 1 of method 1", "sub-method 2 of method 1" and "modified version of method 2" hereinafter. Each one of method 1 and method 2 may be realized either standalone or combined.

In method 1 (also denoted as the offline method) offline trellis reduction is done by removing the states corresponding to invalid information bit allocations or field combinations. For implementation, the path metric of the invalid states can be forced to a very high value, and then the state behaves equivalently to being removed. This method can be divided into two sub-methods: offline trellis reduction making use of self-context redundancy (i.e. invalid bit allocations) and offline trellis reduction making use of cross-context redundancy (i.e.
invalid field combinations). This method does not need real-time demodulated soft-input bits, so it can be done offline by PC simulation, for example.

In method 2 (also denoted as the run-time method), during Viterbi decoding, an early trace back is done to decode the TDD-ULDL-config field early and use this decoded TDD-ULDL-config information to further restrict certain trellis states of the SLSS sub-frame index field. This method needs real-time received soft-input bits, so it is done in the decoding run-time. To cover all possibilities, a modified version of this method is provided in parallel, where no early trace back is done to detect the TDD-ULDL-config early; instead, different hypotheses for the possible TDD-ULDL configurations are used. Then the SLSS sub-frame number field can be restricted based on each TDD-ULDL-config hypothesis in the same way. Among a maximum of 8 hypotheses, for example, the decoded bit stream which passes the CRC check gives the final decoded information bits. The modified version gives even better sensitivity because false decodings of the early TDD-ULDL-config detection can be avoided. But it needs higher computation power because PSBCH decoding is performed more than once.

Each of the two methods described above (method 1, i.e. the offline method, and method 2, i.e. the run-time method) can work standalone without the other. They can also work in a combined two-step way as described in the following: first use the offline method to reduce the trellis before decoding is started, and then use the run-time method based on the reduced trellis for run-time decoding. This jointly improves the decoding performance.

The concept described by the methods above can be extended to other convolutionally coded channels, for example the LTE PDCCH, as long as there are context redundancies. The concept can also be extended to other coding schemes which are based on trellis search, for example Turbo codes and the LTE PDSCH, as long as there are context redundancies.

For method 1, offline trellis reduction is done by exploring the PSBCH self-context as well as cross-context redundancy. That can be done by removing unrealistic and/or invalid trellis states by offline computation. Here the optimal trellis structure can be derived without the need of demodulated soft-input bits, so it can be done before decoding starts.

In sub-method 1 of method 1, the self-context redundancy is explored by removing the trellis states corresponding to invalid bit allocations.

Figure 3 shows an example of forbidden state propagation, including backward propagation and forward propagation, as described in the following. The forbidden state propagation process can be done in an iterative way until all invalid states are cleared.
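The two implementation ideas just mentioned, forcing the path metric of an invalid state to a very high value and the iterative forbidden-state propagation of Fig. 3, can be sketched as follows; the successor and predecessor functions describe the encoder's state transitions and are assumed to be supplied by the decoder implementation:

```python
import math
from typing import Callable, Iterable, Set, Tuple

NUM_STATES = 64  # 2^6 states: the PSBCH encoder has a 6-bit state register

def forbid(path_metric: list, stage: int, state: int) -> None:
    """Forbid a state by forcing its path metric very high; for a minimising
    Viterbi search this behaves equivalently to removing the state."""
    path_metric[stage][state] = math.inf

def propagate_forbidden(
    forbidden: Set[Tuple[int, int]],
    successors: Callable[[int, int], Iterable[int]],
    predecessors: Callable[[int, int], Iterable[int]],
    num_stages: int,
) -> Set[Tuple[int, int]]:
    """Iterative forbidden-state propagation (cf. Fig. 3): a state whose
    outgoing paths all lead to forbidden states, or whose incoming paths all
    come from forbidden states, is itself forbidden. Repeat until stable."""
    changed = True
    while changed:
        changed = False
        for t in range(num_stages - 1, -1, -1):       # backward propagation
            for s in range(NUM_STATES):
                nxt = list(successors(t, s))
                if nxt and (t, s) not in forbidden and all(
                    (t + 1, n) in forbidden for n in nxt
                ):
                    forbidden.add((t, s))
                    changed = True
        for t in range(1, num_stages + 1):            # forward propagation
            for s in range(NUM_STATES):
                prv = list(predecessors(t, s))
                if prv and (t, s) not in forbidden and all(
                    (t - 1, p) in forbidden for p in prv
                ):
                    forbidden.add((t, s))
                    changed = True
    return forbidden
```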
For example, the Viterbi decoder for the PSBCH contains 64 trellis states per stage. That is because the shift register of the encoder contains 6 bits (64 = 2^6). From MSB to LSB, an information field is right-shifted into the state register in the corresponding stage of the trellis. For example, the side link bandwidth field 402 as shown in Fig. 4 occupies bit 0 - bit 2 of the side link MIB. So the states in stage t=3 are represented in the following format: I0I1I2XXX, where I2I1I0 are the information bits of the side-link bandwidth field and I2 is its MSB. XXX represents the other 3 free bits in the state register, which are unknown. Note that the information bits are flipped due to the shifting behavior.

Considering the self-context redundancy of the side link bandwidth field 402 as shown in Fig. 4, it can only range from 0 to 5; values 6 (110)2 and 7 (111)2 are invalid, which means the trellis states in stage 3 which are encoded as '111XXX' and '011XXX' can be removed because they are invalid. This results in 2*2^3 = 16 removed states. Note that for implementation, the path metric of the invalid states can be forced to a very high value, and then the state behaves equivalently to being removed.

Similarly, considering the self-context redundancy of the SLSS sub-frame index field 404 as shown in Fig. 4 (located in bit 16 - bit 19 of the side link MIB), it can only range from 0 to 9, so that values 10 to 15 are invalid. This means the trellis states in stage t=20 as illustrated in Fig. 3 which are encoded as '1111XX', '0111XX', '1011XX', '0011XX', '1101XX' and '0101XX' can be removed because they are invalid. This results in 6*2^2 = 24 removed states.

After the invalid states in the main stages (stage t=3 for the side link bandwidth field 402, stage t=20 for the SLSS sub-frame index field 404) are removed, further actions can be performed: For any of the states in earlier stages, if both of its outgoing paths lead to an invalid state, this state can also be removed. For example, in stage t=19, states coded with '011XXX' will lead to either '1011XX' or '0011XX' in stage t=20, see Fig. 3. But it is known that both destination states are invalid, so states coded with '011XXX' in stage t=19 can be removed; see Figure 3 illustrating the removed states in stage t=19 (24, 25, 26, 27). In parallel, for any of the states in later stages, if both of its incoming paths come from invalid states, this state can also be removed. For example, in stage t=20, the invalid states coded with '0011XX' can only lead to destination states 'X0011X' in stage t=21. The latter states can therefore also be removed in stage t=21. See Fig. 3, where in stage t=20 the states 12, 13, 14, 15 and 44, 45 can be removed and where in stage t=21 the states 6, 7 and 38, 39 can be removed.

In sub-method 2 of method 1, the cross-context redundancy between the TDD-ULDL-config field 402 (see Fig. 4) and the SLSS sub-frame field 404 (see Fig. 4) is explored by checking invalid field combinations to further reduce the trellis. An approach is to set up, offline, a table with all invalid field combinations. For example, TDD-ULDL-config two together with an SLSS sub-frame number of 3 generates an invalid field combination: in TDD-ULDL-config two, sub-frame index 3 is a DL sub-frame, but a D2D SLSS sub-frame in TDD mode can only be an UL sub-frame. Such an example of an invalid field combination is further shown in Figure 4.

Note that an invalid field combination results in not only 1 path but 2^(40-7), i.e. 2 to the power of 33, paths through the complete trellis of the PSBCH Viterbi decoder. The total number of invalid paths is M*2^(40-7), where M is the total number of invalid field combinations between the TDD-ULDL-config 402 and the SLSS sub-frame number 404. After obtaining all invalid paths, they are marked in the Viterbi decoder trellis. Then, states which can only be reached by invalid paths are declared to be invalid states and can be removed from the trellis.
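The offline table of invalid field combinations can be built directly from the UL sub-frame sets of the standard LTE TDD UL/DL configurations 0-6; a minimal sketch follows, with the sets written out as an assumption based on 3GPP TS 36.211 rather than taken from this disclosure:

```python
# UL sub-frames of LTE TDD UL/DL configurations 0-6 (3GPP TS 36.211). In TDD
# mode an SLSS sub-frame must be an UL sub-frame, so every (config, sf) pair
# with sf outside the UL set is an invalid field combination.
UL_SUBFRAMES = {
    0: {2, 3, 4, 7, 8, 9},
    1: {2, 3, 7, 8},
    2: {2, 7},
    3: {2, 3, 4},
    4: {2, 3},
    5: {2},
    6: {2, 3, 4, 7, 8},
}

INVALID_COMBINATIONS = {
    (config, sf)
    for config, ul in UL_SUBFRAMES.items()
    for sf in range(10)
    if sf not in ul
}

# Example from the text: config 2 with sub-frame 3 is invalid, because
# sub-frame 3 is a DL sub-frame under config 2.
assert (2, 3) in INVALID_COMBINATIONS
```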
This process can be performed offline by PC simulation, for example, and so it does not introduce any complexity to a real decoder implementation.

For method 2, the PSBCH cross-context redundancy is explored to disable invalid states in the decoding run-time. Here demodulated soft-input bits may be required. More specifically, during the Viterbi decoding process, before reaching the stage of the SLSS sub-frame number field, an early trace back can be done so that the TDD-ULDL-config field 402 is decoded beforehand. Then, based on the decoded TDD-ULDL-config field 402, the SLSS sub-frame number field can be further restricted, and therefore more of the corresponding invalid states can be disabled. For example, when the decoding result of the TDD-ULDL-config 402 is 2, then it is known that the valid values for the SLSS sub-frame number can only be 2 or 7. This further restricts the number of possible SLSS sub-frame indices from 10 down to 2.

The way of removing invalid states is the same as in sub-method 1 of method 1, but now more trellis states in stage 20 can be removed. This results in 8*2^2 = 32 additional removed states in stage 20. The more invalid states are removed, the better decoding sensitivity can be achieved. For implementation, the path metric of the invalid states can be forced to a very high value, and then the state behaves equivalently to being removed.

The procedure of method 2 is shown in Figure 5: The first step 501 includes trellis path metric computation and path selection until the first bit of the SLSS sub-frame number field 404. The second step 502 includes an early trellis trace back to detect the TDD-ULDL config 402. The third step 503 includes forbidding further invalid states for invalid SLSS sub-frame numbers 404 based on the detected TDD-ULDL-config 402. The fourth step 504 includes continuing the trellis path metric computation and path selection until the final information bit. The fifth step 505 includes a final trace back to decode the full stream.

An early trace back as described herein means a trace back at an early stage in the trellis in order to decode the context redundancy information early and hence to restrict or remove unused trellis states as early as possible.
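The five steps of Fig. 5 can be outlined in code as follows. This is a skeleton only: all decoder primitives are injected as callables, because the disclosure does not fix a concrete add-compare-select or trace-back implementation, and the stage numbers are the examples used above:

```python
def method2_runtime_decode(
    soft_bits,
    trellis,
    run_acs,                 # add-compare-select + path selection up to a stage
    early_decode_config,     # step 502: early trace back -> TDD-ULDL-config value
    forbid_subframe_states,  # disable the stage-20 states encoding a sub-frame
    final_trace_back,        # step 505: decode the full bit stream
    invalid_combinations,    # the offline table sketched above
    first_slss_stage=20,
    last_stage=40,
):
    """Skeleton of the five steps of method 2 (Fig. 5); helpers injected."""
    run_acs(trellis, soft_bits, stop_stage=first_slss_stage)   # step 501
    tdd_config = early_decode_config(trellis)                  # step 502
    for sf in range(10):                                       # step 503
        if (tdd_config, sf) in invalid_combinations:
            forbid_subframe_states(trellis, sf)
    run_acs(trellis, soft_bits, stop_stage=last_stage)         # step 504
    return final_trace_back(trellis)                           # step 505
```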
In the modified version of method 2, e.g. as shown in Fig. 6, the early TDD-ULDL-config decoding step can be replaced by a hypothesis-based approach while leaving the remaining parts the same. In the modified version of method 2, instead of doing an early trace back to decode the TDD-ULDL-config early, different hypotheses for the possible TDD-ULDL configurations can be formed. For example, a first hypothesis is applied to a first possible TDD-ULDL configuration 601, a second hypothesis is applied to a second possible TDD-ULDL configuration 602, a third hypothesis is applied to a third possible TDD-ULDL configuration 603, etc. Then, for each hypothesis, PSBCH decoding can be done separately as shown in Fig. 6.

For each hypothesis, during the Viterbi decoding process, when reaching the stage of the TDD-ULDL-config field 402, any trellis state that violates the assumed TDD-ULDL-config 402 value can be disabled. Then, when reaching the SLSS sub-frame number field 404, based on each TDD-ULDL-config 402 hypothesis, the invalid trellis states can be further removed based on the conflicting SLSS sub-frame number field values.

Again, the way of forbidding invalid states corresponding to the SLSS sub-frame field is the same as in method 1, but more states can be forbidden or removed compared with method 1. In this modified version of method 2, there may be a maximum of 8 hypotheses, for example. The decoded bit stream that passes the CRC check is selected in the best hypothesis selection 604 and thus yields the final decoded information bits. The modified version of method 2 gives even better sensitivity because false detections from the early TDD-ULDL-config 402 decoding can be avoided. However, it may need higher computation power because PSBCH decoding may be performed more than once.
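A minimal sketch of this hypothesis-based variant follows, assuming a helper decode_with_config that runs PSBCH Viterbi decoding with the states conflicting with a given TDD-ULDL-config hypothesis forbidden, and a crc_ok check; both are placeholders, not APIs from the disclosure:

```python
def method2_hypothesis_decode(soft_bits, decode_with_config, crc_ok):
    """Sketch of the modified method 2 (Fig. 6): decode once per
    TDD-ULDL-config hypothesis and keep the bit stream that passes the CRC
    check (best hypothesis selection 604)."""
    for config in range(8):                 # up to 8 hypotheses, for example
        bits = decode_with_config(soft_bits, config)
        if crc_ok(bits):
            return bits                     # final decoded information bits
    return None                             # no hypothesis passed the CRC
```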
This concept and method can be implemented in a control channel decoder, for example in the outer control channel receiver (OCRX) within the LTE PHY. It provides better PSBCH decoding performance for D2D communications and therefore better link quality.

Fig. 7 schematically illustrates an exemplary decoding device 700 according to the disclosure. The decoding device 700 may implement any one of the methods 200 or modified versions of the method 200 as described above with respect to Figures 2 to 6.

The decoding device 700 includes a receiver 701, a trellis generation logic 703, a trellis reduction logic 705 and a decoder 707. These units may be implemented as circuits in hardware or as blocks or modules in software. The receiver 701 provides a sequence of information bits 702 including context redundancy information, e.g. as described above with respect to Figures 2 to 6, wherein the sequence of information bits 702 is encoded based on a predefined channel code, e.g. as described above with respect to Fig. 1. The trellis generation logic 703 generates a plurality of trellis states 704 based on the sequence of information bits 702 and the channel code. The trellis reduction logic 705 reduces the plurality of trellis states by at least one trellis state based on the context redundancy information, e.g. as described above with respect to Figures 2 to 6. The decoder 707 decodes the sequence of information bits 702 by using a metric based on the reduced number of trellis states 706. The metric may be a distance, e.g. a Hamming distance, or any metric that is used by a Viterbi decoder for performing the Viterbi decoding.

The context redundancy information may be provided at predefined positions of the sequence of information bits, e.g. as described above with respect to Figures 2 to 6. The context redundancy information may be provided as a bit field comprising at least one invalid bit combination, e.g. as described above with respect to Figures 2 to 6. The sequence of information bits 702 may be correlated by the context redundancy information before being encoded based on the channel code. The trellis reduction logic 705 may be configured to remove trellis states which correspond to invalid bit allocations in the sequence of information bits, e.g. as described above with respect to Figures 2 to 6.

The trellis reduction logic 705 may remove trellis states which correspond to invalid field combinations in the sequence of information bits, e.g. as described above with respect to Figures 2 to 6. The trellis reduction logic 705 may remove trellis states which can only be reached by invalid paths, e.g. as described above with respect to Figures 2 to 6. The trellis reduction logic 705 may provide the reduced number of trellis states during an offline operation of the decoding device, e.g. as described above with respect to Figures 2 to 6. The trellis reduction logic 705 may provide the reduced number of trellis states during an online operation of the decoding device. The trellis reduction logic 705 may provide the reduced number of trellis states based on trace back for decoding the context redundancy information, e.g. as described above with respect to Figures 2 to 6. The trellis reduction logic 705 may use the decoded context redundancy information to restrict the plurality of trellis states, e.g. as described above with respect to Figures 2 to 6. The trellis reduction logic 705 may provide the reduced number of trellis states based on evaluating probabilities for different possible context redundancy information.

The trellis reduction logic 705 may provide the reduced number of trellis states 706 based on evaluating hypotheses of different possible context redundancy information, e.g. as described above with respect to Figure 6. The trellis reduction logic 705 may evaluate the hypotheses based on a cyclic redundancy check, e.g. as described above with respect to Figure 6.

The context redundancy information may include self-context redundancy information and/or cross-context redundancy information, e.g. as described above with respect to Figures 2 to 6. The self-context redundancy information may include self-context redundancy of a bit field indicating a side link bandwidth within a side link master information block (SL-MIB) for device-to-device (D2D) communication, e.g. as described above with respect to Figure 4. The cross-context redundancy information may include cross-context redundancy between a bit field indicating a time division duplex uplink downlink (TDD-ULDL) configuration and a bit field indicating a sub-frame number (SLSS) within the side link master information block (SL-MIB) for device-to-device (D2D) communication, e.g. as described above with respect to Figure 4. The decoder 707 may decode the sequence of information bits 702 based on Viterbi decoding.

Fig. 8 is a performance diagram illustrating the decoding sensitivity, in terms of successful decoding rate over SINR in dB, of a decoding method according to the disclosure. A first graph 801 shows the performance of PSBCH decoding without using context redundancy. A second graph 802 shows the performance of enhanced PSBCH decoding exploring context redundancy according to method 2 alone. A third graph 803 shows the performance of enhanced PSBCH decoding exploring context redundancy according to a combination of method 1 and method 2.

Compared with PSBCH decoding without exploiting context redundancy 801, the disclosed concept of exploiting context redundancy in the decoding improves the decoding sensitivity, in particular under low SINR conditions. Meanwhile, the offline trellis reduction also reduces the computation power and computation complexity of a Viterbi decoder. From Fig. 8 it can be seen that using method 2 alone 802 already yields about 0.4 dB of gain improvement. It also shows that the combination 803 of method 1 and method 2 gives a further 0.25 dB gain improvement.
Methods, apparatus, systems, and articles of manufacture to reduce thermal fluctuations in semiconductor processors are disclosed. An apparatus includes a temperature analyzer to determine a current temperature of a processor. The apparatus further includes a controller to provide an idle workload to the processor to execute in response to the current temperature falling below a setback temperature. |
1. An apparatus comprising:
a means for sensing a current temperature of a processor; and
means for controlling an idle workload procedure, the controlling means to provide an idle workload to the processor to execute in response to the current temperature falling below a setback temperature.
2. The apparatus of claim 1, wherein the controlling means is to provide the idle workload to the processor when the idle workload procedure is armed and to not provide the idle workload to the processor when the idle workload procedure is disarmed.
3. The apparatus of claim 2, wherein the controlling means is to arm the idle workload procedure in response to the current temperature exceeding a threshold temperature.
4. The apparatus of claim 3, wherein the threshold temperature is defined as a target temperature delta above a disarmed minimum temperature, the disarmed minimum temperature corresponding to a lowest value observed for the current temperature of the processor since the idle workload procedure was last disarmed.
5. The apparatus of one of the claims 2-4, wherein the controlling means is to disarm the idle workload procedure in response to a timeout period elapsing since the idle workload procedure was last armed.
6. The apparatus of one of the claims 2-5, wherein the controlling means is to disarm the idle workload procedure in response to an idle period of the processor exceeding a threshold time period.
7. The apparatus of one of the claims 2-6, wherein the controlling means is to disarm the idle workload procedure in response to a difference between the current temperature and an armed maximum temperature exceeding a threshold, the armed maximum temperature corresponding to a highest value observed for the current temperature of the processor since the idle workload procedure was last armed.
8. The apparatus of any one of claims 1-7, wherein the setback temperature is a fixed temperature delta higher than a target temperature.
9. The apparatus of claim 8, wherein the target temperature is defined by a dynamic temperature value, the dynamic temperature value corresponding to a target temperature delta below an armed maximum temperature, the armed maximum temperature corresponding to a highest value observed for the current temperature of the processor during a relevant period of time.
10. A method comprising:
measuring a current temperature of a processor; and
causing the processor to execute an idle workload to generate heat in response to the current temperature falling below a setback temperature.
11. The method of claim 10, further including providing the idle workload to the processor when the processor is in an idle state, wherein the idle workload is not provided to the processor when the processor is in an active state.
12. The method of claim 11, further including determining whether the processor is in the idle state or the active state based on whether a standard workload is scheduled for execution by the processor.
13. The method of any one of claims 10-12, wherein the idle workload is provided to the processor when an idle workload procedure is armed, and the idle workload is not provided to the processor when the idle workload procedure is disarmed.
14. The method of claim 13, further including arming the idle workload procedure in response to the current temperature exceeding a threshold temperature.
15. A computer readable storage device comprising instructions that, when executed, cause a machine to implement the method of any one of claims 10-14.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH AND DEVELOPMENT

This invention was made with Government support under Agreement Number 8F-30005, awarded by the Department of Energy. The Government has certain rights in this invention.

FIELD OF THE DISCLOSURE

This disclosure relates generally to semiconductor devices, and, more particularly, to methods and apparatus to reduce thermal fluctuations in semiconductor processors.

BACKGROUND

Processors and other semiconductor devices generate heat when they are performing computations and/or other operations. Furthermore, more intensive computational workloads typically correspond to greater increases in heat. Thus, higher performance computing devices typically experience greater thermal stresses, which can deleteriously impact the reliability and/or useful life of such devices.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates an example temperature profile of a processor while executing a standard workload including five computational kernels.
FIG. 2 illustrates an example temperature profile of the processor of FIG. 1 when executing the standard workload of FIG. 1 in accordance with teachings disclosed herein.
FIG. 3 illustrates another example temperature profile of the processor of FIG. 1 when executing the standard workload of FIG. 1 in accordance with teachings disclosed herein.
FIG. 4 illustrates an example computing system constructed in accordance with teachings disclosed herein.
FIGS. 5 and 6 are flowcharts representative of example machine readable instructions that may be executed to implement the example thermal fluctuation controller of FIG. 4.
FIG. 7 is a block diagram of an example processing platform structured to execute the instructions of FIGS. 5 and 6 to implement the example thermal fluctuation controller of FIG. 4.

The figures are not necessarily to scale. In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts. As used herein, connection references (e.g., attached, coupled, connected, and joined) may include intermediate members between the elements referenced by the connection reference and/or relative movement between those elements unless otherwise indicated. As such, connection references do not necessarily imply that two elements are directly connected and/or in fixed relation to each other. As used herein, stating that any part is in "contact" with another part is defined to mean that there is no intermediate part between the two parts.

Unless specifically stated otherwise, descriptors such as "first," "second," "third," etc. are used herein without imputing or otherwise indicating any meaning of priority, physical order, arrangement in a list, and/or ordering in any way, but are merely used as labels and/or arbitrary names to distinguish elements for ease of understanding the disclosed examples. In some examples, the descriptor "first" may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor such as "second" or "third." In such instances, it should be understood that such descriptors are used merely for identifying those elements distinctly that might, for example, otherwise share a same name. As used herein, "substantially real time" refers to occurrence in a near instantaneous manner recognizing there may be real world delays for computing time, transmission, etc.
Thus, unless otherwise specified, "substantially real time" refers to real time +/- 1 second.

DETAILED DESCRIPTION

Many semiconductor-based processors (e.g., central processing units (CPUs), graphics processing units (GPUs), accelerators, etc.) are housed in a ball grid array (BGA) package. In addition to solder joints within the packages (die-die, die-to-package substrate), BGA packages include an array of metallic balls that may be individually connected to a printed circuit board (PCB) via a corresponding array of solder joints. The reliability of such joints over time (e.g., the useful life of such joints) and of other integrated circuit package and/or packaging materials can be negatively impacted when exposed to thermal stresses such as relatively frequent fluctuations in temperature over relatively large temperature ranges. While such thermal fluctuations may occur in any type of processor (whether associated with a BGA package or otherwise), they can have especially significant impacts on processor packages intended for high performance computing applications (e.g., supercomputers and/or data centers). By reducing the thermal fluctuations of an integrated circuit, one can reduce the reliability-limiting stress on that integrated circuit and/or its associated package, thereby increasing the reliability lifetime of the processor package.

High performance computing exacerbates the problems associated with thermal-fluctuation-induced stress because high performance processors are typically implemented in larger packages that operate at higher powers to perform more computationally intensive tasks. Such conditions result in greater maximum temperatures (as such packages can produce more heat) and, thus, larger fluctuations in temperature between when a high performance processor is being used and when it is idle. Furthermore, high performance computing applications often involve relatively frequent changes between active periods, when a processor is being run at or near its full capacity (thereby producing large amounts of heat), and idle periods, when the processor is not being used (during which the processor may cool off to near ambient temperatures).

As used herein, an active period of a processor is when the processor is in an active state. As used herein, an active state is when the processor is either executing a standard workload (SWL) or is scheduled to execute an SWL. As used herein, an SWL is any workload associated with particular tasks to be completed by the processor as part of its normal or standard operation. Many SWLs are user workloads that are initiated, provided, and/or defined by a user of the associated computing device. However, other SWLs may be implemented automatically to accomplish particular tasks (e.g., maintenance tasks) without specific input from a user. As used herein, an idle period of a processor is when the processor is in an idle state. As used herein, an idle state is when the processor is not executing any SWLs and is not scheduled to execute any SWLs.

Temperature fluctuations in a GPU implementing typical high performance computations for a standard workload (SWL) including five computational kernels (K1 through K5) are shown in FIG. 1. While FIG. 1 is described with reference to a GPU, similar temperature profiles may occur in any other type of processor, and teachings disclosed herein may be suitably adapted to such processors.
In the illustrated example, the shaded bands 102, 104, 106, 108, 110 represent active periods of the GPU corresponding to the five computational kernels being executed. The periods of time outside and between the active periods 102, 104, 106, 108, 110 correspond to idle periods 112, 114, 116, 118, 120, 122 when the GPU is in an idle state (i.e., not executing instructions or scheduled to execute instructions). As shown in the illustrated example, the temperature of the GPU during the active periods 102, 104, 106, 108, 110 is represented by a solid line and the temperature of the GPU during the idle periods 112, 114, 116, 118, 120, 122 is represented by a line with alternating dots and dashes.

In the illustrated example of FIG. 1, during the first idle period 112 (e.g., prior to beginning the first active period 102), the temperature of the GPU is at a true idle temperature 124 (Tidle) of the GPU, which is typically slightly above an ambient temperature 126 (Tambient). The particular temperature values for the idle temperature 124 and the ambient temperature 126 can differ from system to system and with the associated environment and/or application for which the system is used. By way of example, the idle temperature 124 and the ambient temperature 126 of many systems are below 50°C. During the first active period 102 (while the first kernel K1 is being executed), the temperature rises until it reaches a maximum temperature 128 (Tmax) and then hovers around that temperature until the first active period 102 is completed. As used herein, the maximum temperature 128 of a processor corresponds to the temperature reached by the processor when operating at the thermal design power (TDP) of the processor. The particular temperature value of the maximum temperature 128 can differ widely from system to system and with the associated environment and application for which the system is used. By way of example, it is possible for the maximum temperature 128 of some systems to reach as high as 100°C or higher. A processor is typically designed to reach its TDP when the processor is being driven to its full capability over a period of time. Many high performance computing applications drive processors to their full capabilities such that the processors will be operating at or near their TDP when in an active state. As a result, it is not uncommon for the maximum temperature 128 to be reached during the execution of SWLs as shown in FIG. 1.

Following the first active period 102, there is a brief idle period 114 during which the GPU is not performing any operations and, therefore, the temperature drops as heat is dissipated from the GPU. However, the duration of this second idle period 114 is not long enough for the GPU to cool very far before the second active period 104 is initiated. Although the second active period 104 is much shorter than the first active period 102, the temperature of the GPU again reaches the maximum temperature 128 because the GPU was already relatively warm when the second active period 104 began. By contrast, the third idle period 116 is much longer than the second idle period 114. As a result, during the third idle period 116 the temperature of the GPU drops nearly back down to the idle temperature 124 before rising again during the third active period 106. The temperature then falls during a fourth idle period 118 before rising again during the fourth active period 108.
Thereafter, the temperature drops again and stabilizes at the idle temperature 124 during the fifth idle period 120 before again being driven to the maximum temperature 128 during the fifth active period 110. Following completion of the fifth active period 110, the GPU returns to an idle state (e.g., in a sixth idle period 122) where the temperature of the GPU cools back down to the idle temperature 124.

As shown in the illustrated example of FIG. 1, there are relatively small fluctuations in temperature in the second, third, and fourth active periods 104, 106, 108 as well as in the second, fourth, and fifth idle periods 114, 118, 120. While these fluctuations may cause some stress on the GPU, testing has shown that it is the relatively large temperature fluctuations that are most problematic to the reliability of a GPU over time. That is, the large temperature increases during the first and fifth active periods 102, 110 coupled with the large temperature drops during the third and sixth idle periods 116, 122 in FIG. 1 introduce significant thermal stresses on the solder joints and/or other interfaces of the GPU that can cause degradation and/or failure to occur much more quickly than if the GPU was only subjected to the smaller temperature fluctuations noted above. The particular temperature changes that constitute relatively large temperature fluctuations as used herein may differ widely from system to system and/or with the associated application for which the system is used. In some examples, the size of acceptable temperature fluctuations (e.g., fluctuations that do not constitute relatively large, undesirable fluctuations) may depend on the level of reliability desired for the system. By way of example, in some examples, a temperature swing of at least 15°C may be considered to be a relatively large temperature fluctuation the occurrence of which is to be reduced (e.g., minimized and/or avoided). In other examples, the threshold temperature delta that constitutes a relatively large temperature fluctuation may be higher (e.g., 20°C, 25°C, 30°C, 35°C, 40°C, etc.).

Examples disclosed herein reduce the frequency of large thermal fluctuations by reducing the amount by which the temperature of a processor cools during idle periods between adjacent active periods. More particularly, in examples disclosed herein the frequency of large thermal fluctuations is reduced by opportunistically causing the processor to execute workloads during the idle periods as needed to cause the processor to produce heat sufficient to maintain the temperature within a threshold range of the peak temperature reached during the active periods. In the illustrated example of FIG. 1, the peak temperature corresponds to the maximum temperature 128. However, in other examples, the peak temperature may be less than the maximum temperature 128. The workload executed during the idle periods is referred to herein as an idle workload (IWL) to distinguish it from the SWLs executed during the active periods. In some examples, the IWL is controlled to only execute during the idle periods of the processor. As a result, the IWL has no impact on the performance of the GPU when the SWL is being executed because the SWL is only executed during the active periods and not the idle periods.

While the IWL does not affect the performance of the GPU when executing the SWL (because they are executed at different times), execution of the IWL does require additional power.
However, this increase in power consumption is a trade-off made to achieve better reliability of the GPU over time (e.g., to increase the useful life of the GPU). In some examples, the extent of excess power used to execute the IWL is reduced by setting a timeout period after which the IWL will not execute even if the temperature of the GPU will consequently drop below the threshold temperature above which the GPU was being maintained before the timeout period. That is, in some examples, rather than always maintaining the temperature of a processor within a threshold range of the peak temperature, execution of the IWL may time out, thereby allowing the GPU to fully cool down during an idle period. This can save power in situations where an idle period extends for a relatively long duration of time. That is, there may be a long duration of time when there is no SWL to be executed, such that there is no need to maintain the GPU at an elevated temperature, and doing so unnecessarily consumes power. Limiting the implementation of the IWL to a timeout period ensures that the IWL does not unnecessarily consume power indefinitely when there may be extended periods of no standard (e.g., user) activity (e.g., no SWL to be executed).

FIG. 2 illustrates the temperature of the same GPU of FIG. 1 executing the same five-kernel SWL represented in FIG. 1, except with an IWL executed during at least some portion(s) of some of the idle periods 112, 114, 116, 118, 120, 122 to maintain the temperature of the GPU above a fixed target temperature 202 (Ttarget). While FIG. 2 is described with reference to a GPU, teachings disclosed herein may be suitably adapted to any type of processor. In this example, the target temperature 202 is a configurable parameter that is set with a fixed temperature value defined based on known properties of the underlying processor (e.g., the GPU) in conjunction with expected uses of the processor. More particularly, in some examples, the target temperature 202 is defined to be within a particular range of an expected peak temperature for the processor (e.g., the maximum temperature 128 in FIG. 2). In some examples, the particular range is defined to be less than the temperature differential of large thermal fluctuations that are to be avoided to improve the reliability of the processor. How large the thermal fluctuations need to be to constitute large thermal fluctuations may depend on the level of reliability and/or useful life desired for the processor and on physical characteristics of the processor, its packaging, the location where it is mounted (e.g., on a PCB), and/or the characteristics of the mounting mechanism employed (e.g., flip chip, conventional, solder type, etc.).

In the illustrated example of FIG. 2, the target temperature 202 defines the temperature of the GPU at which an IWL procedure is armed or initiated. Thus, as shown in FIG. 2, the point 204 where the temperature of the GPU reaches the target temperature 202 defines when the IWL procedure is armed or activated. The IWL procedure involves the monitoring of the temperature of the GPU during idle periods to identify conditions that trigger the execution of an IWL to maintain the temperature of the GPU within a threshold temperature range of the maximum temperature 128.
In other words, while the arming or enabling of the IWL procedure does not necessarily imply that a particular IWL will be executed, arming the IWL procedure at least causes the system to begin monitoring for conditions that may call for an IWL to be executed.

In some examples, the threshold temperature range within which the IWL procedure maintains the GPU corresponds to the difference between the maximum temperature 128 and the target temperature 202. In some examples, the temperature of the GPU is kept within this range (e.g., kept above the target temperature 202) by executing an IWL in response to a trigger condition corresponding to the temperature of the GPU falling below a setback temperature 206 (Tsetback). In some examples, the trigger condition is limited to the idle periods 112, 114, 116, 118, 120, 122. That is, in some examples, no IWL is executed during an active period 102, 104, 106, 108, 110 even if the temperature drops below the setback temperature 206.

In the illustrated example, the setback temperature 206 is a configurable parameter defined to be higher than the target temperature by a particular temperature difference or delta. Additionally or alternatively, in some examples, the setback temperature 206 may be defined as some temperature delta below the maximum temperature 128 (e.g., one that is less than the difference between the maximum temperature 128 and the target temperature 202). As shown in the illustrated example, during the second idle period 114, the temperature of the GPU remains above the setback temperature 206. As a result, there is no need to add heat to the GPU, so no IWL is executed in that period 114. However, during the third idle period 116, the temperature of the GPU does drop below the setback temperature. Unlike in FIG. 1, where the temperature cools off to near the idle temperature 124, in the illustrated example of FIG. 2, once the temperature drops below the setback temperature 206, an IWL is executed by the GPU to produce heat, thereby maintaining the temperature of the GPU above the target temperature 202 and near the setback temperature 206. In some examples, the IWL is executed entirely in the GPU (e.g., without writing to an off-chip memory). In other examples, the IWL may be executed by the GPU in communication with a separate processor, memory, and/or IC package. In some examples, off-chip operations may be implemented in a package that is adjacent to the GPU so that heat produced by the adjacent package contributes to increasing the temperature of the GPU. In some examples, execution of the IWL may be continuous through the end of the idle period 116 (e.g., the IWL may include a set of commands that may loop indefinitely). In other examples, execution of the IWL may be intermittent, with each instance heating up the GPU before there is a break in which the GPU cools down before another instance of the IWL is executed to again heat up the GPU. In some examples, this intermittent heating and cooling of the GPU around the setback temperature 206 incorporates some hysteresis such that the temperature of the GPU rises above the setback temperature 206 and falls below the setback temperature 206 as represented in FIG. 2. In other examples, the intermittent heating and cooling of the GPU by intermittently executing instances of an IWL may maintain the temperature at or below the setback temperature 206 (and above the target temperature 202) throughout the process.
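A minimal control-loop sketch of this trigger condition follows, using the fixed setback temperature of FIG. 2; the callables and the numeric values are illustrative assumptions, not part of the disclosure:

```python
import time

def iwl_procedure(read_temp, is_idle, run_iwl_burst, armed,
                  t_setback_c=75.0, poll_s=0.1):
    """Sketch of the FIG. 2 trigger condition: while the IWL procedure is
    armed, execute an idle workload whenever the processor is idle and its
    temperature falls below the setback temperature."""
    while armed():
        if is_idle() and read_temp() < t_setback_c:
            run_iwl_burst()   # short, interruptible burst that heats the chip
        time.sleep(poll_s)    # re-check; hysteresis emerges from the
                              # burst/cool cycle around t_setback_c
```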
In some examples, execution of an IWL may be triggered when the temperature of the GPU drops to the target temperature 202, and execution of the IWL is stopped when the temperature returns to the setback temperature 206. In some examples, execution of the IWL has no purpose other than to heat the GPU. In this manner, execution of the IWL can be interrupted at any time, without any meaningful loss of data, to quickly transition to executing an SWL if a new active period begins during ongoing execution of the IWL. In some examples, execution of the IWL may provide a useful purpose that is secondary to and/or separate from heating the processor. For instance, in some examples, a primary processor (CPU) may offload tasks to the GPU that serve a purpose for the operation of the primary processor. In some examples, such offloaded tasks may be non-critical tasks so that they can be terminated and/or interrupted to enable the GPU to transition to its primary purpose of executing SWLs. Additionally or alternatively, in some examples, other remote devices (e.g., in an edge network) may provide requests to the GPU that serve as IWLs to heat the GPU during otherwise idle periods.

As shown by comparison with FIG. 1, the execution of the IWL during the third idle period 116 as represented in FIG. 2 maintains the temperature of the GPU above the target temperature 202 throughout the entirety of the third idle period 116. As a result, the large temperature drop represented in the third idle period 116 of FIG. 1 is avoided. Furthermore, the temperature of the GPU remains above the target temperature 202 through the third and fourth active periods 106, 108 and the fourth and fifth idle periods 118, 120. As a result, the temperature of the GPU is already elevated when the fifth active period 110 begins, as represented in FIG. 2, thereby avoiding the large temperature increase represented in FIG. 1 during the corresponding active period 110. Thus, whereas the temperature profile of the GPU represented in FIG. 1 includes two large thermal fluctuations reaching up to around the maximum temperature 128 and down to around the idle temperature 124, the temperature profile of the GPU represented in FIG. 2 includes only one such cycle through the high and low temperatures, at the beginning and after the ending of the IWL procedure.

As shown in the illustrated example, the IWL procedure is associated with an IWL timeout period 208 that defines a duration for the IWL procedure beginning when it is first armed or initiated (e.g., when the temperature of the GPU first passes the target temperature 202). After the timeout period 208 has elapsed, the IWL procedure is disarmed or deactivated, meaning that the temperature of the GPU is no longer monitored for the trigger condition (e.g., dropping below the setback temperature 206 during an idle period) that causes execution of an IWL. Rather, as represented in FIG. 2, after the IWL procedure is disarmed or disabled, the temperature of the GPU is allowed to fall below the setback temperature 206 and the target temperature 202 (e.g., during the sixth idle period 122). If a subsequent SWL is executed that causes the temperature to again rise above the target temperature 202, the IWL procedure would again be armed or enabled and continue for another timeout period 208.

In the particular example of FIG. 2, the timeout period 208 ends during the fifth active period 110.
However, this is merely a function of the timing of the active periods 102, 104, 106, 108, 110 of the illustrated example relative to the duration of the timeout period 208. In other situations, the timeout period 208 may end during an idle period. For instance, assume that the SWL only included the first four kernels (K1 through K4) such that the entire time extending from the fifth idle period 120 through the sixth idle period 122 was one continuous idle period. In such a scenario, the IWL procedure would maintain the temperature of the GPU hovering around the setback temperature 206 (as represented by the last portion of the fifth idle period 120 in FIG. 2) all the way until the IWL procedure is disarmed or terminated. After that point, the GPU would be allowed to cool to the idle temperature 124. Notably, if there were no timeout period 208, the IWL procedure would maintain the temperature of the GPU hovering around the setback temperature 206 indefinitely. Executing an IWL indefinitely with no subsequent SWL to execute is a waste of energy. Accordingly, applying the timeout period 208 to the IWL procedure serves to save power while still reducing the number of large thermal fluctuations.

In the illustrated example of FIG. 2, the IWL timeout period 208 is a configurable parameter that is defined to have a fixed duration. In some examples, the duration of the timeout period 208 is defined based on a threshold frequency of large thermal fluctuations to which the GPU is to be subjected. For instance, testing has shown that the reliability of processors begins to degrade significantly when such processors experience a significant number of large thermal fluctuations during their use. What constitutes a significant number of large thermal fluctuations depends on the particular processor being heated and cooled, the rate of the heating and cooling, and/or other factors. However, if the number of large thermal fluctuations within a given period that would begin to cause the reliability of a processor to degrade is known, it may be possible to select a suitable timeout period that avoids that number of fluctuations in the given period. For example, assume that a particular processor is found to begin to degrade when it experiences more than 30 large thermal fluctuations a day. In such a scenario, to guarantee that no more than 30 thermal fluctuations are experienced in a day, the timeout period 208 may be set to 1/30th of a day (e.g., 48 minutes). In some examples, the timeout period 208 may be set for a longer or shorter period depending on the level of reliability desired and/or the importance of reducing power consumption. More generally, the timeout period 208 and/or any other parameter(s) defining the implementation of teachings disclosed herein may be modified in response to an ongoing quasi-static or dynamic reliability lifetime analysis and/or based on any other factors relevant to the particular scenario in which teachings disclosed herein are implemented. For instance, in some examples, the timeout period 208 depends on the time of day (e.g., shorter during evening hours when the system is less likely to be used by a user), the day of the week (e.g., shorter during weekends), the time of year (e.g., winter versus summer), etc. In some examples, the timeout period 208 is determined based on historical usage patterns.
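The sizing rule from this example reduces to simple arithmetic; a short worked sketch with the example's numbers:

```python
# If reliability degrades beyond some number of large thermal fluctuations
# per day, a timeout of one day divided by that number caps the daily
# fluctuation count (the 30-per-day example above).
MAX_FLUCTUATIONS_PER_DAY = 30
timeout_minutes = 24 * 60 / MAX_FLUCTUATIONS_PER_DAY
assert timeout_minutes == 48.0  # 1/30th of a day
```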
In some examples, the timeout period 208 changes based on the number of thermal fluctuations that have occurred within a given period (e.g., the timeout period increases if fluctuations are observed relatively regularly and decreases if fluctuations are observed relatively rarely).

In some examples, rather than defining the timeout period 208 as a fixed duration measured from when the IWL procedure is first enabled or armed, the IWL procedure may be disabled or disarmed based on the duration and/or spacing of the active periods relative to the idle periods. For instance, in some examples, the IWL procedure is disarmed whenever a single continuous idle period extends beyond a threshold idle time period. That is, in some examples, a timer begins counting as soon as an idle period has begun. If a subsequent active period begins before the threshold idle period duration elapses, the timer is reset and does not begin counting again until the subsequent active period ends and a new idle period begins. However, if an idle period extends longer than the threshold idle period duration, the IWL procedure ends and is disarmed. In some examples, the threshold idle period duration is significantly less than the timeout period 208 described above so as to reduce the amount of time that power is consumed executing IWLs when there is no immediate need to maintain the GPU at an elevated temperature. While this approach can improve power efficiency, it may increase the total number of thermal fluctuations experienced over time if the active periods are spaced apart by more than the threshold duration but occur more often than the timeout period 208 described above. To avoid this possibility, in some examples, the threshold idle time period only begins counting after the timeout period 208 has elapsed. That is, in some examples, the IWL procedure is configured to continue for at least the timeout period 208. Thereafter, the IWL procedure only ends and is disarmed once a subsequent idle period extends longer than the threshold idle time period.

Defining a fixed target temperature 202 and an associated fixed setback temperature 206 as described in connection with FIG. 2 is suitable when the peak temperature reached by the GPU is relatively consistent and known in advance. For instance, fixed values for these parameters are often suitable for high performance computing applications where it is expected that the GPU will reach the maximum temperature 128 and that maximum temperature is known. In some situations, the maximum temperature 128 may not be known and/or the usage of the GPU may be such that it does not always reach the maximum temperature 128. Accordingly, in some examples, a target temperature may be defined dynamically relative to maximum and/or minimum temperatures observed for the GPU during relevant periods of time, as shown and described in connection with FIG. 3.

In particular, FIG. 3 illustrates the temperature of the same GPU of FIG. 1 executing the same five-kernel SWL represented in FIG. 1, except with an IWL executed during at least some portion(s) of some of the idle periods 112, 114, 116, 118, 120, 122 to maintain the temperature of the GPU above a dynamic target temperature (represented over time by the line identified by reference numeral 302). As with FIGS. 1 and 2, while FIG. 3 is described with reference to a GPU, teachings disclosed herein may be suitably adapted to any type of processor.
As shown in the illustrated example, during the IWL procedure, the dynamic target temperature 302 is defined to be a threshold or target temperature delta 304 below an armed maximum temperature of the GPU. As used herein, the armed maximum temperature is the maximum temperature of the GPU observed since the IWL procedure was first armed or initiated. Thus, in the illustrated example, at point 204 when the IWL procedure is first armed, the maximum temperature observed corresponds to the current temperature of the GPU. Thus, the initial armed maximum temperature corresponds to the temperature of the GPU at the time the IWL procedure is armed. As time progresses, the temperature of the GPU continues to increase. As a result, the armed maximum temperature also increases because each new sample of the temperature constitutes a new maximum since the IWL procedure was armed. If the current temperature of the GPU drops from a previously higher maximum (e.g., at the dip 306 in FIG. 3), the armed maximum temperature does not drop but remains at the highest observed temperature. Thus, once the temperature of the GPU reaches the maximum temperature 128 (e.g., at point 308 in FIG. 3), this will become the armed maximum temperature for the remainder of the IWL procedure. As defined above, the dynamic target temperature 302 is defined to be the target temperature delta 304 below the armed maximum temperature. Thus, as shown in the illustrated example, the dynamic target temperature 302 begins at the idle temperature 124 (which is the target temperature delta 304 below the temperature of the GPU at point 204 in this example) and varies over time to follow the increasing temperature of the GPU during the first active period 102. Notably, the dynamic target temperature 302 remains constant during the dip 306 in the GPU temperature because the armed maximum temperature is also constant during that time. The dynamic target temperature 302 then rises in conjunction with the temperature reaching the maximum temperature 128 at point 308, and then the dynamic target temperature remains constant for the remainder of the IWL procedure.

As described above, the target temperature 202 in FIG. 2 is a fixed value that defines both the temperature at which the IWL procedure is armed or initiated and the target temperature above which the GPU is maintained during the IWL procedure. By contrast, in the illustrated example of FIG. 3, the temperature at which the IWL procedure is armed is defined independently of the dynamic target temperature 302 used during the IWL procedure. In particular, while the dynamic target temperature 302 during the IWL procedure is defined to be a target temperature delta 304 below the armed maximum temperature, the IWL procedure is first armed or activated when the temperature of the GPU reaches the target temperature delta 304 above a disarmed minimum temperature. As used herein, the disarmed minimum temperature refers to the minimum temperature of the GPU observed since the IWL procedure was last disarmed or ended. Thus, in the illustrated example, during the first idle period 112 the minimum temperature observed corresponds to the idle temperature 124. Thus, the IWL procedure is armed when the GPU temperature rises to a temperature corresponding to the target temperature delta 304 above the idle temperature 124.

As mentioned above, the dynamic target temperature 302 initially begins at the target temperature delta 304 below the temperature of the GPU at the time the IWL procedure is initially armed.
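The armed maximum temperature and the thresholds derived from it amount to a running maximum plus two offsets; a minimal sketch follows, with the delta values chosen purely for illustration:

```python
class DynamicThresholds:
    """Sketch of the dynamic thresholds of FIG. 3: the armed maximum is a
    running maximum since arming; target = armed_max - target_delta (302)
    and setback = target + setback_delta (310). Delta values illustrative."""

    def __init__(self, target_delta=20.0, setback_delta=5.0):
        self.target_delta = target_delta      # delta 304
        self.setback_delta = setback_delta    # delta 312
        self.armed_max = None                 # set when the procedure arms

    def arm(self, current_temp):
        self.armed_max = current_temp         # initial armed maximum (point 204)

    def update(self, current_temp):
        # Running maximum: never decreases, even during a dip (306).
        self.armed_max = max(self.armed_max, current_temp)

    @property
    def target(self):                         # dynamic target temperature 302
        return self.armed_max - self.target_delta

    @property
    def setback(self):                        # dynamic setback temperature 310
        return self.target + self.setback_delta
```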
Inasmuch as the IWL procedure is initially armed when the GPU temperature is above the disarmed minimum temperature by the target temperature delta 304, the initial value of the dynamic target temperature 302 corresponds to the disarmed minimum temperature prior to the IWL procedure being armed. This is why the initial target temperature 302 corresponds to the idle temperature 124 when the IWL procedure begins, as represented in the illustrated example of FIG. 3 and noted above. However, if the disarmed minimum temperature was higher than the idle temperature 124 when the IWL procedure is initiated (e.g., the GPU did not have an opportunity to fully cool since the ending of a previous IWL procedure), the initial dynamic target temperature 302 would also be higher than the idle temperature 124.During the IWL procedure of FIG. 3 , the temperature of the GPU is monitored to identify a trigger condition to cause the GPU to execute an IWL to maintain the temperature of the GPU above the target temperature 302. In the illustrated example, the trigger condition corresponds to the temperature of the GPU falling below a setback temperature (Tsetback) (represented over time by the line identified by reference numeral 310). In some examples, the setback temperature 310 of FIG. 3 serves the same purpose as the setback temperature 206 described above in connection with FIG. 2 . However, unlike the setback temperature 206 of FIG. 2 , which is a fixed temperature, the setback temperature 310 varies across time relative to changes in the dynamic target temperature 302. More particularly, the setback temperature 310 of FIG. 3 is a configurable parameter defined to be a setback temperature delta 312 above the target temperature 302. Thus, as the target temperature 302 increases, the setback temperature 310 also increases. In this manner, regardless of how high the temperature of the GPU rises (e.g., regardless of the armed maximum temperature), if the temperature begins to drop and decreases below the setback temperature 310, the execution of an IWL will be initiated to prevent the temperature from falling as low as the target temperature 302. As a result, large thermal fluctuations that can impose undue stress on the GPU are avoided during the IWL procedure. (A short sketch of this relation appears below.)As described above, the IWL procedure implemented in the illustrated examples of FIGS. 2 and 3 limits the execution of IWLs to the idle periods 112, 114, 116, 118, 120, 122 so as to not interfere with the performance of the GPU during the active periods. In such an approach, there is still the possibility of large thermal fluctuations during the active periods 102, 104, 106, 108, 110. That is, in some situations, SWLs may be scheduled for execution, thereby establishing the GPU as being in an active state, but the SWLs may not require the GPU to operate at full capacity. As a result, the temperature of the GPU may drop to relatively low temperatures (e.g., below the target temperature 202, 302). In some examples, the possibility of such thermal fluctuations is assumed to be relatively rare and/or to have an acceptable impact on the reliability of the GPU in view of the importance of not interfering with the performance of the GPU. However, in some examples, if reliability is of greater concern to a user than performance, the IWL procedure may provide IWLs to the GPU for execution during the active periods if needed to maintain the temperature of the GPU above the target temperature 202, 302. 
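The relation among the armed maximum temperature, the dynamic target temperature 302, and the dynamic setback temperature 310 described above can be summarized in a short sketch (hypothetical names and illustrative values):

    # The setback tracks the dynamic target by a fixed setback delta; an IWL is
    # triggered whenever the current temperature falls below the setback.
    def dynamic_setback(armed_max_c, target_delta_c, setback_delta_c):
        target_c = armed_max_c - target_delta_c   # dynamic target temperature
        return target_c + setback_delta_c         # dynamic setback temperature

    def iwl_needed(current_temp_c, armed_max_c, target_delta_c, setback_delta_c):
        return current_temp_c < dynamic_setback(
            armed_max_c, target_delta_c, setback_delta_c)

    # With a 15 C target delta and a 5 C setback delta, an armed maximum of
    # 80 C yields a 65 C target and a 70 C setback, so a reading of 68 C
    # triggers an IWL before the temperature can fall as low as the target.
    print(iwl_needed(68.0, 80.0, 15.0, 5.0))  # True

The same relation appears at block 622 of FIG. 6, described below, where the setback temperature 310 is computed as the armed maximum temperature minus the target temperature delta 304 plus the setback temperature delta 312.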
In examples where IWLs may be executed even during active periods, there would be no need to monitor or determine whether the GPU is in an active state or an idle state. Rather, the only trigger for the execution of an IWL would be whether the current temperature is above or below the setback temperature 206, 310.As noted above, in examples where the execution of IWLs is limited to idle periods so as not to affect performance of the execution of any SWL during the active periods, there is a possibility that some large thermal fluctuations may occur within the active periods. In some examples, the IWL procedure includes a mechanism to disarm or deactivate prior to the timeout period 208 expiring in response to such situations so as not to exacerbate the problem and increase the frequency and/or extent of thermal fluctuations. For instance, assume the particular SWL executed by the GPU during the first active period 102 results in the temperature of the GPU initially rising enough to trigger the activation or arming of the IWL procedure and then dropping back down close to the idle temperature before the active period 102 ends. With the IWL procedure now armed, upon entering the following idle period 114, the trigger condition for executing IWLs based on the temperature of the GPU being below the setback temperature 206, 310 would be satisfied. As a result, the IWL procedure would provide IWLs to the GPU for execution to drive up the temperature of the GPU toward the setback temperature 206, 310. However, as can be seen, because the initial temperature of the GPU at the beginning of the idle period 114 was low (e.g., at or near the idle temperature), this process would produce a large thermal fluctuation rather than avoid it. Accordingly, in some examples, if a relatively large drop in temperature (e.g., above a threshold) is detected during an active period, the IWL procedure is automatically disarmed or deactivated.FIG. 4 illustrates an example computing system 400 constructed in accordance with teachings disclosed herein. The example computing system 400 includes a processor 402 to execute SWLs provided by a user. The processor 402 may be any type of processor such as a central processing unit (CPU), graphics processing unit (GPU), an accelerator, etc. As shown in the illustrated example, the processor 402 includes an example workload scheduler 404 and one or more temperature sensor(s) 406. The example workload scheduler 404 receives workloads submitted from a user (e.g., SWLs) and schedules the workloads for execution. The example temperature sensor(s) 406 monitor the temperature of the processor 402 and output signals indicative of the temperature. Thus, the temperature sensor(s) 406 are a means for sensing the temperature of the processor 402. In some examples, one or more of the temperature sensor(s) 406 are included within a package of the processor 402. In some examples, one or more of the temperature sensor(s) 406 are mounted on a surface of a package of the processor 402. In some examples, one or more of the temperature sensor(s) 406 are mounted adjacent to a package of the processor 402 (e.g., on an adjacent PCB).Additionally, the example computing system 400 of FIG. 4 includes an example thermal fluctuation controller 408 to reduce thermal fluctuations in the processor 402 due to the heating and cooling of the processor 402 during active and idle periods as outlined above in connection with FIGS. 1-3 . 
That is, the example thermal fluctuation controller 408 monitors the temperature of the processor 402 and the activity of the processor 402 to provide IWLs for execution by the processor 402 at suitable times (e.g., during idle periods) to maintain the temperature of the processor 402 within a threshold range of a peak temperature. As represented in the illustrated example of FIG. 4 , the thermal fluctuation controller 408 is external to the processor 402 and associated with a separate processor. For example, the thermal fluctuation controller 408 may be implemented by a CPU that interacts with the processor 402, which may be a GPU. In other examples, the thermal fluctuation controller 408 may be internal to and implemented by the processor 402 itself. In some examples, at least some functionalities of the thermal fluctuation controller 408 are implemented internally by the processor 402 and at least some functionalities of the thermal fluctuation controller 408 are implemented externally by a different processor.As shown in the illustrated example of FIG. 4 , the thermal fluctuation controller 408 includes an example workload analyzer 410, an example temperature analyzer 412, an example idle workload controller 414, an example timer 416, example memory 418, and an example idle workload database 420. The example workload analyzer 410 analyzes the current state of the processor 402 to determine whether the processor 402 is currently active (e.g., executing or scheduled to execute a SWL) or currently idle (e.g., in an idle period). More particularly, in some examples, the workload analyzer 410 is in communication with the workload scheduler 404 of the processor 402 and/or has access to schedule information generated by the workload scheduler 404. The workload analyzer 410 analyzes such schedule information to confirm whether any SWL submissions have been provided to the scheduler for execution. If at least one SWL submission is scheduled for execution, the workload analyzer 410 determines that the processor 402 is in an active state with pending SWLs to execute. If the schedule information indicates that no pending SWL submissions are scheduled to be executed, the workload analyzer 410 determines that the processor 402 is in an idle state. Thus, in some examples, the workload analyzer 410, as a structure, is a means for analyzing a workload to determine whether the processor 402 is in an idle state or an active state. In some examples, the workload analyzer 410 is one of hardware, firmware, or software. In some examples, the workload analyzer 410 is a processor, a dedicated processor unit, a digital signal processor (DSP), etc. Alternatively, the example workload analyzer 410 may be a block of code embodied as firmware or software.The example temperature analyzer 412 is in communication with the temperature sensor(s) 406 of the processor 402 and/or has access to the temperature data output by the temperature sensor(s) 406. In some examples, the temperature analyzer 412 analyzes the temperature data to determine a temperature of the processor 402. In some examples, different portions of the processor 402 may be at different temperatures such that different temperature sensors 406 output different measured temperatures. In some examples, the temperature analyzer 412 identifies the highest reported temperature as the temperature of the processor 402 to be used in subsequent analysis. (A brief sketch of this reduction follows.) 
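A minimal sketch, with hypothetical names, of this reduction is as follows; the maximum is used to match the behavior described above, and the alternatives noted next are one-line substitutions:

    def processor_temperature(readings_c, reduce=max):
        # readings_c: per-sensor temperatures in degrees Celsius; reduce may
        # be max (as described above), min, or an averaging function.
        return reduce(readings_c)

    print(processor_temperature([71.5, 74.0, 69.8]))  # 74.0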
In some examples, a different temperature than the highest reported temperature may be used (e.g., the lowest reported temperature, an average of temperatures reported by some or all of the temperature sensors 406, etc.). In some examples, the temperature analyzer 412 compares the temperature of the processor 402 to one or more parameters (e.g., thresholds, set points, temperature ranges, etc.) associated with the initiation and/or implementation of the IWL procedure discussed above in connection with FIGS. 1-3 . For example, if a fixed target temperature 202 is defined, the temperature analyzer 412 compares the temperature of the processor 402 to the target temperature 202 to determine when to arm or enable the IWL procedure. If a dynamic target temperature 302 is to be used, the temperature analyzer 412 compares the temperature of the processor 402 to the disarmed minimum temperature plus the target temperature delta 304 to determine when to arm the IWL procedure. Once the IWL procedure is armed, the temperature analyzer 412 compares the temperature of the processor 402 to the setback temperature (which may be a fixed setback temperature 206 as described in FIG. 2 or a dynamic setback temperature 310 as described in FIG. 3 ) to determine whether an IWL needs to be executed to maintain the temperature of the processor 402 above the corresponding target temperature 202, 302. In some examples, the temperature analyzer 412 is one of hardware, firmware, or software. In some examples, the temperature analyzer 412 is a processor, a dedicated processor unit, a digital signal processor (DSP), etc. Alternatively, the example temperature analyzer 412 may be a block of code embodied as firmware or software. In some examples, the temperature analyzer 412 includes and/or is incorporated with the temperature sensor(s) 406.The example idle workload controller 414 of the illustrated example controls the initiation, operation, and termination of the IWL procedure. Thus, in some examples, the idle workload controller 414, as a structure, is a means for controlling an IWL procedure. That is, when feedback from the temperature analyzer 412 indicates that the temperature conditions for arming are satisfied, the idle workload controller 414 arms or initiates the IWL procedure. When feedback from the temperature analyzer 412 indicates the temperature of the processor 402 has dropped below the setback temperature 206, 310, the idle workload controller 414 determines whether to submit an IWL submission to the workload scheduler 404 of the processor 402 to execute the IWL. In some examples, execution of an IWL is to be limited to idle periods. Accordingly, in some examples, the idle workload controller 414 also uses feedback from the workload analyzer 410 to determine whether the processor 402 is currently in an active state or an idle state. In some examples, the particular IWL provided to the processor 402 for execution is selected from the idle workload database 420. In some examples, there may be multiple different IWLs from which the idle workload controller 414 may select. The different IWLs may correspond to any suitable set of commands that may be provided to the processor for execution. Different IWLs may be defined to affect the temperature of the processor 402 in different ways (e.g., heat it faster or slower). One such IWL is sketched below. 
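What an entry in the idle workload database 420 might look like can be suggested with a minimal, purely illustrative sketch: a simple loopable compute kernel whose only purpose is to generate heat. On a GPU the IWL would be a device kernel submitted through the workload scheduler 404; the CPU analogue below is an assumption for illustration only:

    import threading

    def start_idle_workload(stop_event, num_threads=4):
        # Replicating the loop across threads spreads heating across execution
        # units (in CPython, truly parallel heating would require processes).
        def burn():
            x = 1.0001
            while not stop_event.is_set():
                x = (x * x) % 1e9  # arithmetic with no purpose other than heat
        threads = [threading.Thread(target=burn) for _ in range(num_threads)]
        for t in threads:
            t.start()
        return threads

    # The controller would set stop_event once the temperature recovers to or
    # above the setback temperature 206, 310, and then join the threads.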
In some examples, the IWLs are defined as relatively simple workloads that may be looped so that execution may be ongoing until such time as the IWL is no longer needed (e.g., the temperature of the processor 402 has been raised back up to or above the setback temperature 206, 310). In some examples, the IWLs are defined to have multiple threads to cause different execution units of the processor 402 to operate at the same time for a more evenly distributed heating of the processor 402.The example timer 416 of the example thermal fluctuation controller 408 is used by the idle workload controller 414 to determine when to end or disarm the IWL procedure. That is, in some examples, the idle workload controller 414 starts the timer 416 when the IWL procedure is first armed. When the timer 416 reaches the timeout period 208, the idle workload controller 414 terminates or disarms the IWL procedure.The example memory 418 is used to store values for the parameters used during the IWL procedure. In some examples, these values may be configured once by a user (or defined by an original equipment manufacturer) and remain fixed until changed by the user (e.g., the fixed target temperature 202, the setback temperature 206, the timeout period 208, the target temperature delta 304, the setback temperature delta 312). In some examples, the values in the memory are updated on an ongoing basis based on changing circumstances (e.g., the dynamic target temperature 302, the dynamic setback temperature 310, the armed maximum temperature, the disarmed minimum temperature, the current temperature of the processor 402, etc.).While an example manner of implementing the thermal fluctuation controller 408 is illustrated in FIG. 4 , one or more of the elements, processes and/or devices illustrated in FIG. 4 may be combined, divided, rearranged, omitted, eliminated and/or implemented in any other way. Further, the example workload analyzer 410, the example temperature analyzer 412, the example idle workload controller 414, the example timer 416, the example memory 418, the example idle workload database 420, and/or, more generally, the example thermal fluctuation controller 408 of FIG. 4 may be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware. Thus, for example, any of the example workload analyzer 410, the example temperature analyzer 412, the example idle workload controller 414, the example timer 416, the example memory 418, the example idle workload database 420 and/or, more generally, the example thermal fluctuation controller 408 could be implemented by one or more analog or digital circuit(s), logic circuits, programmable processor(s), programmable controller(s), graphics processing unit(s) (GPU(s)), digital signal processor(s) (DSP(s)), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)). When reading any of the apparatus or system claims of this patent to cover a purely software and/or firmware implementation, at least one of the example workload analyzer 410, the example temperature analyzer 412, the example idle workload controller 414, the example timer 416, the example memory 418, and/or the example idle workload database 420 is/are hereby expressly defined to include a non-transitory computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), a Blu-ray disk, etc. including the software and/or firmware. 
Further still, the example thermal fluctuation controller 408 may include one or more elements, processes and/or devices in addition to, or instead of, those illustrated in FIG. 4 , and/or may include more than one of any or all of the illustrated elements, processes and devices. As used herein, the phrase "in communication," including variations thereof, encompasses direct communication and/or indirect communication through one or more intermediary components, and does not require direct physical (e.g., wired) communication and/or constant communication, but rather additionally includes selective communication at periodic intervals, scheduled intervals, aperiodic intervals, and/or one-time events.Flowcharts representative of example hardware logic, machine readable instructions, hardware implemented state machines, and/or any combination thereof for implementing the thermal fluctuation controller 408 of FIG. 4 are shown in FIGS. 5 and 6 . The machine readable instructions may be one or more executable programs or portion(s) of an executable program for execution by a computer processor and/or processor circuitry, such as the processor 712 shown in the example processor platform 700 discussed below in connection with FIG. 7 . The program may be embodied in software stored on a non-transitory computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a DVD, a Blu-ray disk, or a memory associated with the processor 712, but the entire program and/or parts thereof could alternatively be executed by a device other than the processor 712 and/or embodied in firmware or dedicated hardware. Further, although the example programs are described with reference to the flowcharts illustrated in FIGS. 5 and 6 , many other methods of implementing the example thermal fluctuation controller 408 may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined. Additionally or alternatively, any or all of the blocks may be implemented by one or more hardware circuits (e.g., discrete and/or integrated analog and/or digital circuitry, an FPGA, an ASIC, a comparator, an operational-amplifier (op-amp), a logic circuit, etc.) structured to perform the corresponding operation without executing software or firmware. The processor circuitry may be distributed in different network locations and/or local to one or more devices (e.g., a multi-core processor in a single machine, multiple processors distributed across a server rack, etc.).The machine readable instructions described herein may be stored in one or more of a compressed format, an encrypted format, a fragmented format, a compiled format, an executable format, a packaged format, etc. Machine readable instructions as described herein may be stored as data or a data structure (e.g., portions of instructions, code, representations of code, etc.) that may be utilized to create, manufacture, and/or produce machine executable instructions. For example, the machine readable instructions may be fragmented and stored on one or more storage devices and/or computing devices (e.g., servers) located at the same or different locations of a network or collection of networks (e.g., in the cloud, in edge devices, etc.). The machine readable instructions may require one or more of installation, modification, adaptation, updating, combining, supplementing, configuring, decryption, decompression, unpacking, distribution, reassignment, compilation, etc. 
in order to make them directly readable, interpretable, and/or executable by a computing device and/or other machine. For example, the machine readable instructions may be stored in multiple parts, which are individually compressed, encrypted, and stored on separate computing devices, wherein the parts when decrypted, decompressed, and combined form a set of executable instructions that implement one or more functions that may together form a program such as that described herein.In another example, the machine readable instructions may be stored in a state in which they may be read by processor circuitry, but require addition of a library (e.g., a dynamic link library (DLL)), a software development kit (SDK), an application programming interface (API), etc. in order to execute the instructions on a particular computing device or other device. In another example, the machine readable instructions may need to be configured (e.g., settings stored, data input, network addresses recorded, etc.) before the machine readable instructions and/or the corresponding program(s) can be executed in whole or in part. Thus, machine readable media, as used herein, may include machine readable instructions and/or program(s) regardless of the particular format or state of the machine readable instructions and/or program(s) when stored or otherwise at rest or in transit.The machine readable instructions described herein can be represented by any past, present, or future instruction language, scripting language, programming language, etc. For example, the machine readable instructions may be represented using any of the following languages: C, C++, Java, C#, Perl, Python, JavaScript, HyperText Markup Language (HTML), Structured Query Language (SQL), Swift, etc.As mentioned above, the example processes of FIGS. 5 and 6 may be implemented using executable instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media."Including" and "comprising" (and all forms and tenses thereof) are used herein to be open ended terms. Thus, whenever a claim employs any form of "include" or "comprise" (e.g., comprises, includes, comprising, including, having, etc.) as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc. may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase "at least" is used as the transition term in, for example, a preamble of a claim, it is open-ended in the same manner as the term "comprising" and "including" are open ended. The term "and/or" when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, and (7) A with B and with C. 
As used herein in the context of describing structures, components, items, objects and/or things, the phrase "at least one of A and B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing structures, components, items, objects and/or things, the phrase "at least one of A or B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase "at least one of A and B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase "at least one of A or B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B.As used herein, singular references (e.g., "a", "an", "first", "second", etc.) do not exclude a plurality. The term "a" or "an" entity, as used herein, refers to one or more of that entity. The terms "a" (or "an"), "one or more", and "at least one" can be used interchangeably herein. Furthermore, although individually listed, a plurality of means, elements or method actions may be implemented by, e.g., a single unit or processor. Additionally, although individual features may be included in different examples or claims, these may possibly be combined, and the inclusion in different examples or claims does not imply that a combination of features is not feasible and/or advantageous.The flowchart of FIG. 5 represents example machine readable instructions to implement the thermal fluctuation controller 408 of FIG. 4 using a fixed target temperature 202 as described above in connection with FIG. 2 . The program of FIG. 5 begins at block 502 where the example temperature analyzer 412 obtains the current temperature of the processor 402. At block 504, the example idle workload controller 414 determines whether the IWL procedure is armed. If not, control advances to block 506 where the example temperature analyzer 412 determines whether the current temperature is at or above the target temperature 202. If not, there is no need to arm the IWL procedure. Accordingly, control advances to block 526 where the thermal fluctuation controller 408 determines whether to continue the process. If so, control returns to block 502 to obtain an updated measurement of the current temperature. Returning to block 506, if the example temperature analyzer 412 determines that the current temperature is at or above the target temperature 202, control advances to block 508 where the example idle workload controller 414 arms the IWL procedure. In the illustrated example, arming the IWL procedure initiates the monitoring of the temperature of the processor 402 relative to the setback temperature 206. Further, arming the IWL procedure includes starting the example timer 416 to count towards the timeout period 208. 
In some examples, rather than starting a timer, the idle workload controller 414 may store the current time (as indicated by the timer 416) in the example memory 418 as a point of reference to compare against the timeout period 208 as time progresses. After the IWL procedure is armed at block 508, control advances to block 526.Returning to block 504, if the example idle workload controller 414 determines that the IWL procedure is armed, control advances to block 510. At block 510, the example idle workload controller 414 saves and/or updates (e.g., in the example memory 418) an armed maximum temperature. That is, if the most recent measurement of the current temperature (obtained at block 502) is the highest temperature observed since the IWL procedure was first armed, that temperature is set as the armed maximum temperature. If the current temperature is less than the armed maximum temperature, the armed maximum temperature remains unchanged.At block 512, the example idle workload controller 414 determines whether the timeout period 208 has elapsed. If not, control advances to block 514 where the example workload analyzer 410 determines whether any SWL is scheduled. If so, then no IWL is to be executed so as to not interfere with the performance of the processor 402 when executing the SWL. Accordingly, control advances to block 516 where the example temperature analyzer 412 determines whether the armed maximum temperature (set at block 510) minus the current temperature (obtained at block 502) satisfies (e.g., is greater than) a threshold. Block 516 serves to identify situations where large temperature drops (e.g., exceeding the threshold) occur during an active period so as to not inadvertently cause the temperature of the processor 402 to increase during an idle period if it has already cooled during a preceding active period. Thus, if the threshold is satisfied (e.g., the difference between the armed maximum temperature and the current temperature exceeds the threshold), control advances to block 518 where the example idle workload controller 414 disarms the IWL procedure. Thereafter, control advances to block 526 to determine whether to continue the process as discussed above. If the example temperature analyzer 412 determines, at block 516, that the threshold is not satisfied, control advances directly to block 526 such that the IWL procedure remains armed.Returning to block 514, if the example workload analyzer 410 determines that no SWL is scheduled, control advances to block 520 where the example temperature analyzer 412 determines whether the current temperature is lower than the setback temperature 206. In some examples, if reliability is more important than performance and the potential for large thermal fluctuations during active periods is to be avoided, blocks 514 and 516 may be omitted. In such examples, if the timeout period has not elapsed (as determined at block 512), control advances directly to block 520. If the example temperature analyzer 412 determines, at block 520, that the current temperature is not lower than the setback temperature 206, then no action needs to be taken so control advances directly to block 526. However, if the current temperature is lower than the setback temperature 206, control advances to block 522 where the example idle workload controller 414 selects an IWL from the example idle workload database 420. At block 524, the example idle workload controller 414 provides the IWL to the processor 402 for execution. A compact sketch of this fixed-target loop follows. 
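The logic of blocks 502 through 526 can be condensed into the following sketch. The configuration fields and the injected callables are hypothetical stand-ins for the structures of FIG. 4, not the claimed implementation; block numbers from the flowchart are noted in the comments:

    from dataclasses import dataclass

    @dataclass
    class FixedConfig:
        target: float          # fixed target temperature 202
        setback: float         # fixed setback temperature 206
        timeout: float         # timeout period 208, in seconds
        drop_threshold: float  # large-drop threshold of block 516

    def fixed_target_loop(cfg, obtain_temp, swl_scheduled, select_iwl,
                          run_iwl, elapsed, should_continue):
        armed = False
        armed_max = start = None
        while should_continue():                           # block 526
            temp = obtain_temp()                           # block 502
            if not armed:                                  # block 504
                if temp >= cfg.target:                     # block 506
                    armed = True                           # block 508
                    armed_max, start = temp, elapsed()     # (start the timer)
                continue
            armed_max = max(armed_max, temp)               # block 510
            if elapsed() - start >= cfg.timeout:           # block 512
                armed = False                              # block 518 (disarm)
            elif swl_scheduled():                          # block 514
                if armed_max - temp > cfg.drop_threshold:  # block 516
                    armed = False                          # block 518 (disarm)
            elif temp < cfg.setback:                       # block 520
                run_iwl(select_iwl())                      # blocks 522 and 524

In the reliability-over-performance variant noted above, the swl_scheduled branch (blocks 514 and 516) is simply removed.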
Thereafter (from block 524), control advances to block 526.Returning to block 512, if the example idle workload controller 414 determines that the timeout period 208 has elapsed, control advances to block 518 where the example idle workload controller 414 disarms the IWL procedure. Thereafter, control advances to block 526 to determine whether to continue the process as discussed above. If so, control again returns to block 502. If not, the example process of FIG. 5 ends.The flowchart of FIG. 6 represents example machine readable instructions to implement the thermal fluctuation controller 408 of FIG. 4 using a dynamic target temperature 302 as described above in connection with FIG. 3 . The program of FIG. 6 begins at block 602 where the example temperature analyzer 412 obtains the current temperature of the processor 402. At block 604, the example idle workload controller 414 determines whether the IWL procedure is armed. If not, control advances to block 606 where the example temperature analyzer 412 determines whether the current temperature is less than the disarmed minimum temperature. If so, control advances to block 608 where the idle workload controller 414 updates the disarmed minimum temperature with the current temperature. That is, the current temperature becomes the new disarmed minimum temperature. Thereafter, control advances to block 636 where the thermal fluctuation controller 408 determines whether to continue the process. If so, control returns to block 602 to obtain an updated measurement of the current temperature.Returning to block 606, if the current temperature is not less than the disarmed minimum temperature, control advances to block 610 where the example temperature analyzer 412 determines whether the current temperature minus the disarmed minimum temperature is at least the target temperature delta 304. If not, there is no need to arm the IWL procedure. Accordingly, control advances to block 636 where the thermal fluctuation controller 408 determines whether to continue the process. If the current temperature minus the disarmed minimum temperature is at least the target temperature delta 304, control advances to block 612 where the example idle workload controller 414 arms the IWL procedure. In the illustrated example, arming the IWL procedure initiates the monitoring of the temperature of the processor 402 relative to the setback temperature 310. Further, arming the IWL procedure includes starting the example timer 416 to count towards the timeout period 208. In some examples, rather than starting a timer, the idle workload controller 414 may store the current time (as indicated by the timer 416) in the example memory 418 as a point of reference to compare against the timeout period 208 as time progresses. After the IWL procedure is armed at block 612, control advances to block 614 where the example idle workload controller 414 resets the disarmed minimum temperature to an upper bound. In some examples, the upper bound can be any suitable value higher than any expected temperature for the processor 402 (e.g., higher than the maximum temperature 128). In this manner, whenever the IWL procedure is disarmed again, the current temperature of the processor 402 at that time will be less than the disarmed minimum temperature (as determined at block 606), so that the disarmed minimum temperature is then defined as the current temperature (at block 608). This arming logic is sketched below. 
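Blocks 602 through 614 can be sketched as follows, again with hypothetical names (the timer of block 612 is omitted for brevity):

    import math

    UPPER_BOUND_C = math.inf  # any value above the highest expected temperature

    class DynamicArming:
        def __init__(self, target_delta_c):
            self.target_delta_c = target_delta_c
            self.disarmed_min_c = UPPER_BOUND_C

        def check_and_arm(self, current_temp_c):
            # Returns True when the IWL procedure should be armed.
            if current_temp_c < self.disarmed_min_c:            # block 606
                self.disarmed_min_c = current_temp_c            # block 608
                return False
            if (current_temp_c - self.disarmed_min_c
                    >= self.target_delta_c):                    # block 610
                self.disarmed_min_c = UPPER_BOUND_C             # blocks 612, 614
                return True
            return False

Resetting the disarmed minimum to the upper bound guarantees that the first temperature sample taken after the next disarm becomes the new disarmed minimum.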
After the disarmed minimum temperature is reset, control advances to block 636 to determine whether to continue the process.Returning to block 604, if the example idle workload controller 414 determines that the IWL procedure is armed, control advances to block 616. At block 616, the example idle workload controller 414 determines whether the timeout period 208 has elapsed. If not, control advances to block 618 where the example temperature analyzer 412 determines whether the current temperature is greater than the armed maximum temperature. If so, control advances to block 620 where the idle workload controller 414 updates the armed maximum temperature with the current temperature. That is, the current temperature becomes the new armed maximum temperature. At block 622, the idle workload controller 414 updates the setback temperature 310 based on the updated armed maximum temperature. More particularly, in some examples, the setback temperature 310 is defined as the armed maximum temperature minus the target temperature delta 304 plus the setback temperature delta 312. Thereafter, control advances to block 624. Returning to block 618, if the current temperature is not greater than the armed maximum temperature, control advances directly to block 624.At block 624, the example workload analyzer 410 determines whether any SWL is scheduled. If so, then no IWL is to be executed so as to not interfere with the performance of the processor 402 when executing the SWL. Accordingly, control advances to block 626 where the example temperature analyzer 412 determines whether the armed maximum temperature (set at block 620) minus the current temperature (obtained at block 602) satisfies (e.g., is greater than) a threshold. Block 626 serves to identify situations where large temperature drops (e.g., exceeding the threshold) occur during an active period so as to not inadvertently cause the temperature of the processor 402 to increase during an idle period if it has already cooled during a preceding active period. Thus, if the threshold is satisfied (e.g., the difference between the armed maximum temperature and the current temperature exceeds the threshold), control advances to block 628 where the example idle workload controller 414 disarms the IWL procedure. Thereafter, control advances to block 636 to determine whether to continue the process as discussed above. If the example temperature analyzer 412 determines, at block 626, that the threshold is not satisfied, control advances directly to block 636 such that the IWL procedure remains armed.Returning to block 624, if the example workload analyzer 410 determines that no SWL is scheduled, control advances to block 630 where the example temperature analyzer 412 determines whether the current temperature is lower than the setback temperature 310. In some examples, if reliability is more important than performance and the potential for large thermal fluctuations during active periods is to be avoided, blocks 624 and 626 may be omitted. In such examples, control advances from blocks 618 and 622 directly to block 630. If the example temperature analyzer 412 determines, at block 630, that the current temperature is not lower than the setback temperature 310, then no action needs to be taken so control advances directly to block 636. However, if the current temperature is lower than the setback temperature 310, control advances to block 632 where the example idle workload controller 414 selects an IWL from the example idle workload database 420. 
At block 634, the example idle workload controller 414 provides the IWL to the processor 402 for execution. Thereafter, control advances to block 636.Returning to block 616, if the example idle workload controller 414 determines that the timeout period 208 has elapsed, control advances to block 628 where the example idle workload controller 414 disarms the IWL procedure. Thereafter, control advances to block 636 to determine whether to continue the process as discussed above. If so, control again returns to block 602. If not, the example process of FIG. 6 ends.FIG. 7 is a block diagram of an example processor platform 700 structured to execute the instructions of FIGS. 5 and 6 to implement the thermal fluctuation controller 408 of FIG. 4 . The processor platform 700 can be, for example, a server, a personal computer, a workstation, a self-learning machine (e.g., a neural network), a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad™), a personal digital assistant (PDA), an Internet appliance, a DVD player, a CD player, a digital video recorder, a Blu-ray player, a gaming console, a personal video recorder, a set top box, a headset or other wearable device, or any other type of computing device.The processor platform 700 of the illustrated example includes a processor 712. The processor 712 of the illustrated example is hardware. For example, the processor 712 can be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, or controllers from any desired family or manufacturer. The hardware processor may be a semiconductor based (e.g., silicon based) device. In this example, the processor implements the example workload analyzer 410, the example temperature analyzer 412, the example idle workload controller 414, and the example timer 416.The processor 712 of the illustrated example includes a local memory 713 (e.g., a cache). The processor 712 of the illustrated example is in communication with a main memory including a volatile memory 714 and a non-volatile memory 716 via a bus 718. The volatile memory 714 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®) and/or any other type of random access memory device. The non-volatile memory 716 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 714, 716 is controlled by a memory controller.The processor platform 700 of the illustrated example also includes an interface circuit 720. The interface circuit 720 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), a Bluetooth® interface, a near field communication (NFC) interface, and/or a PCI express interface.In the illustrated example, one or more input devices 722 are connected to the interface circuit 720. The input device(s) 722 permit(s) a user to enter data and/or commands into the processor 712. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint and/or a voice recognition system.One or more output devices 724 are also connected to the interface circuit 720 of the illustrated example. 
The output devices 724 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube display (CRT), an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer and/or speaker. The interface circuit 720 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip and/or a graphics driver processor.The interface circuit 720 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 726. The communication can be via, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-sight wireless system, a cellular telephone system, etc.The processor platform 700 of the illustrated example also includes one or more mass storage devices 728 for storing software and/or data. Examples of such mass storage devices 728 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, redundant array of independent disks (RAID) systems, and digital versatile disk (DVD) drives. In this example, the mass storage device implements the example memory 418 and/or the example idle workload database 420.The machine executable instructions 732 of FIGS. 5 and 6 may be stored in the mass storage device 728, in the volatile memory 714, in the non-volatile memory 716, and/or on a removable non-transitory computer readable storage medium such as a CD or DVD.From the foregoing, it will be appreciated that example methods, apparatus and articles of manufacture have been disclosed that improve the reliability and/or useful life of a processor by reducing the frequency, number, and/or severity of large thermal fluctuations in the processor between active and idle periods of use. Furthermore, examples disclosed herein achieve this technological benefit without impacting the performance of the processor. 
The disclosed methods, apparatus and articles of manufacture are accordingly directed to one or more improvement(s) in the functioning of a computer.Example 1 includes an apparatus comprising a temperature analyzer to determine a current temperature of a processor, and a controller to provide an idle workload to the processor to execute in response to the current temperature falling below a setback temperature.Example 2 includes the apparatus of example 1, wherein the controller is to provide the idle workload to the processor when the processor is in an idle state and to not provide the idle workload to the processor when the processor is in an active state.Example 3 includes the apparatus of example 2, further including a workload analyzer to determine whether the processor is in the idle state or the active state based on whether a standard workload is scheduled for execution by the processor.Example 4 includes the apparatus of any one of examples 1-3, wherein the controller is to provide the idle workload to the processor when an idle workload procedure is armed and to not provide the idle workload to the processor when the idle workload procedure is disarmed.Example 5 includes the apparatus of example 4, wherein the controller is to arm the idle workload procedure in response to the current temperature exceeding a threshold temperature.Example 6 includes the apparatus of example 5, wherein the threshold temperature is defined by a fixed target temperature.Example 7 includes the apparatus of example 5, wherein the threshold temperature is defined as a target temperature delta above a disarmed minimum temperature, the disarmed minimum temperature corresponding to a lowest value observed for the current temperature of the processor since the idle workload procedure was last disarmed.Example 8 includes the apparatus of any one of examples 4-7, wherein the controller is to disarm the idle workload procedure in response to a timeout period elapsing since the idle workload procedure was last armed.Example 9 includes the apparatus of any one of examples 4-7, wherein the controller is to disarm the idle workload procedure in response to an idle period of the processor exceeding a threshold time period.Example 10 includes the apparatus of any one of examples 4-7, wherein the controller is to disarm the idle workload procedure in response to a difference between the current temperature and an armed maximum temperature exceeding a threshold, the armed maximum temperature corresponding to a highest value observed for the current temperature of the processor since the idle workload procedure was last armed.Example 11 includes the apparatus of any one of examples 1-4, wherein the setback temperature is a fixed temperature delta higher than a target temperature.Example 12 includes the apparatus of example 11, wherein the target temperature is defined by a fixed temperature value.Example 13 includes the apparatus of example 11, wherein the target temperature is defined by a dynamic temperature value, the dynamic temperature value corresponding to a target temperature delta below an armed maximum temperature, the armed maximum temperature corresponding to a highest value observed for the current temperature of the processor during a relevant period of time.Example 14 includes the apparatus of any one of examples 1-13, wherein execution of the idle workload serves no purpose other than to increase the current temperature of the processor.Example 15 includes a non-transitory computer readable medium 
comprising instructions that, when executed, cause a machine to at least determine a current temperature of a processor, and provide an idle workload to the processor to execute in response to the current temperature falling below a setback temperature.Example 16 includes the non-transitory computer readable medium of example 15, wherein instructions cause the machine to provide the idle workload to the processor when the processor is in an idle state and to not provide the idle workload to the processor when the processor is in an active state.Example 17 includes the non-transitory computer readable medium of example 16, wherein instructions cause the machine to determine whether the processor is in the idle state or the active state based on whether a standard workload is scheduled for execution by the processor.Example 18 includes the non-transitory computer readable medium of any one of examples 15-17, wherein instructions cause the machine to provide the idle workload to the processor when an idle workload procedure is armed and to not provide the idle workload to the processor when the idle workload procedure is disarmed.Example 19 includes the non-transitory computer readable medium of example 18, wherein instructions cause the machine to arm the idle workload procedure in response to the current temperature exceeding a threshold temperature.Example 20 includes the non-transitory computer readable medium of example 19, wherein the threshold temperature is defined by a fixed target temperature.Example 21 includes the non-transitory computer readable medium of example 19, wherein the threshold temperature is defined as a target temperature delta above a disarmed minimum temperature, the disarmed minimum temperature corresponding to a lowest value observed for the current temperature of the processor since the idle workload procedure was last disarmed.Example 22 includes the non-transitory computer readable medium of example 18, wherein instructions cause the machine to disarm the idle workload procedure in response to a timeout period elapsing since the idle workload procedure was last armed.Example 23 includes the non-transitory computer readable medium of any one of examples 18-22, wherein instructions cause the machine to disarm the idle workload procedure in response to an idle period of the processor exceeding a threshold time period.Example 24 includes the non-transitory computer readable medium of any one of examples 18-22, wherein instructions cause the machine to disarm the idle workload procedure in response to a difference between the current temperature and an armed maximum temperature exceeding a threshold, the armed maximum temperature corresponding to a highest value observed for the current temperature of the processor since the idle workload procedure was last armed.Example 25 includes the non-transitory computer readable medium of any one of examples 15-19, wherein the setback temperature is a fixed temperature delta higher than a target temperature.Example 26 includes the non-transitory computer readable medium of example 25, wherein the target temperature is defined by a fixed temperature value.Example 27 includes the non-transitory computer readable medium of example 25, wherein the target temperature is defined by a dynamic temperature value, the dynamic temperature value corresponding to a target temperature delta below an armed maximum temperature, the armed maximum temperature corresponding to a highest value observed for the current temperature of the 
processor during a relevant period of time.Example 28 includes the non-transitory computer readable medium of any one of examples 15-27, wherein the processor is a first processor, and the machine corresponds to a second processor different than the first processor.Example 29 includes the non-transitory computer readable medium of any one of examples 15-27, wherein the machine corresponds to the processor.Example 30 includes a method comprising measuring a current temperature of a processor, and causing the processor to execute an idle workload to generate heat in response to the current temperature falling below a setback temperature.Example 31 includes the method of example 30, further including providing the idle workload to the processor when the processor is in an idle state, wherein the idle workload is not provided to the processor when the processor is in an active state.Example 32 includes the method of example 31, further including determining whether the processor is in the idle state or the active state based on whether a standard workload is scheduled for execution by the processor.Example 33 includes the method of any one of examples 30-32, wherein the idle workload is provided to the processor when an idle workload procedure is armed, and the idle workload is not provided to the processor when the idle workload procedure is disarmed.Example 34 includes the method of example 33, further including arming the idle workload procedure in response to the current temperature exceeding a threshold temperature.Example 35 includes the method of example 34, wherein the threshold temperature is defined by a fixed target temperature.Example 36 includes the method of example 34, wherein the threshold temperature is defined as a target temperature delta above a disarmed minimum temperature, the disarmed minimum temperature corresponding to a lowest value observed for the current temperature of the processor since the idle workload procedure was last disarmed.Example 37 includes the method of any one of examples 33-36, further including disarming the idle workload procedure in response to a timeout period elapsing since the idle workload procedure was last armed.Example 38 includes the method of any one of examples 33-36, further including disarming the idle workload procedure in response to an idle period of the processor exceeding a threshold time period.Example 39 includes the method of any one of examples 33-36, further including disarming the idle workload procedure in response to a difference between the current temperature and an armed maximum temperature exceeding a threshold, the armed maximum temperature corresponding to a highest value observed for the current temperature of the processor since the idle workload procedure was last armed.Example 40 includes the method of any one of examples 30-33, wherein the setback temperature is a fixed temperature delta higher than a target temperature.Example 41 includes the method of example 40, wherein the target temperature is defined by a fixed temperature value.Example 42 includes the method of example 40, wherein the target temperature is defined by a dynamic temperature value, the dynamic temperature value corresponding to a target temperature delta below an armed maximum temperature, the armed maximum temperature corresponding to a highest value observed for the current temperature of the processor during a relevant period of time.Example 43 includes an apparatus comprising means for sensing a current temperature of a processor, and 
means for controlling an idle workload procedure, the controlling means to provide an idle workload to the processor to execute in response to the current temperature falling below a setback temperature.Example 44 includes the apparatus of example 43, wherein the controlling means is to provide the idle workload to the processor when the processor is in an idle state and to not provide the idle workload to the processor when the processor is in an active state.Example 45 includes the apparatus of example 44, further including means for analyzing a workload to determine whether the processor is in the idle state or the active state based on whether a standard workload is scheduled for execution by the processor.Example 46 includes the apparatus of any one of examples 43-45, wherein the controlling means is to provide the idle workload to the processor when the idle workload procedure is armed and to not provide the idle workload to the processor when the idle workload procedure is disarmed.Example 47 includes the apparatus of example 46, wherein the controlling means is to arm the idle workload procedure in response to the current temperature exceeding a threshold temperature.Example 48 includes the apparatus of example 47, wherein the threshold temperature is defined by a fixed target temperature.Example 49 includes the apparatus of example 47, wherein the threshold temperature is defined as a target temperature delta above a disarmed minimum temperature, the disarmed minimum temperature corresponding to a lowest value observed for the current temperature of the processor since the idle workload procedure was last disarmed.Example 50 includes the apparatus of any one of examples 46-49, wherein the controlling means is to disarm the idle workload procedure in response to a timeout period elapsing since the idle workload procedure was last armed.Example 51 includes the apparatus of any one of examples 46-49, wherein the controlling means is to disarm the idle workload procedure in response to an idle period of the processor exceeding a threshold time period.Example 52 includes the apparatus of any one of examples 46-49, wherein the controlling means is to disarm the idle workload procedure in response to a difference between the current temperature and an armed maximum temperature exceeding a threshold, the armed maximum temperature corresponding to a highest value observed for the current temperature of the processor since the idle workload procedure was last armed.Example 53 includes the apparatus of any one of examples 43-46, wherein the setback temperature is a fixed temperature delta higher than a target temperature.Example 54 includes the apparatus of example 53, wherein the target temperature is defined by a fixed temperature value.Example 55 includes the apparatus of example 53, wherein the target temperature is defined by a dynamic temperature value, the dynamic temperature value corresponding to a target temperature delta below an armed maximum temperature, the armed maximum temperature corresponding to a highest value observed for the current temperature of the processor during a relevant period of time.Although certain example methods, apparatus and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. 
Although certain example methods, apparatus, and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus, and articles of manufacture fairly falling within the scope of the claims of this patent.The following claims are hereby incorporated into this Detailed Description by this reference, with each claim standing on its own as a separate embodiment of the present disclosure. |
In one embodiment, a converged protocol stack can be used to unify communications from a first communication protocol to a second communication protocol to provide for data transfer across a physical interconnect. This stack can be incorporated in an apparatus that includes a protocol stack for a first communication protocol including transaction and link layers, and a physical (PHY) unit coupled to the protocol stack to provide communication between the apparatus and a device coupled to the apparatus via a physical link. This PHY unit may include a physical unit circuit according to the second communication protocol. Other embodiments are described and claimed. |
1. A device comprising:logic to operate a transaction layer and a link layer based on a Peripheral Component Interconnect Express (PCIe™) communication protocol; anda physical unit coupled to the logic to transmit and receive data over a physical link, the physical unit including an M-PHY electrical layer and a logic layer to interface the logic with the M-PHY electrical layer, the logic layer including a link training and state machine (LTSSM) to perform link training of the physical link and to support a multi-lane configuration of the physical link, the logic layer further including mapping logic to map control symbols from a first encoding to a second encoding.2.The apparatus of claim 1, wherein the LTSSM comprises a state machine that begins link training in a detect state in which lanes of the physical link transition to a HIBERN8 state, and thereafter continues to a configuration state in which parameters of the M-PHY electrical layer are configured.3.The apparatus of claim 1, wherein the LTSSM further causes transmission of at least one electrical idle ordered set (EIOS) so that the lanes of the physical link enter a STALL state after active data transfer.4. The apparatus of claim 3, wherein after the EIOS and prior to entry into the STALL state, the LTSSM further causes transmission of a TAIL OF BURST indication.5.The apparatus of claim 1, wherein the LTSSM configures and initializes the physical link to a link width determined prior to training of the physical link.6.The apparatus of claim 1, wherein the LTSSM power gates the physical unit in a deep low power state.7.The apparatus of claim 6, wherein the LTSSM exits the deep low power state when a negative drive signal is driven on the lanes of the physical link.8.The apparatus of claim 1, wherein the LTSSM provides configuration and initialization of the physical link, support for data transfer, support for state transitions when recovering from link errors, and restart of a port from a low power state.9.The apparatus of claim 1, wherein the LTSSM is configured to support an asymmetric link width configuration of the physical link.10.The apparatus of claim 1, wherein the LTSSM is configured to support dynamic bandwidth scaling of the physical link.11. A device comprising:a transaction layer and a link layer of a load/store communication protocol; anda physical unit coupled to the link layer to interface with a physical link, the physical unit including a physical unit circuit of a second communication protocol having a transmit circuit and a receive circuit, and a logic layer configured to interface the link layer with the physical unit circuit, the logic layer including link logic to perform link training of the physical link and further including mapping logic to map an encoding of control symbols of the load/store communication protocol to an encoding of the second communication protocol.12.The apparatus of claim 11, wherein the link logic comprises a state machine that begins link training in a detect state in which lanes of the physical link transition to a HIBERN8 state, and thereafter continues to a configuration state in which parameters of the physical unit circuit are configured.13.The apparatus of claim 11, wherein the state machine further causes transmission of at least one electrical idle ordered set (EIOS) so that lanes of the physical link enter a STALL state after active data transfer.14. 
The apparatus of claim 13, wherein the state machine further causes transmission of a TAIL OF BURST indication after the EIOS and prior to entry into the STALL state.15.The apparatus of claim 11, wherein the link logic configures and initializes the physical link to a link width determined prior to training of the physical link.16.The apparatus of claim 11, wherein the link logic power gates the physical unit circuit in a deep low power state.17.The apparatus of claim 16, wherein the link logic exits the deep low power state when a negative drive signal is driven on the lanes of the physical link.18.The apparatus of claim 11, wherein the link logic provides configuration and initialization of the physical link, support for data transfer, support for state transitions when recovering from link errors, and restart of a port from a low power state.19.The apparatus of claim 11, wherein the link logic is configured to support a multi-lane configuration of the physical link.20.The apparatus of claim 11, wherein the link logic is configured to support an asymmetric link width configuration of the physical link.21.The apparatus of claim 11, wherein the link logic is configured to support dynamic bandwidth scaling of the physical link.22.A system comprising:a system on chip (SoC) including:a plurality of cores;a transaction layer and a link layer of a Peripheral Component Interconnect Express™ (PCIe™) communication protocol; anda physical unit coupled to the link layer to enable communication via a physical link, the physical unit including an electrical layer of other than the PCIe™ communication protocol and a logic layer to interface the link layer with the electrical layer, the logic layer including a link training and state machine (LTSSM) to perform link training of the physical link, the logic layer further including mapping logic to map control symbols from a first encoding to a second encoding;the physical link coupled between the SoC and a first transceiver;the first transceiver coupled to the SoC via the physical link, the first transceiver comprising:a second transaction layer and a second link layer of the PCIe™ communication protocol; anda second physical unit coupled to the second link layer to enable communication via the physical link, the second physical unit including a second electrical layer of other than the PCIe™ communication protocol and a second logic layer to interface the second link layer with the second electrical layer;an image capture device coupled to the SoC to capture image information; anda touch screen display coupled to the SoC.23.The system of claim 22, wherein the electrical layer includes a plurality of physical unit circuits, each of the physical unit circuits to communicate via a single lane of the physical link.24.The system of claim 22, wherein the LTSSM is configured to support dynamic bandwidth scaling of the physical link.25.The system of claim 22, wherein the SoC further comprises at least one in-order core and at least one out-of-order core.26.The system of claim 22, further comprising a second transceiver coupled to the SoC via a second physical link.27.The system of claim 22, wherein the system comprises a tablet computer.28.The system of claim 22, wherein the LTSSM is configured to support a multi-lane configuration of the physical link.29.The system of claim 22, wherein the LTSSM is configured to support an asymmetric link width configuration of the physical link.30.A method comprising:performing link training of 
the physical link via a link training and state machine (LTSSM), the physical link coupled between a system on chip (SoC) and a device, the LTSSM of a logic layer of an M-PHY that interfaces an electrical layer of the M-PHY with a Peripheral Component Interconnect Express™ (PCIe™) link layer;supporting a multi-lane configuration of the physical link via the LTSSM; andmapping control symbols from a first encoding to a second encoding via mapping logic of the logic layer.31.The method of claim 30, further comprising: operating the physical link with an asymmetric width from the SoC to the device and from the device to the SoC.32.The method of claim 30, further comprising: configuring and initializing the physical link to a link width determined prior to the link training.33.The method of claim 30, further comprising: power gating the M-PHY in a deep low power state.34.The method of claim 33, further comprising: exiting the deep low power state when a negative drive signal is driven on the lanes of the physical link.35.The method of claim 30, further comprising: supporting an asymmetric link width configuration of the physical link via the LTSSM.36.The method of claim 30, further comprising: supporting dynamic bandwidth scaling of the physical link via the LTSSM. |
Optimized link training and management mechanismThis application is a divisional application of the application filed on July 16, 2013, having application number 201380021347.8 and entitled "Optimized Link Training and Management Mechanism".Technical fieldEmbodiments relate to interconnect technologies.BackgroundIn order to provide communication between different devices within a system, some type of interconnection mechanism is used. A wide variety of such interconnects are possible depending on the system implementation. Often, in order to enable two devices to communicate with each other, they share a common communication protocol.A typical communication protocol for communication between devices in a computer system is the Peripheral Component Interconnect Express (PCI Express™ (PCIe™)) communication protocol, based on links according to the PCI Express™ Base Specification Version 3.0 (published November 18, 2010) (hereinafter the PCIe™ Specification). This communication protocol is an example of a load/store input/output (IO) interconnect system. Inter-device communication according to this protocol is usually performed serially at very high speeds. Because the PCIe™ communication protocol was developed in the context of desktop computers, various parameters of the protocol were chosen for maximum performance without regard to power efficiency. As a result, many of its features cannot be scaled down to the lower power solutions suitable for incorporation into mobile systems.In addition to these power issues with conventional load/store communication protocols, existing link management schemes are often complex and involve a large number of states, resulting in lengthy processes for performing transitions between states. This is due in part to existing link management mechanisms having been developed to accommodate many different form factor requirements, such as connectors, different system configurations, and the like. One such example is link management based on the PCIe™ communication protocol.Brief Description of the DrawingsFIG. 1 is a high-level block diagram of a protocol stack for a communication protocol according to an embodiment of the present invention.FIG. 2 is a block diagram of a system on chip (SoC) according to an embodiment of the present invention.FIG. 3 is a block diagram of a physical unit according to another embodiment of the present invention.FIG. 4 is a block diagram illustrating further details of a protocol stack according to an embodiment of the present invention.FIG. 5 is a state diagram for a link training state machine, which can be part of a link manager according to an embodiment of the present invention.FIG. 6 is a flowchart of various states for a sideband mechanism according to an embodiment of the present invention.FIG. 
7 is a flowchart of a method according to an embodiment of the present invention.FIG. 8 is a block diagram of components present in a computer system in accordance with an embodiment of the present invention.FIG. 9 is a block diagram of an example system with which embodiments can be used.Detailed DescriptionEmbodiments may provide input/output (IO) interconnect technology with a low power, load/store architecture that is particularly well suited for use in mobile devices, including cellular telephones such as smartphones, tablet computers, eReaders, and other such equipment.In embodiments, the protocol stack for a given communication protocol can be used with a different communication unit, namely with at least one physical (PHY) unit different from the one specified for the given communication protocol. A physical unit includes the logical and electrical sub-layers of a physical layer that provide physical, electrical communication of information signals over an interconnect, such as a link that couples two separate semiconductor dies; the two separate semiconductor dies may be within a single integrated circuit (IC) package or in separate packages, coupled for example via circuit board routing, traces, or the like. In addition, the physical unit performs framing/deframing of data packets, performs link training and initialization, and processes data packets for delivery onto, or receipt from, a physical interconnect.Although different implementations are possible, in one embodiment the protocol stack may be that of a conventional personal computer (PC) based communication protocol, such as the Peripheral Component Interconnect Express (PCIe™) communication protocol in accordance with the PCI Express™ Base Specification Version 3.0 (published November 18, 2010) (hereinafter the PCIe™ Specification), a further version that applies protocol extensions, or another such protocol. For the purpose of low power operation, the physical unit can be specifically designed to allow a substantially unchanged PCIe™ protocol stack to merge with this low power physical circuitry. As such, the broad traditional foundation of the PCIe™ communication protocol can be leveraged for ease of incorporation into portable and other non-PC-based form factors operating at low power. The scope of the present invention is not limited in this regard, and in one embodiment this physical unit may be a physical unit compliant with a specification of the Mobile Industry Processor Interface (MIPI) Alliance, a body that sets standards for mobile computing devices, namely the so-called M-PHY Specification Version 1.00.00 of February 8, 2011 (approved by the MIPI Board April 28, 2011) (hereinafter the MIPI Specification). However, other low power physical units can be used, such as units in accordance with other low power specifications, for example for coupling individual dies within a multi-chip package together, or a custom low power solution. As used herein, the term "low power" means a lower level of power consumption than in conventional PC systems, and it can apply to a wide variety of mobile and portable devices. 
As an example, "low power" may mean a physical unit consuming less power than a conventional PCIe™ physical unit.Thus, by re-aggregating the traditional PCIe™ protocol stack with a different type of physical unit, heavily reused legacy components developed for PCIe™ can be incorporated into mobile or other portable or low power platforms.Embodiments also make use of the recognition that existing load/store IO technologies, and in particular PCIe™, were designed to achieve maximum performance in contexts where power efficiency is not a major issue, and therefore do not scale down to low power applications. By combining portions of a conventional load/store protocol stack with low power physical elements, embodiments may preserve the performance benefits of PCIe™ while achieving the best power at the device and platform level.As such, embodiments may be software compatible with the ubiquitous PCIe™ architecture and its large traditional foundation. In addition, embodiments may enable direct reuse of a mobile-design PHY, such as the M-PHY. In this way, low active and idle power can be achieved with efficient power per bit delivered, in a manner friendly to electromagnetic interference/radio frequency interference (EMI/RFI), since the PHY can operate at clock rates whose harmonics do not interfere with associated radios (e.g., typical radio solutions operating at 1.8, 1.9, or 2.4 GHz, or other such radios).Embodiments may further provide architectural improvements including an optimized link training and management mechanism (LTSSM); optimized flow control and retry buffering and management mechanisms; architected protocols for changing the mode of operation of the link; fast hardware support for device state preservation and recovery; and a unified sideband mechanism for link management with optional in-band support.In various embodiments, the PCIe™ transaction and data link layers can be implemented as part of a protocol stack with limited modification, to account for different link speeds and asymmetric links. In addition, modified link training and management can be provided to include support for multi-lane communications, asymmetric link configuration, sideband unification, and dynamic bandwidth scalability. Embodiments may further provide support for bridging between existing PCIe™-based logic and circuits and non-PCIe™-based logic and circuits, such as M-PHY logic and circuits.This layered approach allows existing software stacks (e.g., operating system (OS), hypervisor, and drivers) to run seamlessly across different physical layers. The impact on the data link and transaction layers is minimized and may amount to updating related timers, such as the acknowledgment frequency, the replay timer, and the like.Embodiments can also limit some of the flexibility provided in PCIe™ systems, because much of that flexibility exists to support plug-and-play capabilities. In contrast, embodiments can tailor solutions that minimize the amount of design flexibility because, when incorporated into a given system, such as a system-on-chip (SoC) interconnected with another integrated circuit, the configuration is known and fixed. 
Because the precise configuration is known in advance, when the SoC and connected devices are both attached to the platform, for example soldered to a circuit board of the system, these devices do not require plug-and-play capability, and thus there is no need for the inherent flexibility of PCIe™ or other PC-based communication protocols that allows different devices to be seamlessly incorporated into plug-and-play capable systems.As an example, the SoC can function as a root complex implemented in a first IC and coupled to a second IC, which can be a radio solution that includes one or more of a plurality of wireless communication devices. The scope of such devices can range from devices for local wireless communications, such as a Bluetooth™-based low power short-range communication system or a so-called WiFi™ system according to a given Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard, to higher power wireless systems such as those for cellular communication protocols, e.g., 3G or 4G communication protocols.Referring now to FIG. 1, a high-level block diagram of a protocol stack for a communication protocol in accordance with an embodiment of the present invention is shown. As shown in FIG. 1, the stack 100 may be a combination of software, firmware, and hardware within a semiconductor component, such as an IC, for processing data communications between the semiconductor device and another device coupled thereto. In the embodiment of FIG. 1, the high-level view begins with high-level software 110, which may be various types of software executing on a given platform. Such high-level software may include operating system (OS) software, firmware, application software, and the like. Data to be transmitted passes through the layers of the protocol stack shown in FIG. 1 to the interconnect 140, which may be a given physical interconnect that couples the semiconductor device to another component. As can be seen, a portion of this protocol stack may be part of a conventional PCIe™ stack 120 and may include a transaction layer 125 and a data link layer 128. Typically, the transaction layer 125 is used to generate transaction layer packets (TLPs), which can be request-based or response-based packets separated in time, allowing the link to carry other traffic while the target device gathers data for the response. The transaction layer further handles credit-based flow control. Thus, the transaction layer 125 provides an interface between the device's processing circuitry and the interconnect fabric, namely the data link layer and the physical layer. In this regard, the main responsibilities of the transaction layer are the assembly and disassembly of packets (i.e., transaction layer packets (TLPs)) and the handling of credit-based flow control.In turn, the data link layer 128 can sequence the TLPs generated by the transaction layer and ensure reliable delivery of the TLPs (including handling error checking) and acknowledgment processing between the two endpoints. Therefore, the link layer 128 serves as an intermediate stage between the transaction layer and the physical layer and provides a reliable mechanism for exchanging TLPs between the two components over the link. 
The transmit side of the link layer accepts TLPs assembled by the transaction layer, applies a packet sequence identifier, computes and applies an error detection code (e.g., a cyclic redundancy check (CRC)), and submits the modified TLPs to the physical layer for transmission across the physical link to the external device.After processing in the data link layer 128, data packets are passed to the PHY unit 130. In general, the PHY unit 130 may include a low power PHY 134, which may include both logical and physical (including electrical) sub-layers. In one embodiment, the physical layer represented by the PHY unit 130 physically transmits packets to the external device. The physical layer includes a transmit section that prepares outgoing information for transmission and a receiver section that identifies and prepares received information before passing it to the link layer. Symbols to be serialized and transmitted to the external device are supplied to the transmitter. Serialized symbols from the external device are supplied to the receiver, which transforms the received signals into a bit stream. The bit stream is de-serialized and supplied to a logical sub-block.In one embodiment, the low power PHY 134, which can be a low power PHY specifically developed for this purpose or adapted from another PHY such as an M-PHY, processes the packetized data for transmission along the interconnect 140. As further seen in FIG. 1, a link training and management layer 132 (also referred to herein as a link manager) may also exist within the PHY unit 130. In various embodiments, the link manager 132 may include certain logic that may be implemented according to a communication protocol such as the PCIe™ protocol, as well as proprietary logic to interface the conventional protocol stack of a different protocol, such as the PCIe™ protocol stack described above, with the PHY 134.In the embodiment of FIG. 1, the interconnect 140 can be implemented as differential pairs, that is, pairs of unidirectional lines. In some embodiments, multiple sets of differential pairs can be used to increase bandwidth. Note that the PCIe™ communication protocol requires the number of differential pairs in each direction to be the same. However, according to various embodiments, a different number of pairs can be provided in each direction, which allows for more efficient operation and lower power. The entire aggregated stack and link 140 may be referred to as a mobile PCIe™ interconnect or link. Although shown at this high level in the embodiment of FIG. 1, it is to be understood that the scope of the present invention is not limited thereto. It is also to be understood that the view shown in FIG. 1 is only of the protocol stack, from the high-level software through the transaction layer to the physical layer; various other circuits of a semiconductor device, such as an SoC, that include this stack are not shown.Referring now to FIG. 2, a block diagram of an SoC according to an embodiment of the invention is shown. As shown in FIG. 2, the SoC 200 can be implemented in various types of platforms, ranging from relatively small low power portable devices such as smartphones, personal digital assistants (PDAs), tablet computers, and notebooks, to more advanced SoCs that can be implemented in higher-level systems.As seen in FIG. 2, SoC 200 may include one or more cores 2100-210n. 
Thus, in various embodiments there may be a multicore SoC in which all of the cores are homogeneous cores of a given architecture, such as an in-order or out-of-order processor architecture. Or there may be heterogeneous cores, for example some relatively small, low power cores with an in-order architecture, together with additional cores that may have larger and more complex architectures, such as an out-of-order architecture. A protocol stack enables data communication between one or more of these cores and the rest of the system. As can be seen, this stack can include software 215, which can be higher-level software such as OS, firmware, and application-level software executing on one or more of the cores. In addition, the protocol stack includes a transaction layer 220 and a data link layer 230. In various embodiments, these transaction and data link layers may be of a given communication protocol, such as the PCIe™ protocol. Of course, in other embodiments there may be layers of a different protocol stack, such as one according to a Universal Serial Bus (USB) protocol stack. Moreover, in some implementations the low power PHY circuitry described herein can be multiplexed between alternative existing protocol stacks.Still referring to FIG. 2, this protocol stack in turn can be coupled to a physical unit 240, which can include multiple physical units capable of providing communication via multiple interconnects. In one embodiment, the first physical unit 250 may be a low power PHY unit, which in one embodiment may correspond to an M-PHY according to the MIPI Specification, for providing communication via the main interconnect 280. In addition, a sideband (SB) PHY unit 244 may be present. In the embodiment shown, this sideband PHY unit provides communication via a sideband interconnect 270, which may be a sideband link that conveys certain side information, e.g., at a slower data rate than the main interconnect 280 coupled to the first PHY 250. In some embodiments, each layer of the protocol stack can have a separate sideband coupling to this SB PHY 244 to enable communication along this sideband interconnect.In addition, the PHY unit 240 may further include an SB link manager 242 that can be used to control the SB PHY 244. There may also be a link training and status manager 245, which can be used to adapt a protocol stack of a first communication protocol to a first PHY 250 of a second communication protocol, as well as to provide overall control of the first PHY 250 and the interconnect 280.As further seen, various components may exist in the first PHY 250. More specifically, transmitter and receiver circuits (i.e., TX 253 and RX 254) may be present. In general, such circuitry may be used to perform serialization and deserialization operations and to transmit and receive data via the main interconnect 280. A save state manager 251 may be present and may be used to save configuration and other status information of the first PHY 250 when it is in a low power state. Moreover, there can be an encoder 252 for performing line coding, for example according to the 8b/10b protocol.As can be further seen in FIG. 2, there may be a mechanical interface 258. This mechanical interface 258 may be a given interconnect for providing communications from the root complex 200, and more particularly to/from the first PHY 250 via the main interconnect 280. 
In various embodiments, such mechanical connections can be made using pins of a semiconductor device, for example of a ball grid array (BGA) or other surface mount device, or plated through-hole connections.In addition to these primary communication mechanisms, an additional communication interface may utilize a low power serial (LPS) PHY unit 255, which couples between the core 210 and one or more off-chip devices 260a-c via a separate stack comprising software layer 216, transaction layer 221, and link layer 231. The off-chip devices 260a-c can be various low data rate peripherals such as sensors, accelerometers, temperature sensors, global positioning system (GPS) circuitry, compass circuitry, touchscreen circuitry, keyboard circuitry, mouse circuitry, and the like.It is noted that in various embodiments both the sideband interconnect 270 and the main interconnect 280 can be coupled between the SoC 200 and another semiconductor component, such as another IC, for example a multi-band radio solution.Again, although the illustration of FIG. 2 is at a relatively high level, variations are possible. For example, multiple low power PHYs may be provided to enable higher-rate data communications, e.g., via multiple channels, with each channel associated with a separate PHY.Referring now to FIG. 3, a block diagram of a physical unit according to another embodiment of the present invention is shown. As shown in FIG. 3, the physical unit 300 includes a link training and status manager 310. This status manager can be as described above, namely a set of logic for enabling a protocol stack of a first communication protocol to interface with a physical unit of a second (e.g., different) communication protocol.As further seen in FIG. 3, the link training and status manager 310 may communicate with multiple M-PHYs 3200-320n. By providing more than one such PHY, higher-rate data communications are possible. It is to be noted that although each M-PHY shown in FIG. 3 may include some amount of logic for enabling its own independent communications, overall control of the communications of these different M-PHYs may be via the link training and status manager 310. Also, although multiple M-PHYs are shown in FIG. 3, in other embodiments multiple PHY units of another type can exist, and in addition, multiple heterogeneous PHY units can be provided. Note that each M-PHY unit can be used as part of a unique logical link or in a group associated with a single logical link. Each device typically consumes a single logical link, but in some embodiments a single physical device can consume multiple logical links, for example, to provide dedicated link resources for different functions of a multi-function component.Referring now to FIG. 4, shown is a block diagram illustrating further details of a protocol stack in accordance with an embodiment of the present invention. As shown in FIG. 4, the stack 400 includes various layers including a transaction layer 410, a data link layer 420, and a physical layer 430. As mentioned above, these different layers can be configured using the conventional transaction and data link portions of the PCIe™ protocol stack, or a modified version of such a stack, to accommodate layers of the first communication protocol together with a physical layer of another communication protocol; the physical layer in the embodiment of FIG. 4 may be an M-PHY according to the MIPI Specification.
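Before walking through FIG. 4 in detail, the overall aggregation can be summarized structurally in code. The following is a minimal sketch, not the patent's implementation; all type and field names are hypothetical, chosen only to mirror the layering of FIGS. 1-4 (PCIe™ transaction and data link layers reused on top of a different-protocol PHY that has its own logic layer, LTSSM, and possibly asymmetric lane counts).

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical representation of the aggregated stack of FIGS. 1-4:
 * unmodified upper layers of a first protocol (PCIe(TM)) bound to a
 * physical unit of a second protocol (e.g., M-PHY). */
struct tlp;                                /* transaction layer packet */

struct transaction_layer {                 /* assembles/disassembles TLPs,
                                              credit-based flow control */
    int (*submit)(struct tlp *t);
    unsigned credits_available;
};

struct data_link_layer {                   /* sequence numbers, CRC,
                                              ACK/NAK retry */
    uint16_t next_seq;
    int (*transmit)(const struct tlp *t);
};

struct mphy_lane {                         /* one lane: 8b/10b encoder,
                                              serializer, line driver */
    int (*tx_burst)(const uint8_t *buf, size_t len);
};

struct phy_unit {                          /* logic layer + electrical layer */
    int (*ltssm_train)(void);              /* link training state machine */
    uint8_t (*map_symbol)(uint8_t pcie_sym); /* first-to-second encoding map */
    struct mphy_lane *tx_lanes;            /* transmit lane array; count may */
    struct mphy_lane *rx_lanes;            /* differ from the receive array  */
    unsigned n_tx, n_rx;                   /* (asymmetric link)              */
};

struct converged_stack {                   /* the aggregation itself */
    struct transaction_layer tl;
    struct data_link_layer   dl;
    struct phy_unit          phy;          /* substituted low power PHY */
};
```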
As seen in FIG. 4, with regard to the transmit direction, information arrives at the stack 400 from other circuitry of the SoC, such as a core or other processing logic. After being assembled into transaction packets (which in various embodiments can be packets having, for example, between 1 and 4096 bytes, or a smaller maximum allowed size such as 128 or 256 bytes), the assembled data packets are provided to a flow controller 414, which determines whether sufficient flow control credits are available for the next one or more TLPs queued for transmission and controls the injection of data packets into the data link layer 420. More specifically, the injected packets are provided to an error detector and sequencer 422, which in one embodiment may generate the TLP sequence number and the LCRC. As further seen, the data link layer 420 further includes a transmit message mechanism 426, which generates DLLPs for link management functions and is coupled to a data link transmit controller 425, which is used for the flow control and data link integrity (ACK/NAK) mechanisms; note that this can be subdivided so that these functions are implemented in different logic blocks.As further seen, the processed data packets are provided to a retry buffer 424, which holds a copy of each TLP until it is acknowledged by the component on the other side of the link (note that in practice this buffering can occur at a higher portion of the stack, within or above the assembler 412); the packets are stored in corresponding entries until they are selected for transmission to the physical layer 430 via a data/message selector 428. In general, the above transaction and data link layers can operate in accordance with conventional PCIe™ protocol stack circuits, some of which are further described below.In contrast, with respect to the physical layer 430, greater modifications are made to some of the logical components of this layer relative to the PCIe™ protocol stack, to provide an interface to the actual physical portion of a physical unit of another communication protocol. As seen, outgoing data packets are applied to a frame generator 432, which adds physical layer framing symbols to the data packets and provides them to a width/position mapper 434, which shifts the data path to adjust the data path width as necessary for external transfer, and in turn to a trainer and skip sequencer 436, which can be used to perform link training and skip ordered set sequencing. As can be seen, the frame generator 432, trainer/sequencer 436, and data/sequence selector 438 may all be coupled to a physical layer transmit controller 435, which comprises the transmit portion of the LTSSM and associated logic. Block 436 is logic for generating physical layer transmissions, such as training sequence (TS) and skip ordered sets. In this way, framed packets may be selected and provided to the physical circuitry, which performs encoding and serializing and drives serialized signals corresponding to the processed data packets onto the physical interconnect. In one embodiment, the mapping of symbol differences between the different communication protocols may be performed in the frame generator 432.As seen, multiple physical lanes can be provided for this physical interconnect. 
In the illustrated embodiment, each physical lane can include its own independent PHY unit transmit circuit 4450-445j, each of which in one embodiment may be part of an M-PHY unit according to the MIPI Specification. As described herein, there may be different numbers of transmitters and receivers, unlike PCIe™, which requires the numbers of transmitters and receivers to match. Thus, as can be seen, each transmit circuit 445 can include an encoder for encoding symbols according to 8b/10b encoding, a serializer for serializing the encoded symbols, and a driver for driving the signals onto the physical interconnect. As further seen, each lane may be associated with logic 4400-440j, which may be logic circuitry according to the MIPI Specification for the M-PHY, for managing physical communications via the corresponding lane.Note that these multiple lanes can be configured to operate at different rates, and embodiments can include different numbers of such lanes. In addition, it is possible to have different numbers of lanes, and different lane speeds, in the transmit and receive directions. Thus, while a given logic unit 440 controls the operation of the corresponding lane of the PHY 445, it is to be understood that the physical layer transmit controller 435 can be used to control the overall transmission of information over the physical interconnect. Note that in some cases some very basic functions are performed by the distinct logic associated with each lane; multiple LTSSM instances can be provided for situations in which lanes can be assigned to more than a single link; for a trained link, there is a single LTSSM in each of the components that controls both the transmit and receive sides. This overall control can include power control, link speed control, link width control, initialization, and the like.Still referring to FIG. 4, incoming information received via the physical interconnect may similarly pass through the physical layer 430, the data link layer 420, and the transaction layer 410 via the receive mechanisms of these layers. In the embodiment shown in FIG. 4, each PHY unit may further include receive circuits 4550-455k, one of which may be present for each receive lane of the physical link. It is to be noted that the numbers of receiver circuits 455 and transmitter circuits 445 differ in this embodiment. As can be seen, this physical circuitry can include an input buffer for receiving incoming information, a deserializer to deserialize the information, and a decoder that can be used to decode symbols transmitted with 8b/10b encoding. As further seen, each lane may be associated with a logic unit 4500-450k, which may be logic circuitry according to a given specification (e.g., the MIPI Specification for the M-PHY) for managing physical communication via the corresponding lane.The decoded symbols may then be provided to the logic portion of the physical layer 430, which, as seen, may include an elastic buffer 460 that accommodates the clock differential between this component and the other component on the link and stores the incoming decoded symbols; note that its position may be shifted in various embodiments, for example below the 8b/10b decoder, or combined with the lane deskew buffer. 
In turn, this information may be provided to a width/position mapper 462, and from there to a lane deskew buffer 464, which performs deskew across multiple lanes; buffer 464 is capable of handling inter-lane signal skew differences to realign bytes in multi-lane situations. In turn, the deskewed information may be provided to a frame processor 466, which may remove the framing present in the incoming data. As seen, a physical layer receive controller 465 may couple to and control the elastic buffer 460, the mapper 462, the deskew buffer 464, and the frame processor 466.Still referring to FIG. 4, the recovered data packets may be provided to a receive message mechanism 478 and to an error detector, sequence checker, and link level retry (LLR) requester 475. This circuitry can perform error checks on incoming packets, for example by performing a CRC check, performing a sequence check, and requesting link-level retries of erroneously received packets. Both the receive message mechanism 478 and the error detector/requester 475 may be under the control of a data link receive controller 480.Still referring to FIG. 4, the data packets processed in unit 475 may be provided to the transaction layer 410, and more specifically to a flow controller 485, which performs flow control on these data packets and provides them to a packet interpreter 495. The packet interpreter 495 interprets the data packets and forwards them to a selected destination, such as a given core or other logic of the receiver. Although shown at this high level in the embodiment of FIG. 4, it is to be understood that the scope of the present invention is not limited thereto.Note that the PHY 440 may use the same 8b/10b encoding for transmission as that supported by PCIe™. The 8b/10b encoding scheme provides special symbols that are distinct from the data symbols used to represent characters. These special symbols are used for the various link management mechanisms described in the Physical Layer chapter of the PCIe™ Specification. The use of additional special symbols by the M-PHY is described in the MIPI M-PHY specification. An embodiment may provide a mapping between PCIe™ and MIPI M-PHY symbols.Referring now to Table 1, an example mapping of PCIe™ symbols to M-PHY symbols according to one embodiment of the present invention is shown; this table shows the mapping of special symbols for an aggregated protocol stack according to one embodiment of the present invention.The 8b/10b decoding rules are the same as defined in the PCIe™ specification. The only exception to the 8b/10b rules is when a TAIL OF BURST is detected, which is a specific sequence that violates the 8b/10b rules. According to various embodiments, the physical layer 430 can provide the data link layer 420 with notification of any errors encountered during a TAIL OF BURST.In one embodiment, the framing of symbols and their application to lanes can be as defined in the PCIe™ specification, and data scrambling can likewise be the same as defined in the PCIe™ specification. 
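The content of Table 1 is not reproduced in this text, so the following is only a hypothetical illustration of the kind of special-symbol mapping contemplated. The PCIe™ control symbols and their 8b/10b codes are the standard ones; the M-PHY marker names follow the MIPI naming convention, but the assignments below, other than the COM-to-MK0 pairing suggested by the lane-to-lane deskew discussion later in this document, are placeholders rather than the actual Table 1 entries.

```c
#include <stdint.h>

/* Hypothetical special-symbol map between the two encodings. The PCIe(TM)
 * symbols and their 8b/10b control bytes are standard; the M-PHY column is
 * a placeholder except for COM -> MK0 (both serve as the deskew/alignment
 * symbol, per the deskew discussion in this document). */
struct symbol_map_entry {
    const char *pcie_name;   /* PCIe(TM) special symbol */
    uint8_t     k_code;      /* 8b/10b control character byte value */
    const char *mphy_name;   /* mapped M-PHY marker (assumed) */
};

static const struct symbol_map_entry symbol_map[] = {
    { "COM (K28.5)", 0xBC, "MK0" },  /* alignment/deskew marker */
    { "SKP (K28.0)", 0x1C, "MK1" },  /* placeholder assignment */
    { "STP (K27.7)", 0xFB, "MK2" },  /* placeholder assignment */
    { "SDP (K28.2)", 0x5C, "MK3" },  /* placeholder assignment */
    { "END (K29.7)", 0xFD, "MK4" },  /* placeholder assignment */
};
```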
One exception: data symbols transmitted in the PREPARE phase of communications according to the MIPI Specification are not scrambled.With respect to link initialization and training, the link manager may provide configuration and initialization of links having one or more lanes, as discussed above, support for normal data transfer, support for state transitions when recovering from link errors, and restart of ports from low power states.To enable such operation, the following physical and link related features may be known in advance (e.g., prior to initialization): the PHY parameters, including, e.g., the initial link speed and the supported speeds, and the initial link width and the supported link widths.In one embodiment, training may include a variety of operations. Such operations may include initializing the link at the configured link speed and width, per-lane bit lock, per-lane symbol lock, lane polarity, and lane-to-lane deskew for multi-lane links. Thus, training can detect the lane polarity and adjust accordingly. However, it should be noted that link training according to an embodiment of the present invention need not include link data rate and width negotiation or link speed and width degradation. On the contrary, as described above, both entities know the initial link width and speed in advance of link initialization, and thus the time and computational cost associated with negotiation can be avoided.The PCIe™ ordered sets can be used with the following modifications: the TS1 and TS2 ordered sets are retained for IP reuse, but many of the fields of these training ordered sets are ignored. Also, fast training sequences are not used. The electrical idle ordered set (EIOS) can be retained for IP reuse, as can the skip (SKP) ordered set, although the SKP ordered set can be transmitted at a different frequency than in the PCIe™ specification. Also note that the start of data stream ordered set and symbols can be the same as in the PCIe™ specification.Two indications can be signaled between the ends of the link: (1) presence, which can be used to indicate that there is an active PHY on the remote side of the link; and (2) configuration ready, which is triggered to indicate completion of PHY parameter configuration and that the PHY is prepared for operation in the configured profile. In one embodiment, this information can be signaled via a unified sideband signal in accordance with an embodiment of the present invention.For purposes of controlling electrical idle, the PHY has a TAIL OF BURST sequence that indicates that the transmitter is entering an electrical idle state. In one embodiment, the sideband channel can be used to signal an exit from electrical idle. Note that this indication can be combined with the PHY squelch break mechanism. An EIOS sequence can be transmitted to indicate that an electrical idle state is being entered.In some embodiments, no fast training sequence (FTS) is defined. Instead, the PHY can use a specific physical layer sequence for exit from the stall/sleep states to the burst state, which can be used to establish bit lock, symbol lock, and lane-to-lane deskew. A small number of FTSs can be defined as a robust symbol sequence. The start of a data stream ordered set, as well as link error recovery, can be per the PCIe™ specification.Regarding the link data rate, in various embodiments the initial data rate upon link initialization may be a predetermined data rate. A data rate change from this initial link speed can occur by passing through the recovery state. 
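As noted above, the PHY parameters (initial and supported speeds and widths) are known to both sides in advance rather than negotiated, so they can simply be built into both ends of the link. The following is a minimal sketch with hypothetical names and illustrative values; the document does not prescribe any particular representation.

```c
#include <stdbool.h>

/* Hypothetical pre-known link profile: both components are built with the
 * same values, so training performs no width or speed negotiation. */
struct link_profile {
    unsigned initial_speed_mbps;  /* initial link speed */
    unsigned supported_speeds;    /* bitmask of supported speeds */
    unsigned tx_width;            /* initial transmit lane count */
    unsigned rx_width;            /* initial receive lane count; may differ
                                     from tx_width (asymmetric link) */
};

/* Example: a link fixed at x2 transmit / x1 receive. Values are
 * illustrative only. */
static const struct link_profile profile = {
    .initial_speed_mbps = 1250,
    .supported_speeds   = 0x3,
    .tx_width           = 2,
    .rx_width           = 1,
};

/* Training then only has to bring up exactly this configuration. */
static bool validate_profile(const struct link_profile *p)
{
    return p->tx_width >= 1 && p->rx_width >= 1;
}
```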
Embodiments may support asymmetric link data rates, in which different data rates are allowed in opposite directions.In an embodiment, the supported link widths may be those of the PCIe™ specification. In addition, as described above, because the link width is predetermined, embodiments need not support a protocol for negotiating link width, and thus link training may be simplified. Of course, embodiments may provide support for asymmetric link widths in opposite directions. Likewise, the initial link width and initial data rate configured for each direction of the link can be known in advance of training.With respect to the ports of a physical unit, it is not required that an xN port be capable of forming both an xN link (where N can be 32, 16, 12, 8, 4, 2, or 1) and an x1 link, and the ability of an xN port to form links of arbitrary widths between N and 1 is optional. An example of this behavior is an x16 port that can be configured only as a single link, where the width of the link can be configured as the required widths of x16 and x1, as well as x12, x8, x4, and x2. In this way, a designer seeking to implement a device using a protocol stack in accordance with an embodiment of the present invention can connect ports between components in a manner that allows two different components to meet the above requirements. The behavior is undefined if ports between components are connected in ways that do not meet the intended use as defined by the components' port descriptions/data sheets.In addition, the ability to split one port into two or more links is not precluded. If such support is suitable for a given design, the port can be configured to support a particular width during training. An example of this behavior would be an x16 port that is capable of being configured as two x8 links, four x4 links, or sixteen x1 links (see the sketch following this discussion).When using 8b/10b encoding, the unambiguous lane-to-lane deskew mechanism of the PCIe™ specification is the COM symbol of an ordered set received during a training sequence or SKP ordered set, since ordered sets are transmitted simultaneously on all lanes of a configured link. Here, the MK0 symbols transmitted during the HS-BURST synchronization sequence can be used for lane-to-lane deskew.As outlined above with reference to FIG. 4, the link training and status manager can be configured to perform various operations, including adapting the upper layers of the PCIe™ protocol stack to a lower PHY unit of a different protocol. In addition, this link manager can configure and manage single or multiple lanes, and can include support for asymmetric link bandwidth, a state machine consistent with the PCIe™ transaction and data link layers, link training, optional asymmetric link power-down, and control of sideband signals for robust communications. Therefore, embodiments permit implementing the PCIe™ transaction and data link layers with limited modifications to account for different link speeds and asymmetric links. In addition, with a link manager according to an embodiment of the present invention, multi-lane support, asymmetric link configuration, sideband unification, and dynamic bandwidth scaling can be implemented, while bridging between the different communication protocol layers.
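As an illustration of the port-splitting behavior described above, the check below accepts only configurations in which a port is divided into equal-width links that exactly fill it, matching the x16 examples given. The function name and the equal-width assumption are illustrative, not from the source.

```c
#include <stdbool.h>

/* Hypothetical check for splitting a port into equal-width links,
 * e.g., an x16 port as two x8, four x4, or sixteen x1 links. */
static bool valid_port_split(unsigned port_width,
                             unsigned n_links,
                             unsigned link_width)
{
    if (n_links == 0 || link_width == 0)
        return false;
    /* All links are the same width and together exactly fill the port. */
    return n_links * link_width == port_width;
}

/* valid_port_split(16, 2, 8)  -> true  (two x8 links)
 * valid_port_split(16, 4, 4)  -> true  (four x4 links)
 * valid_port_split(16, 16, 1) -> true  (sixteen x1 links)
 * valid_port_split(16, 3, 4)  -> false (does not fill the port)  */
```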
Referring now to FIG. 5, a state diagram 500 for a link training state machine is shown, which can be part of a link manager according to an embodiment of the invention. As shown in FIG. 5, link training can begin with the detect state 510. This state is entered on power-on reset and applies to both upstream and downstream ports. Upon reset, all configured lanes can transition to the HIBERN8 state, in which each end of the link is able to use the sideband channel to signal, for example, the PRESENCE indication. Note that in this detect state, a high-impedance signal, the DIF-Z signal, can be driven on all lanes.Therefore, when a PRESENCE indication has been both signaled and received, control passes from the detect state 510 to the configuration state 520, with this high impedance still driven on all configured lanes. In the configuration state 520, the PHY parameters can be configured; once this is done for all configured lanes at each end of the link, a configuration ready signal (CFG-RDY) can be indicated, for example using the sideband interconnect, while high impedance is maintained on all lanes.Thus, once this configuration ready indication is sent and received via the sideband interconnect, control passes to the stall state 530. That is, in this L0.STALL state, the PHY transitions to the STALL state and continues to drive high impedance on all configured lanes. As seen, control can then pass to the active state (L0 state 550), the low power state (L1 state 540), the deep low power state (L1.OFF state 545), or back to the configuration state 520, depending on whether data is available for transmission or reception.In the STALL state, the negative drive signal DIF-N can be transmitted on all configured lanes. Then, when directed by the initiator, the BURST sequence can begin. After the MARKER0 (MK0) symbol is transmitted, control passes to the active state 550.In one embodiment, the receiver may detect the exit from the STALL state on all configured lanes and perform bit lock and symbol lock, for example according to the MIPI Specification. In an embodiment with a multi-lane link, this MK0 symbol can be used to establish lane-to-lane deskew.Conversely, when directed to a low power state (i.e., L1 state 540), all configured lanes may transition to the SLEEP state. When directed to a deeper low power state (i.e., L1.OFF state 545), all configured lanes can transition to the HIBERN8 state. Finally, when directed back to the configuration state, all configured lanes similarly transition to the HIBERN8 state.Still referring to FIG. 5, for active data transfer, control passes to the active state 550. In particular, this is the state in which the link and transaction layers exchange information using data link layer packets (DLLPs) and TLPs. In this way payload transmission takes place, and at the end of such transmissions TAIL OF BURST symbols can be transmitted.As can be seen, control can pass from this active state back to the STALL state 530, to the recovery state 560 (e.g., in response to a receiver error, or when otherwise so directed), or to a deeper low power (e.g., L2) state 570.To return to the stall state, the transmitter can send an EIOS sequence on all configured lanes followed by a TAIL OF BURST indication.If an error occurs, or when otherwise directed, control can also pass to the recovery state 560. Here, the transition to recovery causes all configured lanes to enter the STALL state in both directions. To do this, a GO TO STALL indication can be sent on the sideband interconnect, and the sender of this indication then waits for a response. 
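A compact way to summarize the transitions just described is as a state machine in code. The sketch below is illustrative only (hypothetical names, simplified events); it covers the detect-through-recovery flow of FIG. 5, including the sideband GO TO STALL handshake used to coordinate simultaneous entry into STALL.

```c
/* Hypothetical, simplified rendering of the FIG. 5 state machine. */
enum ltssm_state { DETECT, CONFIG, STALL, ACTIVE_L0, RECOVERY,
                   LP_L1, LP_L1_OFF, LP_L2 };

enum ltssm_event { PRESENCE_SENT_AND_RECEIVED, CFG_RDY_SENT_AND_RECEIVED,
                   MK0_SENT, EIOS_AND_TAIL_OF_BURST, RX_ERROR,
                   GO_TO_STALL_SENT_AND_RECEIVED, ENTER_L1, ENTER_L1_OFF,
                   ENTER_L2, WAKE };

static enum ltssm_state ltssm_step(enum ltssm_state s, enum ltssm_event e)
{
    switch (s) {
    case DETECT:    /* lanes in HIBERN8, DIF-Z driven; PRESENCE on sideband */
        return e == PRESENCE_SENT_AND_RECEIVED ? CONFIG : DETECT;
    case CONFIG:    /* PHY parameters programmed; CFG-RDY on sideband */
        return e == CFG_RDY_SENT_AND_RECEIVED ? STALL : CONFIG;
    case STALL:     /* DIF-N on all configured lanes */
        if (e == MK0_SENT)     return ACTIVE_L0;  /* BURST begun */
        if (e == ENTER_L1)     return LP_L1;      /* lanes -> SLEEP */
        if (e == ENTER_L1_OFF) return LP_L1_OFF;  /* lanes -> HIBERN8 */
        return STALL;
    case ACTIVE_L0: /* DLLP/TLP exchange */
        if (e == EIOS_AND_TAIL_OF_BURST) return STALL;
        if (e == RX_ERROR)               return RECOVERY;
        if (e == ENTER_L2)               return LP_L2;
        return ACTIVE_L0;
    case RECOVERY:  /* GO TO STALL handshake over the sideband */
        return e == GO_TO_STALL_SENT_AND_RECEIVED ? STALL : RECOVERY;
    case LP_L1:
    case LP_L1_OFF:
        return e == WAKE ? STALL : s;
    default:
        return s;
    }
}
```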
When this GO TO STALL indication has been both sent and received on the sideband interconnect, control passes back to the STALL state 530. It should be noted that the recovery state therefore uses a sideband protocol to coordinate simultaneous entry into the STALL state.For the low power states L1 and L1.OFF, operation is according to states 540 and 545. In particular, control passes from the STALL state to the L1 low power state 540 so that the PHY can be placed in the SLEEP state. In this state, the negative drive signal, i.e., the DIF-N signal, can be driven on all configured lanes. When directed to exit this state, control passes back to the STALL state, e.g., upon the PRESENCE signal being signaled on the sideband interconnect.As also seen, the deeper L1.OFF state can be entered when all L1.OFF conditions are satisfied. In one embodiment, these conditions may include full power gating or power removal for the PHY unit. In this deeper power state, the PHY can be placed in the HIBERN8 state, and the high-impedance signal is driven on all configured lanes. To exit this state, control passes back to the STALL state by driving DIF-N on all configured lanes.As further seen in FIG. 5, there may be an additional, even deeper low power state (L2) 570, which can be entered from the active state when the device is prepared to have power removed. In one embodiment, this state may be the same as in the PCIe™ specification.Referring now to Table 2, a mapping between LTSSM states according to the PCIe™ specification and corresponding M-PHY states according to an embodiment of the present invention is shown.Table 2
LTSSM state | M-PHY state | Details
Detect, Polling | SAVE | Via state transitions in the SAVE sub-states
Configuration | BURST | BURST (PREP, SYNC) sub-states
Recovery | BURST/SLEEP/STALL | Can be in the BURST state, but will transition to BURST via SLEEP/STALL
L0 | BURST (payload) | BURST mode, exchanging transactions
L0s | STALL | STALL state
L1 | SLEEP | SLEEP state
L1.OFF | HIBERN8 | HIBERN8 state
L2 | UNPOWERED | UNPOWERED state
Disabled | DISABLED | DISABLED state
Loopback | No action | Link speed can be changed from Configuration back to Loopback
Hot Reset | INLINE RESET | INLINE RESET state
As described above with reference to FIG. 2, an embodiment provides a unified sideband mechanism that can be used for link management, with optional in-band support. In this manner, using sideband circuitry and interconnects, link management and control can occur independent of the higher speed (and higher power) circuitry used for the physical layer of the main interconnect. Furthermore, this sideband channel can be used to achieve reduced power consumption when portions of the PHY unit associated with the main interconnect are powered down. Moreover, this unified sideband mechanism can be used prior to training of the main interconnect, and can also be used if there is a failure on the main interconnect.Still further, with this unified sideband mechanism there can be a single interconnect in each direction, such as a differential pair, thereby reducing pin count and enabling new capabilities. Embodiments may also enable faster, more robust clock/power gating and use this link to eliminate ambiguities of conventional protocols such as PCIe™ sideband signaling.
The scope of the present invention is not limited in this regard, and in various embodiments the sideband interconnect (e.g., sideband interconnect 270 of FIG. 2) can be implemented as a single-wire bidirectional sideband signal, a two-wire set of unidirectional signals, a low speed in-band signaling mechanism such as that available using the M-PHY in low power pulse-width modulation (PWM) mode, or an in-band high speed signaling mechanism such as physical layer ordered sets or DLLPs.As examples, and not for purposes of limitation, various physical layer approaches may be supported. The first method, when using a sideband interconnect, can be a single-wire bidirectional sideband signal, which provides a minimum number of pins. In some embodiments, this signal can be multiplexed onto an existing sideband signal such as PERST#, WAKE#, or CLKREQ#. The second method can be a two-wire set of unidirectional signals, one in each direction, which can be simpler and somewhat more efficient than the single-wire approach at the expense of an additional pin. This implementation can be multiplexed onto existing sidebands, such as PERST# for the host-to-device direction and CLKREQ# for the device-to-host direction (in this example, maintaining the existing signal directions simplifies dual-mode implementations). The third method may be a low speed in-band signaling mechanism, such as the M-PHY LS PWM mode, which reduces the pin count relative to the sideband mechanisms and may still support similar low power levels. Because this mode of operation is mutually exclusive with high speed operation, it can be combined with high speed in-band mechanisms such as physical layer ordered sets or DLLPs. Although the high speed in-band approach is not as low power, it maximizes commonality with existing high speed IOs; when combined with low speed in-band signaling, this approach provides a good low power solution.To implement one or more of these configurations in a given system, a semantic layer can be provided to determine the meaning of the information exchanged at the physical layer, and above it a policy layer, which uses that meaning to determine device/platform level actions and reactions. In one embodiment, these layers may exist in the SB PHY unit.By providing this tiered approach, embodiments allow for the inclusion of sideband mechanisms (which may be preferred in certain embodiments due to simplicity and/or low power operation) and in-band mechanisms (which may be preferred in other implementations, for example to avoid the need for additional pins).In one embodiment, multiple sideband signals can be packed, for example via the semantic layer, into a single data packet for communication via the unified sideband mechanism (or an in-band mechanism). Table 3 below shows the various signals that may exist in one embodiment. In the table, the logical direction of each signal is shown by an arrow, where an up arrow indicates the direction toward the host (e.g., root complex), and a down arrow indicates the direction toward the device (e.g., a peripheral device such as a radio solution).Table 3
Signal | Logical direction
Device present | ↑
Power good | ↓
Power | ↓
Reference clock good | ↓
Fundamental reset | ↓
Configuration ready | ↑↓
Ready to train | ↑↓
Start training | ↑↓
L1pg request | ↑↓
L1pg reject | ↑↓
L1pg grant | ↑↓
OBFF CPU active | ↓
OBFF DMA | ↓
OBFF idle | ↓
Wake | ↑
Acknowledge handshake for receipt | ↑↓
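Since each of these signals is essentially one bit plus a direction, the semantic layer can pack them into a single small packet, as the surrounding text suggests. The following layout is purely illustrative; the document does not define an actual packet format, and all field names are hypothetical.

```c
#include <stdint.h>

/* Hypothetical packing of a subset of the Table 3 signals into one
 * sideband packet; each signal is a 1-bit indicator, as the text
 * describes. */
enum sb_signal_bit {
    SB_DEVICE_PRESENT  = 1u << 0,   /* up   */
    SB_POWER_GOOD      = 1u << 1,   /* down */
    SB_REFCLK_GOOD     = 1u << 2,   /* down */
    SB_FUND_RESET      = 1u << 3,   /* down */
    SB_CFG_READY       = 1u << 4,   /* up/down */
    SB_READY_TO_TRAIN  = 1u << 5,   /* up/down */
    SB_L1PG_REQUEST    = 1u << 6,   /* up/down */
    SB_L1PG_GRANT      = 1u << 7,   /* up/down */
    SB_OBFF_CPU_ACTIVE = 1u << 8,   /* down */
    SB_WAKE            = 1u << 9,   /* up   */
    SB_ACK             = 1u << 10,  /* up/down: receipt handshake */
};

struct sb_packet {
    uint16_t signals;    /* OR of sb_signal_bit values */
};

/* Example: the host conveys power good, reference clock good, and
 * fundamental reset status in a single packet. */
static struct sb_packet make_boot_packet(void)
{
    struct sb_packet p = { .signals = SB_POWER_GOOD | SB_REFCLK_GOOD |
                                      SB_FUND_RESET };
    return p;
}
```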
As seen, operation begins with a pre-boot state 610 in which a presence signal can be transmitted. Note that this presence signal may be as described above with respect to the link management operations. Control then passes to a boot state 620, where various signals, namely a power good signal, a reset signal, a reference clock status signal, and a ready-to-train signal, may be transmitted. Note that all of these signals can be transmitted via a single data packet, where each of these signals may correspond to an indicator or a field (eg, a 1-bit indicator) of the data packet.

Still referring to FIG. 6, control next passes to an active state 630, in which the system may be active (eg, S0), a corresponding device (eg, the downstream device) may be in an active device state (eg, D0), and the link may be in an active, standby, or low power state (eg, L0, L0s, or L1). As can be seen, various signals can be transmitted in this state, including the OBFF signals, the clock request signal, and the reference clock status signal, among others.

Next, control may pass to a low power state 640, eg, after the above signal transmissions have been performed. As can be seen, in this low power state 640 the system may be active while the device may be in a relatively low latency low power state (eg, D3hot). In addition, the link may be in a given low power state (eg, L2 or L3). As seen, in these states the signals transmitted via unified sideband data packets may include wake-up signals, reset signals, and power good signals.

The system may also enter a deeper second low power state 650, eg, when the system is in the S0 state, the device is in the D3cold state, and the link is similarly in the L2 or L3 state. As can be seen, the same wake-up, reset, and power good signals can be transmitted in this state.

As also seen in FIG. 6, a still lower power state 660 combines a system low power state (eg, S3), a device low power state (eg, D3cold), and the same link low power states (L2 or L3). While this particular set of sideband information is conveyed in the embodiment of FIG. 6, it is to be understood that the scope of the present invention is not limited thereto.

The embodiments thus provide a layered structure with the ability to balance simplicity and low latency against flexibility. In this way, the existing sideband signals and additional sideband signals can be replaced with a smaller number of signals, and future expansion of the sideband mechanism can be achieved without adding more pins.

Referring now to FIG. 7, a flowchart of a method according to an embodiment of the present invention is shown. As shown in FIG. 7, method 700 may be used to transfer data via an aggregated protocol stack that includes an upper layer of one communication protocol and a lower layer, such as a physical layer, of a different communication protocol. In the example shown, the aggregated protocol stack is assumed to be as described above, ie, upper transaction and data link layers of the PCIe™ protocol and a physical layer of a different specification (eg, a MIPI specification). Of course, there may be additional logic that enables the two communication protocols to be aggregated into a single protocol stack, such as the logic and circuitry discussed above with respect to FIG. 4.

As seen in FIG. 7, method 700 can begin by receiving a first transaction in a protocol stack of a first communication protocol (block 710). For example, various logic, such as a core, the root complex, other execution engines, etc., may seek to send information to another device.
Therefore, this information can be passed to the transaction layer. As seen, control passes to block 720, where the transaction can be processed and provided to the logic portion of the PHY of the second communication protocol. Such processing may include the various operations discussed above with respect to the flow of FIG. 4, in which operations such as receiving data, performing flow control, link operations, packing operations, etc., can occur. In addition, various operations that provide data link layer packets to the PHY can occur. Next, control passes to block 730, where this first transaction can be translated into a second format transaction in the logic portion of the PHY. For example, any conversion of symbols can be performed (when needed). In addition, a variety of translation operations can be performed to translate the transaction into a format for transmission over the link. Thus, control can pass to block 740, where this second format transaction can be transmitted from the PHY to the device via the link. As an example, the second format transaction can be serialized data after line encoding, serialization, or the like. Although shown at this high level in the embodiment of FIG. 7, it is to be understood that the scope of the invention is not limited thereto.

Referring now to FIG. 8, a block diagram of components present in a computer system in accordance with an embodiment of the present invention is shown. As shown in FIG. 8, system 800 can include many different components. These components can be implemented as ICs, portions thereof, discrete electronic devices, or other modules adapted to a circuit board, such as a motherboard or add-in card of a computer system, or as components otherwise incorporated within a chassis of a computer system. Also note that the block diagram of FIG. 8 is intended to illustrate a high-level view of many of the components of a computer system. However, it is to be understood that additional components may be present in some embodiments and that, in addition, different arrangements of the components shown may occur in other embodiments.

As seen in FIG. 8, the processor 810, which may be a low-power multi-core processor socket such as an ultra-low voltage processor, may act as a main processing unit and central hub for communicating with the various components of the system. Such a processor can be implemented as a SoC. In one embodiment, the processor 810 may be an Intel® Core™-based processor such as an i3, i5, or i7, or another such processor available from Intel Corporation of Santa Clara, California. However, it is to be understood that other low-power processors, such as those available from Advanced Micro Devices (AMD) of Sunnyvale, California, an ARM-based design, or a MIPS-based design from MIPS Technologies of Sunnyvale, California, or their licensees or adopters, may instead be present in other embodiments.

The processor 810 may communicate with a system memory 815, which in an embodiment can be implemented via a plurality of memory devices to provide a given amount of system memory. As an example, the memory can be configured according to a Joint Electron Devices Engineering Council (JEDEC) low power double data rate (LPDDR) design, such as the current LPDDR2 standard according to JEDEC JESD 209-2E (published April 2009), or a next generation LPDDR standard, referred to as LPDDR3, that will offer extensions to LPDDR2 to increase bandwidth. As an example, 2/4/8 gigabytes (GB) of system memory may be present and can be coupled to the processor 810 via one or more memory interconnects.
In various implementations, individual memory devices can have different package types, such as single die package (SDP), dual die package (DDP), or quad die package (QDP). These devices, in some embodiments, can be soldered directly onto the motherboard to provide a low profile solution, while in other embodiments the devices can be configured as one or more memory modules that in turn can couple to the motherboard by a given connector.

To provide persistent storage of information, such as data, applications, one or more operating systems, and so forth, a mass storage 820 may also be coupled to the processor 810. In various embodiments, such mass storage may be implemented via an SSD in order to achieve a thinner and lighter system design and to improve system responsiveness. However, in other embodiments, the mass storage may be implemented primarily using a hard disk drive (HDD) with a smaller amount of SSD storage acting as an SSD cache, to enable non-volatile storage of context state and other such information during power-down events so that a quick power-on can occur on restarting of system activities. As also shown in FIG. 8, a flash memory device 822 may be coupled to the processor 810, for example via a serial peripheral interface (SPI). This flash memory device can provide non-volatile storage of system software, including basic input/output software (BIOS) and other firmware of the system.

Various input/output (IO) devices may be present within the system 800. The embodiment of FIG. 8 specifically shows a display 824, which may be a high resolution LCD or LED panel disposed within a lid of the chassis. This display panel may also provide a touch screen 825, eg, adapted externally over the display panel, such that via a user's interaction with this touch screen, user inputs can be provided to the system to enable desired operations, eg, with regard to the display of information, the accessing of information, and so forth. In one embodiment, the display 824 may be coupled to the processor 810 via a display interconnect that can be implemented as a high performance graphics interconnect. The touch screen 825 may be coupled to the processor 810 via another interconnect, which in one embodiment can be an I2C interconnect. As further shown in FIG. 8, in addition to the touch screen 825, user input via touch gestures can also occur via a touch pad 830, which can be configured within the chassis and can also be coupled to the same I2C interconnect as the touch screen 825.

For perceptual computing and other purposes, various sensors may be present within the system and be coupled to the processor 810 in different ways. Certain inertial and environmental sensors may be coupled to the processor 810 through a sensor hub 840 (eg, via an I2C interconnect). In the embodiment shown in FIG. 8, these sensors may include an accelerometer 841, an ambient light sensor (ALS) 842, a compass 843, and a gyroscope 844. In one embodiment, other environmental sensors may include one or more thermal sensors 846 that may be coupled to the processor 810 via a system management bus (SMBus). It is also to be understood that one or more of the sensors may be coupled to the processor 810 via an LPS link in accordance with an embodiment of the present invention.

As is also seen in FIG. 8, various peripheral devices may be coupled to the processor 810 via a low pin count (LPC) interconnect. In the illustrated embodiment, various components can be coupled through an embedded controller 835.
These components can include a keyboard 836 (eg, coupled via a PS2 interface), a fan 837, and a thermal sensor 839. In some embodiments, the touch pad 830 may also be coupled to the EC 835 via a PS2 interface. In addition, a security processor, such as a Trusted Platform Module (TPM) 838 in accordance with the Trusted Computing Group (TCG) TPM Specification Release 1.2 (Oct. 2, 2003), may also be coupled to the processor 810 via this LPC interconnect.

The system 800 can communicate with peripheral devices in a variety of manners, including wirelessly. In the embodiment shown in FIG. 8, various wireless modules are present, each of which can correspond to a radio configured for a particular wireless communication protocol. One manner for wireless communication in a short range, such as a near field, may be via a Near Field Communication (NFC) unit 845, which in one embodiment may communicate with the processor 810 via an SMBus. Note that, via this NFC unit 845, devices immediately adjacent to each other can communicate. For example, a user may enable the system 800 to communicate with another portable device, such as the user's smartphone, by adapting the two devices together in close relation. Wireless power transfer can also be performed using the NFC system.

As further seen in FIG. 8, additional wireless units can include other short-range wireless engines, including a WLAN unit 850 and a Bluetooth unit 852. Using the WLAN unit 850, Wi-Fi™ communications in accordance with a given Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard can be realized, while via the Bluetooth unit 852 short-range communications via the Bluetooth protocol can occur. These units may communicate with the processor 810 via, for example, a USB link or a universal asynchronous receiver transmitter (UART) link. Alternatively, these units may be coupled to the processor 810 via a low power interconnect, such as the aggregated PCIe™/MIPI interconnect as described herein, or via an interconnect in accordance with another such protocol, such as the Serial Data Input/Output (SDIO) standard. Of course, the actual physical connections of these peripheral devices, which can be configured on one or more add-in cards, can be made via NGFF connectors adapted to the motherboard.

In addition, wireless wide area communications, eg, according to a cellular or other wireless wide area protocol, can occur via a WWAN unit 856, which in turn can be coupled to a Subscriber Identity Module (SIM) 857. In addition, a GPS module 855 may be present in order to receive and use location information. Note that in the embodiment shown in FIG. 8, the WWAN unit 856 and an integrated capture device, such as a camera module 854, may communicate via a given USB protocol, such as a USB 2.0 or 3.0 link, or a UART or I2C protocol. Again, the actual physical connection of these units can be via adaptation of an NGFF add-in card to an NGFF connector configured on the motherboard.

To provide for audio inputs and outputs, an audio processor can be implemented via a digital signal processor (DSP) 860 that can be coupled to the processor 810 via a high definition audio (HDA) link. Similarly, the DSP 860 may communicate with an integrated coder/decoder (CODEC) and amplifier 862, which in turn may couple to an output speaker 863 that may be implemented within the chassis.
Similarly, the amplifier and CODEC 862 can be coupled to receive audio inputs from a microphone 865, which in an embodiment can be implemented via dual array microphones to provide high quality audio inputs and to enable voice-activated control of various operations within the system. Also note that audio outputs can be provided from the amplifier/CODEC 862 to a headphone jack 864.

Accordingly, the embodiments can be used in many different environments. Referring now to FIG. 9, an example system 900 with which embodiments can be used is shown. As seen, system 900 may be a smartphone or other wireless communicator. As shown in the block diagram of FIG. 9, system 900 may include a baseband processor 910, which may be a multi-core processor that can handle both baseband processing tasks and application processing. Thus, the baseband processor 910 can perform various signal processing with regard to communications, as well as perform computing operations for the device. In turn, the baseband processor 910 can couple to a user interface/display 920, which in some embodiments can be implemented by a touch screen display. In addition, the baseband processor 910 may couple to a memory system, which in the embodiment of FIG. 9 includes a non-volatile memory, namely a flash memory 930, and a system memory, namely a dynamic random access memory (DRAM) 935. As further seen, the baseband processor 910 can further couple to a capture device 940, such as an image capture device capable of recording video and/or still images.

To enable communications to be transmitted and received, various circuitry may be coupled between the baseband processor 910 and an antenna 980. Specifically, a radio frequency (RF) transceiver 970 and a wireless local area network (WLAN) transceiver 975 may be present. In general, the RF transceiver 970 may be used to receive and transmit wireless data and calls according to a given wireless communication protocol, such as a 3G or 4G wireless communication protocol, eg, in accordance with Code Division Multiple Access (CDMA), Global System for Mobile Communications (GSM), Long Term Evolution (LTE), or another protocol. Other wireless communications, such as the receipt or transmission of radio signals, eg, AM/FM or Global Positioning Satellite (GPS) signals, may also be provided. In addition, via the WLAN transceiver 975, local wireless signals, such as according to a Bluetooth™ standard or an IEEE 802.11 standard (such as IEEE 802.11a/b/g/n), can also be realized. Note that the links between the baseband processor 910 and one or both of the transceivers 970 and 975 may be implemented via a low power aggregated interconnect that combines and maps the functionality of a PCIe™ interconnect and a low power interconnect such as a MIPI interconnect. Although shown at this high level in the embodiment of FIG. 9, it is to be understood that the scope of the present invention is not limited thereto.

The embodiments can be used in many different types of systems. For example, in one embodiment, a communication device can be arranged to perform the various methods and techniques described herein.
Of course, the scope of the present invention is not limited to communication devices, and on the contrary other embodiments can be directed to other types of apparatus for processing instructions, or to one or more machine-readable media including instructions that, in response to being executed on a computing device, cause the device to carry out one or more of the methods and techniques described herein.

Embodiments may be implemented in code and may be stored on a non-transitory storage medium having stored thereon instructions that can be used to program a system to perform the instructions. The storage medium may include, but is not limited to, any type of disk, including floppy disks, optical disks, solid state drives (SSDs), compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, and electrically erasable programmable read-only memories (EEPROMs); magnetic or optical cards; or any other type of media suitable for storing electronic instructions.

While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of the present invention. |
A method of manufacturing a vertical transistor. The vertical transistor utilizes a deposited amorphous silicon layer to form a drain region. The vertical transistor includes a double gate structure for providing increased drive current. A wafer bonding technique can be utilized to form the substrate. |
What is claimed is:

1. A method of manufacturing a vertical transistor, the method comprising: providing a semiconductor substrate including a semiconductor base layer below a first insulative layer, the first insulative layer being below a first semiconductor layer, and the first semiconductor layer being below a second insulative layer; providing an aperture through the first insulative layer, the first semiconductor layer, and the second insulative layer; doping the semiconductor substrate through the aperture; and providing an amorphous semiconductor layer above the second insulative layer and within the aperture.

2. The method of claim 1, further comprising: doping the amorphous semiconductor layer.

3. The method of claim 2, further comprising annealing the amorphous semiconductor layer.

4. The method of claim 3, further comprising etching the amorphous semiconductor layer before the annealing step.

5. The method of claim 4, wherein the amorphous semiconductor layer is etched to have a width between 500-2000 Å.

6. The method of claim 1, further comprising: before the providing an amorphous semiconductor layer step, providing dielectric spacers on side walls of the aperture.

7. The method of claim 6, wherein the dielectric spacers are silicon nitride.

8. The method of claim 1, wherein the aperture divides the first semiconductor layer into a first portion including a first gate conductor on a first side of the aperture and a second portion including a second gate conductor on the other side of the aperture.

9. The method of claim 1, wherein the semiconductor substrate includes single crystalline silicon.

10. The method of claim 1, wherein a channel region is located between the first insulative layer and the second insulative layer in the aperture.

11. A vertical transistor, comprising: a first gate conductor disposed above a top surface of a substrate; a second gate conductor disposed above the top surface of the substrate, wherein the first gate conductor is located between two dielectric layers, wherein the second gate conductor is located between two dielectric layers; a source disposed at least partially below the top surface of the substrate; a drain disposed entirely above the top surface of the substrate; and a channel region between the first gate conductor and the second gate conductor and between the drain and the source, the channel region being at least partially above the top surface of the substrate, wherein a first spacer comprising silicon nitride is provided on a side wall of the first gate conductor, and wherein the channel region is 90-280 Å wide.

12. The transistor of claim 11, wherein the channel region is 90-280 Å wide.

13. The transistor of claim 11, wherein the channel region is 500-1000 Å long.

14. The transistor of claim 12, wherein the channel region is 90-280 Å wide.

15.
A vertical transistor, comprising: a first gate conductor disposed above a top surface of a substrate; a second gate conductor disposed above the top surface of the substrate, wherein the first gate conductor is located between two dielectric layers, wherein the second gate conductor is located between two dielectric layers; a source disposed at least partially below the top surface of the substrate; a drain disposed entirely above the top surface of the substrate; and a channel region between the first gate conductor and the second gate conductor and between the drain and the source, the channel region being at least partially above the top surface of the substrate, wherein a first spacer is disposed between the channel region and the first gate conductor and a second spacer is disposed between the channel region and the second gate conductor, wherein the channel region is 500-1000 Å long.

16. A vertical transistor, comprising: a first gate conductor disposed above a top surface of a substrate; a second gate conductor disposed above the top surface of the substrate, wherein the first gate conductor is located between a first pair of dielectric layers, wherein the second gate conductor is located between a second pair of dielectric layers; a source disposed at least partially below the top surface of the substrate; a drain disposed entirely above the top surface of the substrate; and a channel region between the first gate conductor and the second gate conductor and between the drain and the source, the channel region being at least partially above the top surface of the substrate, wherein a first spacer is disposed between the channel region and the first gate conductor and a second spacer is disposed between the channel region and the second gate conductor, wherein the dielectric layers of the first pair and the second pair are each 50-250 Å thick.

17. A process of forming a vertical transistor having a channel region at least partially above a top surface of a substrate, the process comprising: providing a first dielectric layer, a silicon layer, and a second dielectric layer above a top surface of the substrate; providing an aperture in the first dielectric layer, the silicon layer, and the second dielectric layer; doping the substrate through the aperture; forming a semiconductor layer above the second dielectric layer and within the aperture; doping the semiconductor layer; and annealing the semiconductor layer.

18. The process of claim 17, wherein the silicon layer includes a first portion being a first gate conductor and a second portion being a second gate conductor.

19. The process of claim 18, wherein the aperture is 90-280 Å wide.

20. The process of claim 19, wherein the semiconductor layer includes amorphous silicon.

21. A process of forming a vertical transistor having a channel region at least partially above a top surface of a substrate, the process comprising steps of: a) providing a first dielectric layer, a first semiconductor layer, and a second dielectric layer above a top surface of the substrate; b) providing an aperture in the first dielectric layer, the first semiconductor layer, and the second dielectric layer, the aperture reaching the substrate at a location; c) providing a doped region located at the location; d) forming a second semiconductor layer above the second dielectric layer and within the aperture; e) doping the second semiconductor layer; and f) annealing the second semiconductor layer, wherein step (b) is performed before step (c).

22.
A vertical transistor, comprising: a first doped source/drain region in a substrate; a channel region disposed above the first doped source/drain region, the channel region having a bottom above the first doped source/drain region, a first side, and a second side, the channel region being 90-280 Å wide and 500-1000 Å long; a first gate conductor isolated from the substrate and above the substrate, the first gate conductor being adjacent a first dielectric structure including silicon nitride on the first side of the channel region; a second gate conductor isolated from the substrate and above the substrate, the second gate conductor being adjacent a second dielectric structure including silicon nitride on the second side of the channel region; and a second source/drain region disposed entirely above the substrate and disposed above the channel region and the first gate conductor and the second gate conductor. |
FIELD OF THE INVENTION

The present invention relates generally to integrated circuits (ICs) and methods of manufacturing integrated circuits. More particularly, the present invention relates to a vertical transistor structure and a method of manufacturing integrated circuits having vertical transistors.

BACKGROUND OF THE INVENTION

Integrated circuits (ICs), such as, ultra-large-scale integrated (ULSI) circuits, can include as many as one million transistors or more. The ULSI circuit can include complementary metal oxide semiconductor (CMOS) field effect transistors (FETs). The transistors can include semiconductor gates adjacent a channel region and between drain and source regions. The drain and source regions are typically heavily doped with a P-type dopant (boron) or an N-type dopant (phosphorous).

Generally, conventional ICs have employed lateral transistors or devices. Lateral transistors include source and drain regions disposed below a top surface of a bulk or semiconductor-on-insulator (SOI) substrate and a gate disposed above the top surface. Thus, the source region, drain region, and gate of lateral transistors each consumes valuable space on the top surface of the substrate. The gate is disposed on only one side of a channel between the source and the drain. Accordingly, the conventional lateral device can have a limited drive current.

SOI-type devices generally completely surround a silicon or other semiconductor substrate with an insulator. Lateral devices built on SOI substrates have significant advantages over devices built on bulk-type substrates. The advantages include near ideal subthreshold voltage slope, low junction capacitance, and effective isolation between devices. These advantages lead to further advantages, including reduced chip size or increased chip density, because minimal device separation is needed due to the surrounding insulating layers. Additionally, SOI devices can operate at increased speeds due to reductions in parasitic capacitance.

As demands for integration (transistor density) increase, vertical transistors have been considered. Vertical transistors can be insulated gate field effect transistors (IGFETs), such as, MOSFETs. In a conventional vertical MOSFET, source and drain regions are provided on opposite surfaces (e.g., a top surface and a bottom surface) of a semiconductor layer and a body region is disposed between the source and drain regions. During MOSFET operation, current flows vertically between the source and drain regions through a channel within the body region. The channel is often described in terms of its length, i.e., the spacing between the source and drain regions at the semiconductor surface, and its width, i.e., the dimension perpendicular to the length. Channel width is typically far greater than channel length.

In one example of a conventional vertical FET on a bulk-type substrate, the bulk-type semiconductor substrate, such as, a silicon substrate, is etched to form trenches or steps. The gate of the vertical transistor is disposed on a side wall of the trench or step, and the channel region is located adjacent to the side wall. Placing a gate conductor in the trench can be a difficult technical feat, especially as the size of gate lengths and gate widths decrease. Due to its small lateral size, the vertical transistor structure generally allows more devices to be contained on a single semiconductor substrate than the conventional lateral structure.
Similar to the conventional lateral structure discussed above, the gate conductors of the vertical transistor are disposed on only one side of the channel region, and the current density associated with the vertical FET is accordingly somewhat limited.

As discussed above, vertical transistors offer significant advantages, including small wafer surface area consumption due to the vertical nature of the transistor. The vertical nature allows three dimensional integration. In addition, the vertical transistor design is conducive to double gate and surrounded gate structures. Double gate and surrounded gate structures allow an electrical field to be induced in the channel region from two or more sides. Accordingly, the double gate and surrounded gate structures can increase current density and switching speeds. Further, the double gate and surrounded gate structures provide more scalability with respect to controlling short channel effects and can be used to control threshold voltages.

Thus, there is a need for an integrated circuit or electronic device that includes vertical transistors and can be manufactured in an efficient process. Further still, there is a need for vertical transistors having double gate or surrounded gate structures. Even further still, there is a need for a method of manufacturing vertical transistors without providing a gate conductor in a trench. Yet further, there is a need for a method of manufacturing double gate vertical transistors and surrounded gate vertical structures. Yet even further, there is a need for an efficient method of manufacturing a double gate vertical transistor.

SUMMARY OF THE INVENTION

The present invention relates to a method of manufacturing a vertical transistor. The method includes providing a semiconductor substrate including a semiconductor base layer which is below a first insulative layer which is below a semiconductor layer which is below a second insulative layer. The method also includes providing an aperture through the first insulative layer, the semiconductor layer, and the second insulative layer, doping the semiconductor substrate through the aperture, and providing an amorphous semiconductor layer above the second insulative layer and within the aperture.

The present invention further relates to a vertical transistor. The vertical transistor includes a first gate conductor disposed above a top surface of a substrate, a second gate conductor disposed above the top surface of the substrate, a source disposed at least partially below the top surface of the substrate, and a drain disposed entirely above the top surface of the substrate. The first gate conductor is located between two dielectric layers. The second gate conductor is also located between two dielectric layers. A channel region is disposed between the first gate conductor and the second gate conductor and between the drain and the source.

The present invention further relates to a process of forming a vertical transistor having a channel region above a top surface of a substrate.
The process includes providing a first dielectric layer, a silicon layer, and a second dielectric layer above a top surface of the substrate, providing an aperture in the first dielectric layer, the silicon layer, and the second dielectric layer, forming an amorphous semiconductor layer above the second dielectric layer and within the aperture, doping the amorphous semiconductor layer, and annealing the amorphous semiconductor layer.

BRIEF DESCRIPTION OF THE DRAWINGS

Exemplary embodiments will hereafter be described with reference to the accompanying drawings, wherein like numerals denote like elements, and:

FIG. 1 is a cross-sectional view of a portion of an integrated circuit in accordance with an exemplary embodiment, the integrated circuit including a vertical transistor;

FIG. 2 is a cross-sectional view of the portion of the integrated circuit illustrated in FIG. 1, showing a compound substrate for the integrated circuit;

FIG. 3 is a cross-sectional view of the portion of the integrated circuit illustrated in FIG. 2, showing a first photolithographic step;

FIG. 4 is a cross-sectional view of the portion of the integrated circuit illustrated in FIG. 3, showing an etching step;

FIG. 5 is a cross-sectional view of the portion of the integrated circuit illustrated in FIG. 4, showing a first dopant implant step;

FIG. 6 is a cross-sectional view of the portion of the integrated circuit illustrated in FIG. 5, showing a spacer formation step;

FIG. 7 is a cross-sectional view of the portion of the integrated circuit illustrated in FIG. 6, showing a semiconductor deposition step and a second photolithographic step; and

FIG. 8 is a cross-sectional view of the portion of the integrated circuit illustrated in FIG. 7, showing a second dopant implant step.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

With reference to FIG. 1, a portion 10 of an integrated circuit (IC) includes a vertical transistor 12 which is disposed on a semiconductor substrate 14. Semiconductor substrate 14 is preferably a compound structure comprised of a semiconductor-on-insulator (SOI) substrate 17, such as silicon-on-glass, and a bulk-type semiconductor base or substrate 15. Substrate 14 can be embodied as a wafer suitable for semiconductor fabrication.

Substrate 17 includes a semiconductor layer 21 and a dielectric or an insulative layer 20. Preferably, substrate 17 is a silicon-on-glass wafer including a single crystal silicon layer (layer 21) and a silicon dioxide layer (layer 20). Substrate 17 can be purchased from a number of sources or can be formed by various conventional techniques.

Alternatively, substrate 14 can be an N-type well in a P-type substrate, an insulative substrate, a bulk P-type single crystalline (001) silicon substrate, an SOI substrate, or other suitable material for transistor 12. For example, substrate 14 could be a bulk-type substrate 15 including deposited layers similar to the layers associated with substrate 17 formed by an epitaxy or deposition process.

A dielectric or insulative layer 30 is disposed above semiconductor layer 21 of substrate 17. A semiconductor layer 40 is disposed above insulative layer 30. Layer 30 and layer 40 are covered by a dielectric or insulative layer 58.

Transistor 12 can be a P-channel or N-channel vertical metal oxide semiconductor field effect transistor (MOSFET). Transistor 12 is preferably embodied as a fully depleted (FD), double gate vertical MOSFET and includes a gate structure 18, a gate structure 19, a source region 22, and a drain region 24.
Gate structures 18 and 19 advantageously provide a double gate structure that provides increased drive current and high immunity to short channel effects. Gate structures 18 and 19 can also continue to cover a front and a back side (into and out of the page in FIG. 1) of portion 10 so that transistor 12 has gate conductors on four sides of a channel 41.

Source region 22 extends from a top surface 27 of substrate 15 to a bottom 25 in substrate 15. Top surface 27 is below layer 20 of substrate 17. Region 22 is preferably 800-2000 Å wide (left-to-right in FIG. 1) and 1700-4300 Å deep from top surface 27 to bottom 25. Drain region 24 extends from a top surface 39 of semiconductor layer 40 to a top surface 28 of insulative layer 30. Region 40 can be 0.2 to 0.5 microns wide (left-to-right in FIG. 1).

For an N-channel transistor, regions 22 and 24 are heavily doped with N-type dopants (e.g., 1*10^20-5*10^20 dopants per cubic centimeter). For a P-channel transistor, regions 22 and 24 are heavily doped with P-type dopants (e.g., 1*10^20-5*10^20 dopants per cubic centimeter). Appropriate dopants for a P-channel transistor include boron, boron difluoride, or indium, and appropriate dopants for an N-channel transistor include arsenic, phosphorous, or antimony.

Gate structure 18 includes a gate conductor 36 (layer 21 of substrate 17). Gate structure 18 can also include a spacer 26. Gate structure 19 includes a gate conductor 37 (layer 21 of substrate 17). Gate structure 19 can also include a spacer 46. Conductors 36 and 37 are preferably a doped silicon material and can be electrically coupled together or electrically isolated. Conductors 36 and 37 can be a 200-1000 Å thick single crystal silicon material.

Layer 21 (conductors 36 and 37) is preferably surrounded by insulative layers 20 and 30. Layers 20 and 30 are preferably comprised of a thermally grown, 150-300 Å thick silicon dioxide material. Alternatively, deposited silicon dioxide (tetraethylorthosilicate (TEOS)), silicon nitride (Si3N4) material, or low-K dielectric materials can be utilized as layers 20 and 30. Alternatively, conductors 36 and 37 can include metal, such as a refractory metal. Conductors 36 and 37 can also include polysilicon. In another alternative, conductors 36 and 37 can include germanium to adjust the work function associated with transistor 12. Conductors 36 and 37 can be identically doped or can be separately doped depending upon desired operational parameters for transistor 12.

Conductors 36 and 37 and layers 20 and 30 are preferably etched to form the particular structure for transistor 12. Conductors 36 and 37 can be a variety of lengths and geometries depending on the design of portion 10.

Drain region 24 is preferably thicker and wider than channel 41, which is disposed along a vertical axis between regions 22 and 24. Channel 41 is disposed along a horizontal axis between gate structures 18 and 19. Source region 22 extends approximately 150-250 Å above top surface 27 to a junction 43 of region 41, and drain region 24 extends approximately 150-250 Å below top surface 28 to a junction 45 of region 41.

Preferably, channel 41 is approximately 60-280 Å wide (left-to-right) if spacers 26 and 46 are utilized. Channel 41 is approximately 200-1000 Å deep from junction 43 to junction 45. Channel 41 can be doped in a variety of fashions to control short channel effects and to ensure the appropriate operation of transistor 12.
Preferably, channel 41 is undoped.

Spacers 26 and 46 can be a silicon nitride material formed in a conventional deposition and etch back process. Alternatively, other insulative materials can be utilized for spacers 26 and 46, such as, low k dielectric materials. Preferably, spacers 26 and 46 are of a different material than layers 20 and 30 to protect layer 21 (conductors 36 and 37) during subsequent processing steps such as etching and deposition steps.

Drain region 24 and insulative layer 30 are preferably covered by dielectric layer 58. Dielectric layer 58 is preferably a 2000-5000 Å thick layer of insulative material, such as, silicon dioxide. Alternatively, compound or composite layers can be utilized for layer 58.

Drain region 24 is coupled through a conductive via or contact 60 to a metal pad 62. Contact 60 extends from pad 62 through layer 58 to region 24. Contact 60 can be a silicide material, a tungsten material, or other conductive material. Preferably, a junction 64 between region 24 and contact 60 includes a silicide material. Conventional silicidation techniques can be utilized for the silicide portion at junction 64. For example, titanium silicide, cobalt silicide, tungsten silicide, and other silicides can be utilized.

With reference to FIGS. 1-8, the fabrication of transistor 12 is described as follows. The advantageous process allows a double gate or surrounded gate vertical transistor structure to be formed. Transistor 12 can be advantageously fabricated without depositing a gate conductor in a trench.

In FIG. 2, substrate 14 is provided. Preferably, substrate 14 is comprised of bulk semiconductor substrate 15 and SOI substrate 17. Layer 20 of substrate 17 is preferably a 150-250 Å thick silicon dioxide layer. Layer 21 of substrate 17 is preferably a 200-1000 Å undoped single crystal silicon layer. Preferably, substrate 15 is a 300-1000 micrometer thick single crystal undoped bulk silicon substrate.

Substrate 15 is attached to substrate 17 by a wafer bonding technique. Alternatively, other methods of attaching substrate 15 to substrate 17 can be utilized. Substrate 15 is disposed below substrate 17, and layer 20 is attached to top surface 27 of substrate 15. Wafer bonding can be completed at room temperature and in a nitrogen (N2) ambient atmosphere. Alternatively, higher temperature techniques can be utilized. In one embodiment, Smart-Cut(R) and Unibond(R) techniques can be utilized to bond substrate 17 to substrate 15. Smart-Cut(R) and Unibond(R) techniques are discussed in "Smart-Cut(R): The Basic Fabrication Process for UNIBOND(R) SOI Wafers," by Auberton-Hervé, Bruel, Aspar, Maleville, and Moriceau (IEEE TRANS ELECTRON, March 1997), incorporated herein by reference. The Smart-Cut(R) and Unibond(R) techniques can reach temperatures of 110° C. to bond substrate 17 to substrate 15.

The Smart-Cut(R) and UNIBOND(R) techniques utilize a combination of hydrogen implantation and wafer bonding to form substrate 14. Substrates 15 and 17 can be cleaned utilizing a modified RCA cleaning and are bonded in hydrophilic conditions. The silicon substrate (substrate 15) includes native oxide which, similar to the insulating layer, forms OH-terminated surfaces after cleaning. Interactions between water absorbed on the surfaces cause the wafers to be bonded. Thermal treatments stabilize the bonding interface.

Either before or after substrate 17 is bonded to substrate 15, insulative layer 30 is deposited above layer 21.
Preferably, layer 30 is a TEOS deposited silicon dioxide layer which is approximately 150-250 Å thick. Preferably, layer 30 is the same material as layer 20 and is deposited after substrate 17 is attached to substrate 15.

In FIG. 3, a photoresist material 82 is provided above layer 30. A conventional photolithographic process is utilized to provide a window 84 in material 82. Preferably, window 84 is 100-300 Å wide and defines channel 41 (FIG. 1).

In FIG. 4, layers 30, 21, and 20 are etched in dry etching processes in accordance with window 84 (FIG. 3). Preferably, an aperture 86 is provided in substrate 17 due to the etching process. Aperture 86 extends to top surface 27 of substrate 15. Aperture 86 separates gate conductors 36 and 37 (FIG. 1).

In FIG. 5, substrate 15 is subjected to a dopant implant to form source region 22. Preferably, phosphorous or arsenic ions are utilized to implant source region 22 through aperture 86. Conventional doping techniques can be utilized to form region 22. Substrate 17 and layer 30 prevent substrate 15 from being doped in areas outside of aperture 86. Thus, aperture 86 advantageously defines both channel 41 (FIG. 1) and source region 22.

In FIG. 6, low pressure chemical vapor deposition (LPCVD) is utilized to form nitride liners or spacers 26 and 46 on sidewalls 88 and 90 of layers 20, 21, and 30. Spacer 26 is associated with gate structure 18, and spacer 46 is associated with gate structure 19. After deposition, the nitride material is etched to leave spacers 26 and 46. Spacers 26 and 46 are preferably each 15-30 Å wide and 1000-3000 Å high (the total thickness of layers 20, 21, and 30). Thus, aperture 86 also defines the placement of gate structures 18 and 19 with respect to channel 41 (FIG. 1).

In FIG. 7, after gate structures 18 and 19 are completed, a semiconductor layer 94 is deposited above layer 30 and within aperture 86. Preferably, layer 94 is a conformal amorphous semiconductor material and fills aperture 86 between spacers 26 and 46. In aperture 86, layer 94 is in contact with top surface 27 of substrate 15. Layer 94 is preferably an amorphous silicon layer and is approximately 500-1000 Å thick from the top surface 28 of layer 30 to a top surface 39.

After layer 94 is deposited, a photoresist material is provided and configured by photolithography to leave an island or a mask 102 above layer 94. Preferably, mask 102 is wider than channel region 41 and is 2000-10,000 Å wide. After mask 102 is formed, layer 94 is etched in a plasma dry etching process. Mask 102 is preferably centered about aperture 86. After etching, layer 94 is preferably a rectangular or square structure 0.2 to 0.5 microns wide.

In FIG. 8, mask 102 is removed and a dopant implant is provided to layer 94. Preferably, the implant depth is shallow enough to prevent the dopant implant from entering substrate 15 through substrate 17. Preferably, the dopant implant provides dopants for a drain region 24 having a concentration of dopants of 1*10^20 to 5*10^20 dopants per cubic centimeter. Preferably, the implant is centered so that drain region 24 has a junction 45 at a location even with a top surface of layer 21 after annealing.

After implantation, substrate 14 is subjected to a thermal anneal to activate dopants in layer 94 and substrate 15. Channel 41 is left between junctions 43 and 45. After annealing, the amorphous material of layer 94 becomes recrystallized. Dopants associated with region 22 can migrate to the location of junction 43 (the bottom surface of layer 21) after annealing.
Thus, the thermal anneal causes junctions 43 and 45 to be close to the bottom and top locations, respectively, of gate conductors 36 and 37.

After annealing, layer 58 is deposited and contact 60 is formed. Pad 62 can also be formed above layer 58 by a deposition and selective etching process. Other conventional CMOS processes can form other components for portion 10. For example, metal layers and other contacts can be utilized to connect elements of transistor 12.

The double gate structure (gate structures 18 and 19) can be utilized in various fashions. For example, gate conductors 36 and 37 can be coupled together to provide greater drive current for transistor 12. Alternatively, the bias on one or both of gate conductors 36 and 37 can be adjusted to adjust the threshold voltage associated with transistor 12. In yet another alternative, conductors 36 and 37 can be maintained separately for providing different control to transistor 12.

It is understood that while the detailed drawings, specific examples, material types, thicknesses, dimensions, and particular values given provide a preferred exemplary embodiment of the present invention, the preferred exemplary embodiment is for the purpose of illustration only. The method and apparatus of the invention are not limited to the precise details and conditions disclosed. For example, although specific types of gate conductors are described, other materials can be utilized. Other sizes and thicknesses for the structures described are possible, especially in light of changing capabilities of fabrication equipment and processes. Various changes may be made to the details disclosed without departing from the spirit of the invention, which is defined by the following claims. |
In embodiments, a fast Fourier transform (FFT) engine (700) includes a series of stages, each stage containing a butterfly (710) and a data normalization device (730) configured to scale the output of the stage's butterfly. The scaling factors are adjusted, for example, periodically or on an as-needed basis, so that the dynamic range of the butterflies and the buffers is increased for a given bit-width, or the bit-width of these devices is decreased for the same dynamic range. Additionally, the bit-width of other buffer(s) is decreased because of the scaling of the data. |
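The buffering arrangement implied by this scaling can be sketched briefly. The following Python fragment is a minimal illustration only; the dataclass, its field names, and the binary-shift representation are assumptions chosen for the example (compare claims 9 and 10 below, which contemplate storing the normalization factors, or a pointer to them, with the transformed block).

from dataclasses import dataclass, field
from typing import List

@dataclass
class TransformedBlock:
    # Fast Fourier transformed samples, stored at reduced bit-width
    # because each stage's output was scaled down before buffering.
    samples: List[complex]
    # Per-stage normalization factors expressed as binary shift counts.
    # A pointer/reference to memory holding them could be stored with
    # the block instead (cf. claims 9 and 10 below).
    stage_shifts: List[int] = field(default_factory=list)

    def true_value(self, k):
        """Undo the accumulated per-stage scaling for sample k."""
        return self.samples[k] * 2 ** sum(self.stage_shifts)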
[00101] WHAT IS CLAIMED IS:

1. A wireless communication method comprising steps of: transforming a block of received signal data in a plurality of stages arranged in series so that the block of received signal data is inputted into a first stage of the plurality of stages, processed successively through each stage of the plurality of stages, and a fast Fourier transformed block of signal data is outputted from a last stage of the plurality of stages, each stage of the plurality of stages comprising a butterfly and a data normalization device wherein the data normalization device of said each stage scales output of the butterfly of said each stage by a normalization factor corresponding to the data normalization device of said each stage; processing the fast Fourier transformed block to obtain a processed block of data; and using the processed block in an application of a wireless device.

2. The wireless communication method of claim 1, wherein the step of using the processed block comprises rendering information contained in the processed block.

3. The wireless communication method of claim 2, further comprising, for said each stage, adjusting the normalization factor corresponding to the data normalization device of said each stage.

4. The wireless communication method of claim 3, wherein the step of adjusting comprises determining, for said each stage, the normalization factor corresponding to the data normalization device of said each stage.

5. The wireless communication method of claim 4, wherein the step of adjusting is performed for each block of received signal data, said each block being of predetermined size.

6. The wireless communication method of claim 4, wherein the step of adjusting is performed on an as-needed basis.

7. The wireless communication method of claim 3, wherein the step of adjusting is performed in binary steps.

8. The wireless communication method of claim 3, further comprising storing the fast Fourier transformed block before the step of processing.

9. The wireless communication method of claim 8, wherein the step of storing comprises adding data describing said each normalization factor to the fast Fourier transformed block before the step of processing.

10. The wireless communication method of claim 8, wherein the step of storing comprises, for said each stage, adding a pointer to memory storing said each normalization factor to the fast Fourier transformed block before the step of processing.

11. The wireless communication method of claim 3, wherein the step of processing comprises decoding the information, and the step of decoding is performed before the step of rendering.

12. The wireless communication method of claim 3, wherein the information contained in the processed block comprises at least one of audio information and video information, and the step of rendering comprises rendering the at least one of audio information and video information.

13. The wireless communication method of claim 12, further comprising wirelessly receiving the block of received signal data in at least one orthogonal frequency division multiplexing (OFDM) symbol.

14. The wireless communication method of claim 3, wherein said each stage further comprises a stage buffer.

15. The wireless communication method of claim 3, further comprising double buffering the fast Fourier transformed block before the step of using.

16.
The wireless communication method of claim 3, wherein the received signal data is obtained from a received signal, the method further comprising: monitoring a quality indication of the received signal; and varying bit width of the received signal data in response to the quality indication, wherein the bit width of the received signal data is lowered in response to the lowered quality of the received signal.

17. The wireless communication method of claim 3, wherein the plurality of stages is implemented as a single hardware stage in a recursive FFT engine configuration.

18. A device comprising: a fast Fourier transform (FFT) block comprising an input, an output, and a plurality of stages arranged in series so that the FFT block is configured to process a block of received signal data inputted into a first stage of the plurality of stages successively through each stage of the plurality of stages to obtain a fast Fourier transformed block of signal data and to output the fast Fourier transformed block from the output of the FFT block, each stage of the plurality of stages comprising a butterfly and a data normalization device wherein the data normalization device of said each stage scales output of the butterfly of said each stage by a normalization factor corresponding to the data normalization device of said each stage; a processing block configured to process the fast Fourier transformed block to obtain a processed block of data; and an application block configured to operate on the processed block of data.

19. The device of claim 18, wherein the application block is configured to render information contained in the processed block.

20. The device of claim 19, further comprising: a normalization control block configured, for said each stage, to adjust the normalization factor corresponding to the data normalization device of said each stage.

21. The device of claim 20, wherein the normalization control block is configured to determine the normalization factor corresponding to the data normalization device of said each stage.

22. The device of claim 21, wherein the normalization control block is configured to adjust the normalization factors for each block of received signal data, said each block being of predetermined size.

23. The device of claim 21, wherein the normalization control block is configured to adjust the normalization factors on an as-needed basis.

24. The device of claim 20, wherein the normalization control block is configured to adjust the normalization factors in binary steps.

25. The device of claim 20, further comprising a buffer configured to store the fast Fourier transformed block before the fast Fourier transformed block is processed by the processing block.

26. The device of claim 25, wherein the buffer is configured to store data describing said each normalization factor with the fast Fourier transformed block.

27. The device of claim 25, wherein the buffer is configured to store a pointer to memory storing said each normalization factor with the fast Fourier transformed block.

28. The device of claim 20, wherein the processing block comprises a decoder configured to decode the information.

29. The device of claim 20, wherein the information contained in the processed block comprises at least one of audio information and video information, and the application block is configured to render the at least one of audio information and video information.

30.
The device of claim 29, further comprising a wireless receiver configured to receive the block of received signal data in at least one orthogonal frequency division multiplexing (OFDM) symbol.

31. The device of claim 20, wherein said each stage further comprises a stage buffer.

32. The device of claim 20, further comprising a double buffer configured to double buffer the fast Fourier transformed block.

33. The device of claim 19, further comprising: a sample server coupled to the input of the FFT block, the sample server being configured to vary bit width of the received signal data in response to a quality indication of the received signal, wherein the bit width of the received signal data is lowered in response to lowered quality of the received signal.

34. The device of claim 19, wherein the plurality of stages is implemented as a single hardware stage in a recursive FFT engine configuration.

35. A wireless device comprising: at least one receiver; at least one transmitter; and at least one controller coupled to the at least one receiver and the at least one transmitter, wherein the at least one controller is configured to perform steps comprising: transforming a block of received signal data in a plurality of stages arranged in series so that the block of received signal data is inputted into a first stage of the plurality of stages, processed successively through each stage of the plurality of stages, and a fast Fourier transformed block of signal data is outputted from a last stage of the plurality of stages, each stage of the plurality of stages comprising a butterfly and a data normalization device wherein the data normalization device of said each stage scales output of the butterfly of said each stage by a normalization factor corresponding to the data normalization device of said each stage; processing the fast Fourier transformed block to obtain a processed block of data; and using the processed block in an application of the wireless device.

36. A computer program product, comprising: a computer-readable medium comprising: code for causing a computer to communicate wirelessly, comprising: transforming a block of received signal data in a plurality of stages arranged in series so that the block of received signal data is inputted into a first stage of the plurality of stages, processed successively through each stage of the plurality of stages, and a fast Fourier transformed block of signal data is outputted from a last stage of the plurality of stages, each stage of the plurality of stages comprising a butterfly and a data normalization device wherein the data normalization device of said each stage scales output of the butterfly of said each stage by a normalization factor corresponding to the data normalization device of said each stage; processing the fast Fourier transformed block to obtain a processed block of data; and using the processed block in an application of a wireless device.

37. A device comprising: a means for performing a fast Fourier transform on a block of received signal data to obtain a fast Fourier transformed block of signal data; a means for normalizing signals in the means for performing; a means for processing the fast Fourier transformed block to obtain a processed block of data; and a means for rendering information contained in the processed block of data. |
MULTIPLE STAGE FOURIER TRANSFORM APPARATUS, PROCESSES, AND ARTICLES OF MANUFACTURE BACKGROUND Claim of Priority under 35 U.S.C. §119 [0001] The present Application for Patent claims priority to Provisional Application No. 61/040,324, entitled "METHODS AND APPARATUS FOR ACCOMMODATING A LARGE FREQUENCY DOMAIN DYNAMIC RANGE OF A RECEIVED OFDM SIGNAL," filed on March 28, 2008, and assigned to the assignee hereof and hereby expressly incorporated by reference herein. Field [0002] The present invention relates generally to communications. More particularly, in some aspects the invention relates to the operation of fast Fourier transform engines. Background [0003] Modern wireless communication systems are widely deployed to provide various types of communication applications, such as voice and data applications. These systems may be multiple access systems capable of supporting communication with multiple users by sharing the available system resources (e.g., spectrum and transmit power). Examples of multiple access systems include code division multiple access (CDMA) systems, time division multiple access (TDMA) systems, frequency division multiple access (FDMA) systems, time division duplexing (TDD) systems, frequency division duplexing (FDD) systems, 3rd generation partnership project long term evolution (3GPP LTE) systems, and orthogonal frequency division multiple access (OFDMA) systems. There are also point-to-point systems, peer-to-peer systems, and wireless local area networks (wireless LANs). [0004] Generally, a wireless multiple access communication system can simultaneously support communications with multiple wireless terminals. Each terminal communicates with one or more base transceiver stations (BTSs or base stations) via transmissions on forward and reverse links. The forward link or downlink refers to the communication link from a base transceiver station to a terminal, and the reverse link or uplink refers to the communication link from a terminal to a base transceiver station. Each of the forward and reverse communication links may be established via a single-in-single-out, multiple-in-single-out, single-in-multiple-out, or a multiple-in-multiple-out (MIMO) communication technique, depending on the number of transmitting and receiving antennae used for the particular link. [0005] MIMO systems are of particular interest because of their relatively higher data rates, relatively longer coverage range, and relatively more reliable transmission of data. A MIMO system employs multiple (NT) transmit antennae and multiple (NR) receive antennae for data communication. A MIMO channel formed by the NT transmit and NR receive antennae may be decomposed into Ns independent channels, which are also referred to as spatial channels, where Ns ≤ min{NT, NR}. Each of the Ns independent channels corresponds to a dimension. The MIMO system can provide improved performance (e.g., higher throughput and/or greater reliability) if the additional dimensions created by the multiple transmit and receive antennae are used. [0006] Communication systems often perform at least some processing of the received signals in the frequency domain. The received signals are typically transformed from time domain to the frequency domain using Fourier transforms. Conversely, inverse Fourier transforms can be used to transform frequency domain signals to the signals' time domain counterparts. 
Additionally, communication systems, such as those implementing Orthogonal Frequency Division Multiplexing (OFDM), can use certain properties of Fourier transforms to generate multiple time domain symbols from linearly spaced tones and to recover the frequencies from the symbols. [0007] Fast Fourier Transform (FFT) is a computational algorithm implementing the Fourier transform. The FFT allows the Fourier transform to be performed in fewer computational operations than used for a discrete Fourier transform (DFT). Often, the module responsible for the FFT (the "FFT engine") in a wireless device is implemented as a sequence of "butterflies." A "butterfly" in this context is a computational portion of the FFT engine that implements a small (relative to the entire FFT engine) DFT. The term "butterfly" typically appears in description of the Cooley-Tukey FFT algorithm. The Cooley-Tukey algorithm breaks down a DFT of composite size n = (r · m) into r smaller transforms of size m, where r is the so-called "radix" of the FFT transform. The breakdown is performed recursively, and the smaller transforms are combined with size-r butterflies, which themselves are DFTs of size r (performed m times on the outputs of the smaller transforms) pre-multiplied by roots of unity. The steps can also be performed in reverse, so that the butterflies come first and are post-multiplied by the roots of unity. [0008] The output of the FFT engine is usually stored on a processing chip in an output buffer or Random Access Memory (RAM), for further processing in the frequency domain. The size of the output buffer can be quite large, and occupy a significant percentage of the application-specific integrated circuit (ASIC), increasing die-area and cost. For example, a wireless standard may define a Packet or Frame to have eight OFDM symbols, each with 1024 tones. In such a case, the mobile device with four receive antennae may have to instantiate an output buffer that needs to store eight OFDM symbols times 1024 tones times four receive antennae times 16-bit I/Q samples, resulting in a one Mbit buffer. With double-buffering of output Frames, the size increases to two Mbits. [0009] The input time domain signal may fluctuate significantly. For example, every OFDM symbol can have a different power level and different spectral characteristics. This is because of the power control, adaptive or otherwise, and the varying nature of the physical channel, which is subject to noise, multipath and fading, attenuation, Doppler shift, and interference. The significant fluctuations of the signal amplitude at the FFT output - i.e., increased dynamic range - necessitate additional increases in the memory used for buffering the output of the FFT engine of a wireless device. [0010] Because memory is a scarce resource - with weight, size, and power consumption costs in addition to the direct economic cost - a need exists in the art for apparatus, methods, and articles of manufacture that reduce the buffer size requirement at the output of the FFT engine. Another need exists in the art to reduce the FFT engine output buffer requirement without compromising other performance characteristics, including dynamic range. Yet another need exists to reduce the computational resources used by the butterflies in the FFT engine. 
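For readers unfamiliar with the stage-and-butterfly structure referred to above, the following minimal Python sketch (an editorial illustration, not part of the claimed apparatus) shows an iterative radix-2 decimation-in-time FFT: each pass of the outer loop corresponds to one stage, and each inner operation is a two-point butterfly whose second input is pre-multiplied by a root of unity (twiddle factor).

    import cmath

    def fft_radix2(x):
        # Iterative radix-2 decimation-in-time FFT.  Each pass of the
        # outer 'while' loop is one stage of butterflies; each butterfly
        # is a 2-point DFT with a twiddle-factor (root of unity) multiply.
        n = len(x)
        assert n > 0 and (n & (n - 1)) == 0, "length must be a power of two"
        x = list(x)
        # Bit-reversal permutation puts the inputs into butterfly order.
        j = 0
        for i in range(1, n):
            bit = n >> 1
            while j & bit:
                j ^= bit
                bit >>= 1
            j |= bit
            if i < j:
                x[i], x[j] = x[j], x[i]
        size = 2
        while size <= n:                      # one iteration per stage
            w_step = cmath.exp(-2j * cmath.pi / size)
            for start in range(0, n, size):
                w = 1.0
                for k in range(size // 2):
                    a = x[start + k]
                    b = x[start + k + size // 2] * w   # twiddle multiply
                    x[start + k] = a + b               # butterfly outputs
                    x[start + k + size // 2] = a - b
                    w *= w_step
            size *= 2
        return x

The output of this sketch can be checked against any reference DFT implementation.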
SUMMARY [0011] Embodiments disclosed herein may address one or more of the above stated needs by providing apparatus, methods, and articles of manufacture for performing fast Fourier transform in an FFT engine configured to scale intermediate results between the butterflies, thereby allowing bit-width reduction of the butterflies and buffers. [0012] In an embodiment, a wireless communication method includes transforming a block of received signal data in a plurality of stages arranged in series so that the block of received signal data is inputted into a first stage of the plurality of stages, processed successively through each stage of the plurality of stages, and a fast Fourier transformed block of signal data is outputted from a last stage of the plurality of stages. Each stage of the plurality of stages includes a butterfly and a data normalization device. The data normalization device of each stage scales output of the butterfly of each stage by a normalization factor corresponding to the data normalization device of each stage. The method also includes processing the fast Fourier transformed block to obtain a processed block of data. The method further includes using the processed block in an application of a wireless device, for example, rendering information contained in the processed block. [0013] In an embodiment, a device includes a fast Fourier transform (FFT) block with an input, an output, and a plurality of stages arranged in series so that the FFT block is configured to process a block of received signal data inputted into a first stage of the plurality of stages successively through each stage of the plurality of stages to obtain a fast Fourier transformed block of signal data, and to output the fast Fourier transformed block from the output of the FFT block. Each stage of the plurality of stages has a butterfly and a data normalization device. The data normalization device of each stage scales output of the butterfly of the stage by a normalization factor corresponding to the data normalization device of the stage. The device also includes a processing block configured to process the fast Fourier transformed block to obtain a processed block of data. The device further includes an application block configured to operate on the processed block of data, for example, to render information included in the processed block of data. [0014] In an embodiment, a wireless device includes at least one receiver, at least one transmitter, and at least one controller coupled to the at least one receiver and the at least one transmitter. The at least one controller is configured to perform a number of steps. The steps include transforming a block of received signal data in a plurality of stages arranged in series so that the block of received signal data is inputted into a first stage of the plurality of stages, processed successively through each stage of the plurality of stages, and a fast Fourier transformed block of signal data is outputted from a last stage of the plurality of stages. Each stage of the plurality of stages has a butterfly and a data normalization device. The data normalization device of each stage scales output of the butterfly of the stage by a normalization factor corresponding to the data normalization device of the stage. The steps also include processing the fast Fourier transformed block to obtain a processed block of data. 
The steps further include using the processed block in an application of the wireless device, for example, rendering information contained in the processed block of data. [0015] In an embodiment, a computer program product stores, on a computer-readable medium, code for causing a computer to communicate wirelessly. The code includes instructions for transforming a block of received signal data in a plurality of stages arranged in series so that the block of received signal data is inputted into a first stage of the plurality of stages, processed successively through each stage of the plurality of stages, and a fast Fourier transformed block of signal data is outputted from a last stage of the plurality of stages. Each stage of the plurality of stages has a butterfly and a data normalization device. The data normalization device of each stage scales output of the butterfly of the stage by a normalization factor corresponding to the data normalization device of the stage. The code also includes instructions for processing the fast Fourier transformed block to obtain a processed block of data. The code further includes instructions for using the processed block in an application of a wireless device, for example, rendering information contained in the processed block of data. [0016] In an embodiment, a device includes a means for performing a fast Fourier transform on a block of received signal data to obtain a fast Fourier transformed block of signal data, a means for normalizing signals in the means for performing, a means for processing the fast Fourier transformed block to obtain a processed block of data, and a means for rendering information contained in the processed block of data. [0017] These and other aspects of the present invention will be better understood with reference to the following description, drawings, and appended claims. BRIEF DESCRIPTION OF THE DRAWINGS [0018] Figure 1 illustrates selected elements of a multiple access wireless communication system which may be configured in accordance with embodiments described in this document; [0019] Figure 2 illustrates in block diagram manner selected components of a wireless MIMO communication system that may be configured in accordance with embodiments described in this document; [0020] Figure 3 illustrates selected features of a symbol generated in or received by a terminal; [0021] Figure 4 illustrates selected components of a receiver of the terminal shown in Figure 2; [0022] Figure 5 illustrates selected components of a receive data processor of the terminal of Figure 2; [0023] Figure 6A illustrates selected components of a fast Fourier transform engine; [0024] Figure 6B illustrates selected details of a recursive implementation of the Fourier transform engine of Figure 6A; [0025] Figure 7A illustrates selected components of another fast Fourier transform engine with data normalization; [0026] Figure 7B illustrates selected details of a recursive implementation of the Fourier transform engine of Figure 7A; and [0027] Figure 8 illustrates selected steps and decisions of a process for operating the fast Fourier transform engine of Figure 7. DETAILED DESCRIPTION [0028] In this document, the words "embodiment," "variant," and similar expressions are used to refer to a particular apparatus, process, or article of manufacture, and not necessarily to the same apparatus, process, or article of manufacture. 
Thus, "one embodiment" (or a similar expression) used in one place or context may refer to a particular apparatus, process, or article of manufacture; the same or a similar expression in a different place may refer to a different apparatus, process, or article of manufacture. The expressions "alternative embodiment," "alternative variant," "alternatively," and similar phrases may be used to indicate one of a number of different possible embodiments or variants. The number of possible embodiments or variants is not necessarily limited to two or any other quantity. [0029] The word "exemplary" may be used herein to mean "serving as an example, instance, or illustration." Any embodiment or variant described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodimentsor variants. All of the embodiments and variants described in this description are exemplary embodiments and variants provided to enable persons skilled in the art to make and use the invention, and not necessarily to limit the scope of legal protection afforded the invention. [0030] "Tone" and "sub-carrier" are generally used interchangeably to indicate individual symbol-carrying tones in an OFDM or OFDMA system. [0031] "Gain control device" and "data normalization device" are used interchangeably. Such devices are described in the context of fast Fourier transform engines. [0032] The techniques described in this document may be used for various wireless communication networks, including CDMA networks, TDMA networks, FDMA networks, OFDM and OFDMA networks, Single-Carrier FDMA (SC-FDMA) networks, and other networks and peer-to-peer systems. The techniques may be used on both forward and reverse links. Further, the techniques are not necessarily limited to wireless or other communication systems, but may be used in any apparatus where signals are processed in a fast Fourier transform engine. The terms "networks" and "systems" are often used interchangeably. A CDMA network may implement a radio technology such as Universal Terrestrial Radio Access (UTRA), cdma2000, and other technologies. UTRA networks include Wideband-CDMA (W-CDMA) and Low Chip Rate (LCR) networks. The cdma2000 designates IS-2000, IS-95 and IS-856 standards. A TDMA network may implement a radio technology such as Global System for Mobile Communications (GSM). An OFDMA network may implement a radio technology such as Evolved UTRA (E-UTRA), IEEE 802.11, IEEE 802.16, IEEE 802.20, Flash-OFDM, and other technologies. UTRA, E-UTRA, and GSM are parts of Universal Mobile Telecommunication System (UMTS). Long Term Evolution (LTE) is a release of UMTS that uses E-UTRA. UTRA, E-UTRA, GSM, UMTS and LTE are described in documents from an organization known as the "3rd Generation Partnership Project" (3GPP). The cdma2000 standard is described in documents from an organization known as the "3rd Generation Partnership Project 2" (3GPP2). Certain aspects of the techniques are described in the context of LTE systems, and LTE terminology may be used in the description below, but the techniques may be applicable to other standards and technologies. [0033] Single carrier frequency division multiple access (SC-FDMA) is a communication technique which utilizes single carrier modulation and frequencydomain equalization. SC-FDMA systems typically have similar performance and essentially the same overall complexity as OFDMA system. 
SC-FDMA signals have a lower peak-to-average power ratio (PAPR) because of the technique's inherent single carrier structure. The SC-FDMA technique is attractive in many systems, especially in the reverse link communications where the lower PAPR benefits the mobile terminal in terms of transmit power efficiency. The SC-FDMA technique is currently a working assumption for the uplink multiple access scheme in 3GPP Long Term Evolution and Evolved UTRA. [0034] A multiple access wireless communication system 100 according to one embodiment is illustrated in Figure 1. An access point or a base transceiver station 101 includes multiple antenna groups, one group including antennae 104 and 106, another group including antennae 108 and 110, and an additional group including antennae 112 and 114. Although only two antennae are shown for each antenna group, more or fewer antennae may be included in any of the antenna groups. The BTS 101 may also include a single antenna group, or have only a single antenna. An access terminal (AT) 116 is in communication with the antennae 112 and 114, where antennae 112 and 114 transmit information to the access terminal 116 over a forward link 120, and receive information from the access terminal 116 over a reverse link 118. Another access terminal 122 is in communication with antennae 106 and 108, where the antennae 106 and 108 transmit information to the access terminal 122 over a forward link 126 and receive information from the access terminal 122 over a reverse link 124. In an FDD system, each of the communication links 118, 120, 124 and 126 may use a different frequency for communications between access terminals and a particular antenna or antenna group, as well as different frequencies for forward and reverse links. For example, the forward link 120 may use a different frequency than that used by the reverse link 118, and still another frequency than that used by the forward link 126. The use of different frequencies, however, is not necessarily a requirement of the invention. [0035] Each group of antennae and the area in which it is designed to communicate is often referred to as a sector. As shown in Figure 1, each of the antenna groups is designed to communicate to access terminals in a different sector of the area covered by the BTS 101. [0036] In communications over the forward links 120 and 126, the transmitting antennae of the BTS 101 use beamforming in order to improve the signal-to-noise ratio of the forward links for the different access terminals 116 and 122. Additionally, beamforming reduces interference for access terminals in neighboring cells, as compared to forward link transmissions through a single antenna to all its access terminals. Beamforming is also not necessarily a requirement of the invention. [0037] An access point or a base transceiver station may be a fixed station used for communicating with the terminals and may also be referred to as a Node B or by some other term. An access terminal may also be called a mobile unit, user equipment (UE), a wireless communication device, terminal, mobile terminal, or some other term. [0038] Figure 2 shows, in a block diagram form, selected components of an embodiment of a wireless MIMO communication system 200 that includes a transmitter system 210 of a base transceiver station and a receiver system 250 of an access terminal. [0039] At the transmitter system 210, traffic data for a number of data streams is provided by a data source 212 to a transmit (Tx) data processor 214. 
In an embodiment, each data stream is transmitted over a respective transmit antenna or antenna group. The Tx data processor 214 formats, codes, and interleaves the traffic data for each data stream based on a particular coding scheme selected for that data stream to provide coded data. The coded data for each data stream may be multiplexed with pilot data using OFDM techniques. The pilot data is a known data pattern that is processed in a known manner and may be used at the receiver system to estimate the physical channel response or transfer function. The multiplexed pilot and coded data for each data stream are then modulated (i.e., symbol mapped) based on a particular modulation scheme selected for that data stream, to obtain modulation symbols. The modulation scheme may be selected, for example, from binary phase-shift keying (BPSK), quadrature phase-shift keying (QPSK), M-ary Phase-Shift Keying (M-PSK), and multilevel quadrature amplitude modulation (M-QAM). The data rate, coding, and modulation for each data stream may be determined by instructions performed by a processor 230. [0040] The modulation symbols for all data streams are provided to a Tx MIMO processor 220, which may further process the modulation symbols (e.g., for OFDM). The Tx MIMO processor 220 then provides NT modulation symbol streams to NT transmitters (TMTRs) 222a through 222t. In certain embodiments, the Tx MIMO processor 220 applies beamforming weights to the symbols of the data streams and to the antennae from which the symbols are transmitted. [0041] Each transmitter 222 receives and processes a respective symbol stream to provide one or more analog signals, and further conditions (e.g., amplifies, filters, upconverts) the analog signals to provide a modulated signal suitable for transmission over its corresponding MIMO channel. The NT modulated signals from the transmitters 222a through 222t are transmitted from the NT antennae 224a through 224t, respectively. The antennae 224 may be the same as or different from the antennae 104-114 shown in Figure 1. [0042] At the receiver system 250, the transmitted modulated signals are received by NR antennae 252a through 252r, and the received signal from each antenna 252 is provided to a respective receiver (RCVR) 254a through 254r. Each of the receivers 254 conditions (e.g., filters, amplifies, downconverts) its respective received signal, digitizes the conditioned signal to provide samples, and further processes the samples to provide a corresponding received symbol stream. [0043] A receive (Rx) data processor 260 receives and processes the NR received symbol streams from the NR receivers 254, based on a particular receiver processing technique, to provide NT detected symbol streams. The Rx data processor 260 then demodulates, deinterleaves, and decodes each detected symbol stream to recover the traffic data of the data stream. The processing by the Rx data processor 260 is complementary to that performed by the Tx MIMO processor 220 and the Tx data processor 214 at the transmitter system 210. [0044] A processor 270 periodically determines which pre-coding matrix to use. The processor 270 formulates a reverse link message comprising a matrix index portion and a rank value portion. The reverse link message may include miscellaneous information regarding the communication link and/or the received data stream. 
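Referring back to the symbol mapping of paragraph [0039], a Gray-mapped QPSK constellation is one simple instance. The sketch below is an editorial illustration using one common mapping convention, not a mapping mandated by the patent or by any particular air-interface standard.

    import math

    # One common Gray-mapped QPSK convention (illustrative only).
    QPSK_POINTS = {
        (0, 0): complex(+1, +1),
        (0, 1): complex(-1, +1),
        (1, 1): complex(-1, -1),
        (1, 0): complex(+1, -1),
    }

    def map_qpsk(bits):
        # Map successive bit pairs to QPSK symbols, scaled so the
        # constellation has unit average power.
        scale = 1.0 / math.sqrt(2.0)
        return [QPSK_POINTS[(bits[i], bits[i + 1])] * scale
                for i in range(0, len(bits), 2)]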
[0045] The reverse link message is then processed by a Tx data processor 238, which also receives traffic data for a number of data streams from a data source 236. The traffic data and the reverse link message are modulated by a modulator 280, conditioned by transmitters 254a through 254r, and transmitted to the transmitter system 210. [0046] At the transmitter system 210, the modulated signals from the receiver system 250 are received by the antennae 224, conditioned by receivers 222, demodulated by a demodulator 240, and processed by an Rx data processor 242 to extract the reverse link messages transmitted by the receiver system 250. The processor 230 determines which pre-coding matrix to use for determining the beamforming weights, and processes the extracted message. [0047] Figure 3 illustrates selected features of an OFDM symbol 300 of data transmitted according to selected aspects of this disclosure. The symbol 300 begins at a time t = 0 and ends at a time tEND. The symbol includes a leading ramp portion 310, a data portion 320 (which may include payload or traffic data and certain overhead, such as a cyclic prefix), and a trailing ramp portion 330. The leading and trailing portions are generally present to smooth transitions from symbol to symbol and prevent spikes in transmitted power and the associated spectral spread of the transmitted signal. Figure 3 is merely an illustration of data according to the present disclosure, and other methods of transmission may be used. For example, the use of OFDM symbols each having a plurality of sub-carriers is not necessarily a requirement of the invention. [0048] Figure 4 illustrates selected details of a receiver 254 (which is one of the receivers 254a-254r shown in Figure 2). The receiver 254 receives signals from its associated antenna or antennae 252. Thus, the receiver 254a receives signals from an antenna (or antennae) 252a, while the receiver 254r receives signals from an antenna (or antennae) 252r. The illustration of Figure 4 and the associated description may apply to any and each of the receivers 254, for example, to the receiver 254a and to the receiver 254r. While some details of the architecture of the receiver 254 are not shown, it should be appreciated that many known and possibly later-developed architectures may be used. In various exemplary embodiments, the modules 410-440 can take the form of separate electronic components coupled together via a series of separate busses. In other embodiments, one or more of the various modules 410-440 can take the form of processors or separate servers coupled together via one or more networks. Additionally, it should be appreciated that each of the modules 410-440 advantageously can be realized using multiple computing devices employed in a cooperative fashion. It also should be appreciated that some of the modules 410-440 can take the form of software/firmware structures and routines residing in a memory to be executed or worked upon by a controller, or software/firmware routines or structures residing in separate memories in separate servers/computers being operated upon by different controllers. [0049] In operation, as signals are received by antenna 252-0 and/or antenna 252-1 (and/or any other antenna 252 associated with the receiver 254 shown in Figure 4), the analog front-end 410 is configured to accept the received signals, condition the signals, and provide the conditioned signals to a mixer 420. 
Front-end signal conditioning may include filtering the signals through one or more filters 412 in the front-end 410. [0050] The mixer 420 is configured to downconvert the conditioned signals from their received frequency spectrum to a lower baseband spectrum. The converted baseband signals are then provided to a sampler 430, which is configured to convert the analog baseband signals into digital data. One or more filters 432 may be used to filter the baseband signal further, either before or after sampling. Thus, the filter(s) 432 may be analog and/or digital, depending on whether they operate before or after sampling conversion. [0051] While ideal filters may introduce no phase delay, have a flat profile across all received frequencies, and may exhibit a perfect cutoff at any frequency, known realizable filters deviate from such "ideal" filter performance. The filters 412 and 432 may thus introduce distortion to the received signals. For example, one or both of the filters or filter sets 412 and 432 may introduce to the received signal frequency-dependent amplitude and/or phase distortions, such as pass-band amplitude and/or phase ripple. [0052] A timing recovery device 440 is configured to apply various algorithms to the received data to derive timing information from the signals. The timing recovery device 440 may operate independently from other such devices 440 in other receivers 254, or it may operate in conjunction with other timing recovery devices. In variants, the timing recovery device 440 receives analog data from the analog front end 410 or the mixer 420, or it receives digital data from the sampler 430, or both, for use with its algorithms. Because timing recovery may not always be perfect, there may be an inadvertent time offset Td present, which the timing recovery device 440 may eventually recognize and report. [0053] Figure 5 illustrates selected details of the Rx data processor 260 (from Figure 2), which here is configured to receive both timing information and sample data from the receivers 254. While some details of the architecture of the exemplary Rx data processor 260 are not shown, it should be appreciated that any known or possibly later-developed architectures may be used. In exemplary embodiments, the various modules 510-574 can take the form of separate electronic components coupled together via a series of separate busses. In other embodiments, one or more of the various modules 510-574 can take the form of processors or separate servers coupled together via one or more networks. Additionally, it should be appreciated that each of the modules 510-574 advantageously can be realized using multiple computing devices employed in a cooperative fashion. It also should be appreciated that some of the modules 510-574 can take the form of software/firmware structures and routines residing in a memory to be executed or worked upon by a controller, or software/firmware routines or structures residing in separate memories in separate servers/computers being operated upon by different controllers. [0054] The exemplary data processor 260 includes a timing adjustment block 510; an instruction processor block 520, which may be a sequential instruction machine such as a DSP or another processor controller; an input data sample buffer 530; a Fast Fourier Transform (FFT) engine 550 that includes an FFT control device 550a and an FFT engine proper 550b; a filter correction device 560; a phase ramp 562; a beacon sorter 564; and an output buffer 570. 
The instruction processor block 520 includes a real-time clock or counter (RTC) 522, an FFT address generator 524, and an FFT Engine Task List memory 526. The data sample buffer 530 includes separate blocks 532 and 534 for data associated with different antennae. The output buffer 570 similarly includes separate blocks 572 and 574 for output data associated with the different antennae. [0055] In operation, the timing adjustment block 510 is configured to receive the timing information, and to provide an output time offset Td to the instruction processor block 520. The time offset Td can further be passed on to the phase ramp 562. [0056] The input data sample buffer 530 is configured to receive sample data via one or more antennae 252 of the respective receiver 254, possibly through the front end 410, and to provide buffered data samples to the FFT engine 550. [0057] The FFT address generator 524 of the processor block 520 is configured to generate addresses for use by the FFT engine 550. The control block 550a of the FFT engine 550 receives the addresses generated by the FFT address generator 524 and the commands and variables stored in the FFT Engine Task List 526, and based on this received information controls the FFT engine proper 550b, so that the FFT engine 550 converts the buffered data samples into frequency-domain data from which communication channels may be resolved. [0058] In the hardware/software/firmware architecture described above, the FFT Engine Task List 526 may store various instructions, various variables, and/or operational data for use by the FFT control block 550a. As non-limiting examples, the FFT Engine Task List 526 may store variable(s) representing a sample start address for a transformation; instructions for reading or supplying the sample start address; variable(s) representing the number of data symbols to skip before or between executions of the Fourier transform; instructions for skipping a number of data symbols before or between executions; variable(s) representing FFT Length; variable(s) representing the number of FFT stages or butterflies to be executed; instructions for executing multiple FFT stages; variable(s) representing scaling or gain control for each FFT stage to be executed (as will be discussed in more detail below); instructions for executing scaling at or following each FFT stage/butterfly; variable(s) representing a start time for each FFT operation to be executed; instructions for starting an FFT operation; variable(s) indicating a bit for instant start; and/or instructions for performing an instant start. These are merely examples, and other instructions, variables, and/or data items may be stored in the FFT Engine Task List 526. [0059] The contents of the FFT Engine Task List 526 can be held in firmware or other memory, and can be updated and modified with new or different instructions, variables, and/or data as needed. [0060] The instructions, variables, and/or operational data held in the FFT Engine Task List 526 can be requested by the FFT control block 550a and stored in registers of the block 550a, or can be presented to the FFT control block 550a by the instruction processor 520 without a specific request from the block 550a. [0061] After the FFT engine 550 has converted the received and buffered time domain data samples into a block of frequency-domain data, a total of k rows of OFDM data are provided to the filter correction device 560. 
Each orthogonal frequency component will have resolved values for its frequency fk and time t, as represented by the following equation: [0062] I + jQ = A exp(−j 2π fk t), where A designates amplitude, I designates the in-phase part of the frequency component, and Q designates the quadrature part of the frequency component. [0063] Note that in practical operation, the FFT data may require amplitude and/or phase corrections. [0064] We now proceed to describe the mechanisms for the FFT engine to accommodate a large dynamic range of the received time-domain signals. [0065] Figure 6A illustrates selected elements of an FFT engine 600 (which may be the same as the FFT engine 550). The FFT engine 600 includes a plurality of N internal FFT stages or butterflies 610_n. (Recall discussion of butterflies above.) Each butterfly 610 is followed by a buffer 620 configured to receive and store the output of the nearest preceding butterfly 610, and to provide the stored data as input to the nearest following butterfly 610. Although Figure 6A shows only two butterflies 610 and two associated buffers 620, the FFT engine 600 may have a greater or a smaller number of the butterflies 610, with 3-4 being a typical number of butterflies in many designs. For example, the FFT engine 600 may include four, eight, or sixteen butterflies and an equal number of their associated buffers. [0066] Successive butterflies 610 are employed in successive stages of the FFT process. Thus, butterfly 610_1 is used in the first stage of the FFT engine 600, and its output is stored in the buffer 620_1. The contents of the buffer 620_1 are then taken up by the butterfly 610_2 in the second stage of the FFT engine 600, and its output is stored in the buffer 620_2. The contents of the buffer 620_2 are in turn provided to the input of the butterfly 610_3 in the third stage of the FFT engine 600, and the output of the butterfly 610_3 is stored in the buffer 620_3. [0067] The symbols received by the FFT engine 600 across the different FFT subchannels can have a large dynamic range, because of factors that include frequency domain channel variations, differences in the transmitted power among different subchannels or tones, and perhaps other factors. Regarding the differences in the transmitted power, beacons in some embodiments may be 30 dB stronger than other sub-channels, and forward link control channel tones may be 0 to 15 dB stronger than other sub-channels. If the FFT engine 600 does not normalize signals, its output can become saturated, leading to distortion of symbols on the saturated and adjacent subchannels, and consequent poor demodulation performance on such sub-channels. Further, if the FFT engine 600 does not normalize the signals with the large dynamic range, the storage size of the buffers 620 and a buffer (or buffers in case of double buffering) configured to receive the output of the FFT engine 600 may be large relative to analogous buffers in systems configured to process signals having a smaller dynamic range. [0068] It should be noted that in the FFT engine 600 multiple butterfly-buffer stages may be replaced with one such stage configured to operate successively. This is illustrated in Figure 6B, which shows a single stage with one butterfly 610_1, 2 ... N that performs the functions of two or more (including all) of the butterflies 610 shown in Figure 6A, and one buffer 620_1, 2 ... N that performs the functions of two or more (including all) of the buffers 620 shown in Figure 6A. 
Here, the stage is configured successively as the first stage (butterfly 610_1 and buffer 620_1), then as the second stage (butterfly 610_2 and buffer 620_2), and so on, with output of a preceding stage being fed into input of the following stage. We may refer to such a configuration as a recursive FFT engine configuration. [0069] Figure 7A illustrates another FFT engine implementation, which uses data normalization devices or gain control devices at the outputs of each of the butterflies. Accordingly, the FFT engine 700 shown in Figure 7A (which may be the same as the FFT engine 550) has a plurality of N internal FFT stages or butterflies 710_n. Each butterfly 710 is followed by its associated data normalization or gain control device 730 and a buffer 720. Thus, the stages of the engine 700 are configured in series to process a signal received at the input to the first stage 710_1/720_1/730_1 successively through the stages and then output from the output of the last stage 710_N/720_N/730_N. Note that the gain control device 730_n is interposed between its associated (nearest preceding) butterfly 710_n and the buffer 720_n associated with that butterfly. The gain control device 730_n normalizes (scales to a predetermined amplitude scale/range) the output of its associated butterfly 710_n, for example by multiplying or dividing the output in the digital domain, and provides the resulting output to the nearest following buffer 720_n, as shown. The buffer 720_n is configured to receive and store the output data of the nearest preceding gain control device 730_n, and to provide the stored data to the nearest following butterfly 710_n+1. Although Figure 7A shows only two butterflies 710, two buffers 720, and two gain control devices 730, the FFT engine 700 may have a greater or a smaller number of the butterflies, buffers, and data normalization devices. Some embodiments contain two, three, four, eight, or sixteen butterflies, and equal numbers of their associated buffers and data normalization devices. Moreover, the number of stages may be configurable by the information stored in the FFT Engine Task List 526. [0070] Successive butterflies 710 are employed in successive stages of the FFT process. Thus, the butterfly 710_1 is used in the first stage of the FFT engine 700, and its output is sent to the gain control device 730_1 and then (after normalization/scaling) stored in the buffer 720_1. The contents of the buffer 720_1 are taken up by the butterfly 710_2 in the second stage of the FFT engine 700, and its output is provided to the gain control device 730_2 and stored in the buffer 720_2, again after appropriate scaling. The contents of the buffer 720_2 are in turn provided to the input of the butterfly 710_3 in the third stage of the FFT engine 700, and the output of the butterfly 710_3 is sent to the gain control device 730_3 and stored in the buffer 720_3. And so it continues through the last stage N with its butterfly 710_N, gain control device 730_N, and buffer 720_N. [0071] In operation, the first butterfly stage is executed, for example in the butterfly 710_1. The output data from the butterfly 710_1 is normalized by the device 730_1, e.g., so that all signals within the current data block fall between predetermined maximum and minimum amplitudes of the signal. Next, the normalized data is stored in the buffer 720_1. The buffered data from the buffer 720_1 is sent to the next butterfly 710_2, for the next FFT stage, and the steps are then repeated (with changes in the subscripts) for the following butterflies. 
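The data flow just described can be summarized in software. The sketch below is a behavioral model of the Figure 7A arrangement (an editorial illustration, not the claimed hardware): it runs one stage of radix-2 butterflies at a time and, after each stage, scales the whole intermediate block so that its largest magnitude fits a target range, recording the factor applied by each "gain control device."

    import cmath

    def fft_normalized(x, peak_limit=1.0):
        # Staged radix-2 FFT with per-stage block normalization (a model
        # of Figure 7A).  Returns the normalized output and the list of
        # per-stage scale factors; dividing the output by the product of
        # the factors recovers the unscaled transform.
        n = len(x)
        assert n > 0 and (n & (n - 1)) == 0, "length must be a power of two"
        bits = n.bit_length() - 1
        # Bit-reversal ordering, as in the earlier sketch.
        out = [x[int(format(i, '0%db' % bits)[::-1], 2)] for i in range(n)]
        factors = []
        size = 2
        while size <= n:                       # one pass per stage
            w_step = cmath.exp(-2j * cmath.pi / size)
            for start in range(0, n, size):    # butterflies of this stage
                w = 1.0
                for k in range(size // 2):
                    a = out[start + k]
                    b = out[start + k + size // 2] * w
                    out[start + k] = a + b
                    out[start + k + size // 2] = a - b
                    w *= w_step
            peak = max(abs(v) for v in out)
            g = 1.0 if peak <= peak_limit else peak_limit / peak
            out = [v * g for v in out]         # the 'gain control device'
            factors.append(g)                  # remembered for later use
            size *= 2
        return out, factors

In a fixed-point hardware stage the same scaling keeps intermediate results within the stage's bit width; here floating-point values are used purely for clarity.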
[0072] In some embodiments, not every butterfly is followed by a data normalization device. As non-limiting examples, normalization may be accomplished within the FFT engine 700 by way of digital gain control at each stage of the FFT engine, or at the input and output of the FFT engine, at one or more intermediate stages, or any combination thereof. In a particular embodiment, normalization is performed following every butterfly except for the last one. [0073] It should be noted that in the FFT engine 700 multiple butterfly-buffer-normalization device stages may be replaced with one such stage configured to operate successively. This is illustrated in Figure 7B, which shows a single stage with one butterfly 710_1, 2 ... N that performs the functions of two or more (including all) of the butterflies 710 shown in Figure 7A, one buffer 720_1, 2 ... N that performs the functions of two or more (including all) of the buffers 720 shown in Figure 7A, and one gain control device 730_1, 2 ... N that performs the functions of two or more (including all) of the gain control devices 730 shown in Figure 7A. Here, the stage is configured successively as the first stage (butterfly 710_1, buffer 720_1, and gain control device 730_1), then as the second stage (butterfly 710_2, buffer 720_2, and gain control device 730_2), and so on, with output of a preceding stage being fed into input of the following stage. As in the case of the FFT engine 600, we may refer to such a configuration as a recursive FFT engine configuration. [0074] The normalization factors of all the data normalization devices are stored, so that the output of the FFT engine 700 can be easily converted to the FFT-transformed data based on the output of the last stage of the FFT engine and the normalization factors (settings of the gain control devices used when the data were processed in the FFT engine), as should be understood by a person of average skill in the art after perusal of this document. [0075] A separate block or module may be responsible for configuring the data normalization devices. Alternatively, the function of controlling and configuring the data normalization devices may be distributed, for example, contained in the data normalization devices themselves. Or the function may be the domain of another processor also configured to perform additional functions. [0076] Because the devices 730 normalize the data operated upon by the butterflies 710, the FFT engine is configured to accommodate a large dynamic range of the input signal with the butterflies 710, the buffers 720, and any other buffers at the output of the FFT engine (such as the buffer used for double buffering) being configured to accommodate a smaller dynamic range of the signal. Normalization thus lowers the FFT bit-width, which can ultimately lead to FFT timeline improvement and FFT area reduction on the chip. The normalization can also lead to a reduction of symbol buffer bit-widths and hence overall modem area reduction. The normalization can provide area and timeline improvements offsetting the cost and increased complexity of including the gain control devices in the FFT engine 700. [0077] In some embodiments, a 16-bit signal range at the input to the FFT engine is processed using twelve- or eight-bit wide stages, but different bit-widths of the input and output of the FFT engine also fall within the scope of the invention. 
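The conversion back to the true FFT values mentioned in paragraph [0074] is then a single rescaling by the stored settings; a sketch, assuming the fft_normalized model above:

    import math

    def denormalize(block, factors):
        # Undo the per-stage scaling recorded by fft_normalized(): the
        # true transform is the engine output divided by the product of
        # all of the stage gains used while the block was processed.
        total = math.prod(factors)
        return [v / total for v in block]

Storing the small factors list (or, per claim 27, a pointer to it) alongside the buffered block is what allows the block itself to be kept at a reduced bit width.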
[0078] To buffer the FFT data, the output of the FFT engine 700 is stored together with the gain settings of all the gain control devices 730 used for processing the data. Typically, the range of the signal is relatively stable for some duration, such as one, two, three, four, five, and even more OFDM symbols. Consequently, there may not be a need to change the gain settings for the corresponding time duration, and no need to store the gain settings for each set of FFT data during the corresponding time duration. In embodiments, the buffered FFT-processed data is stored together with a pointer to the storage location of the gain settings used in the FFT engine 700 to process the data. In embodiments, the gain settings are updated once per OFDM symbol, once for every two OFDM symbols, once for every three OFDM symbols, once for every four OFDM symbols, or once for every five OFDM symbols. Updating the gain settings for every M symbols for M falling outside the one through five range is also within the scope of the invention, and the number M may be whole or fractional. In embodiments, updating is performed on an as-needed basis, that is, one or more of the gain settings are changed (and stored) when the block of signal input into the FFT engine 700 cannot be processed using the current settings without loss of information due to saturation or lack of resolution, given the bit-width of the component butterflies 710 and buffers 720. [0079] In some embodiments, the gain setting of each of the butterflies 710 has two, three, four, or five bits of resolution. Other numbers of bits for setting the gain of the butterflies 710 may be used in some other embodiments. [0080] In some embodiments, the resolution of the gain setting is varied in steps corresponding to factors of 2, for example, 1, 2, 4, 8, 16 (or corresponding fractions or multiples of these numbers). Advantageously, the use of such binary steps in setting the gains allows easy normalization (multiplication and division) of the data through left and right bit shifting operations. [0081] Another technique for reducing buffer and FFT engine sizes is to lower the bit-width of the FFT engine and symbol buffer output (at the output of the FFT engine), and at the same time lower the sample server (which provides the input to the FFT engine) bit width in response to varying quality of the signal. Note that when the received signal is of high quality, the dynamic range in the frequency domain at the output of the FFT engine is generally low for a mobile terminal. This is because the base transceiver station is informed that the mobile station has a high quality signal and consequently the base transceiver station does not significantly vary the forward link power. In high quality scenarios, therefore, the bit width of the FFT engine and/or the symbol buffer can be lowered. In low quality signal scenarios, the dynamic range may be relatively much higher. This is so, for example, because the base transceiver station may transmit some tones with high power in order to connect with the mobile terminal. But the low quality of the received signal means that the bit width of the sample server may be lowered without significant quality loss in the received information, because the resolution at the bottom of the range typically reflects mostly noise and/or interference. Lowered bit width of the sample server in turn lowers the bit widths needed for the FFT engine and symbol buffer. 
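When the gains are restricted to binary steps as in paragraph [0080], normalization of fixed-point data reduces to left and right bit shifts. A sketch, assuming a 12-bit signed stage width (one of the widths mentioned in paragraph [0077]):

    def normalize_binary(block, width=12):
        # Shift a block of signed fixed-point integers so the largest
        # magnitude just fits in 'width' signed bits, using only shifts.
        # Returns the shifted block and the power-of-two exponent applied;
        # the true value of each entry is (shifted value) * 2**(-shift).
        limit = (1 << (width - 1)) - 1         # e.g. 2047 for 12 bits
        peak = max(abs(v) for v in block) if block else 0
        shift = 0
        while peak > limit:                    # too large: shift right
            peak >>= 1
            shift -= 1
        while peak and (peak << 1) <= limit:   # headroom: shift left
            peak <<= 1
            shift += 1
        shifted = [v << shift if shift >= 0 else v >> -shift
                   for v in block]             # arithmetic shifts
        return shifted, shift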
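The sample-server back-off of the preceding paragraph can likewise be sketched as dropping least-significant bits when the monitored quality falls below a threshold; the 11-bit full width echoes the embodiment mentioned below in paragraph [0083], while the threshold and reduced width here are merely illustrative assumptions.

    def sample_server_output(sample, snr_db, threshold_db=15.0,
                             full_width=11, reduced_width=8):
        # Emit the sample at full width when the quality metric (here an
        # SNR in dB) is at or above the threshold; otherwise discard the
        # least significant bits, since at low quality they carry mostly
        # noise and interference.  Threshold and widths are illustrative.
        width = full_width if snr_db >= threshold_db else reduced_width
        return sample >> (full_width - width)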
[0082] Thus, in embodiments, the bit width of the signal fed into the FFT engine is varied as a function of the signal-to-noise ratio (SNR), signal-to-noise-and-interference ratio (SINR), carrier-to-interference (C/I), or another similar metric of the received signal that is processed in the FFT engine. The variation of the input bit width may be performed gradually, continually, or in one or more steps. In a variant, the bit width of the input signal (sample server output) is set to a first predetermined length if the monitored signal quality metric (e.g., SNR, SINR, C/I) is above a predetermined level; the sample server bit width is set to a second predetermined length that is lower than the first predetermined length when the signal quality metric falls below the predetermined level. The lowered sample server bit width is achieved by dropping the least significant bits of the sample server. [0083] The sample server back-off technique described in the last two paragraphs may be used together with the above-described normalization technique, or instead of the normalization technique. One embodiment uses an 11-bit sample server and a 14-bit FFT engine and symbol buffer. All of these techniques can lead to a reduction in distortion, and hence performance improvement. Moreover, the last-mentioned technique can accommodate larger sub-channel power boosts in low carrier-to-interference scenarios, without incurring an increase in the FFT and/or symbol buffer bit-widths. [0084] Figure 8 illustrates selected steps and/or decision blocks of an exemplary process 800 used to operate an FFT engine, such as the FFT engine 700 shown in Figure 7. [0085] The process flow begins at a flow point 801 and proceeds to step 805, where the data gain settings of one, a plurality, or all of the data normalization devices are determined for the next processing period or data block, such as a data block corresponding to one or several OFDM symbols. [0086] In step 810, the gains of the data normalization devices are set or programmed in accordance with the determination made in the step 805. [0087] In step 815, a block of data (e.g., an OFDM symbol, a fraction of an OFDM symbol, or a multiple of an OFDM symbol) is processed in the FFT engine using the settings made in the step 810. 
The decision may be based on the amount of data processed with the current gain settings (for example, whether one OFDM symbol or another predetermined number of OFDM symbols have been processed with the current gain settings), or made on as-needed basis (for example, signal amplitude range exceeds predetermined upper limit or falls below a predetermined lower limit). [0093] If the gain settings need not be updated, as determined in the decision block 840, the process flow returns to the step 815 to operate on another (e.g., next) block of data using the current gain settings. Otherwise, the process flow return to the step 805 to determine the new gain settings of the butterflies 710 for processing another (e.g., next) block of data in the FFT engine. [0094] Although steps and decision blocks of various methods may have been described serially in this disclosure, some of these steps and decisions may be performed by separate elements in conjunction or in parallel, asynchronously or synchronously, in a pipelined manner, or otherwise. There is no particular requirement that the steps and decisions be performed in the same order in which this description lists them, except where explicitly so indicated, otherwise made clear from the context, or inherently required. It should be noted, however, that in selected variants the steps and decisions are performed in the particular sequences described above and/or shown in theaccompanying Figures. Furthermore, not every illustrated step and decision may be used in every system, while some steps and decisions that have not been specifically illustrated may be desirable in some systems. [0095] It should be noted that, in aspects, the inventive concepts disclosed may be used on forward links, reverse links, peer-to-peer links, and in other non-multiple access contexts. It should also be noted that the communication techniques that are described in this document may be used for unidirectional traffic transmissions, as well as for bidirectional traffic transmissions. [0096] Those of skill in the art would also understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. [0097] Those of skill in the art would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments and variants disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To show clearly this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps may have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware, software, or combination of hardware and software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. 
[0098] The various illustrative logical blocks, modules, and circuits described in connection with the embodiments and variants disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or statemachine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. [0099] In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer- readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and blu-ray disc where "disks" usually reproduce data magnetically, while "discs" reproduce data optically with lasers and LEDs. Combinations of the above should also be included within the scope of computer- readable media. [00100] The previous description of the disclosed embodiments and variants is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments. Thus, the present invention is not intended to be limited to the embodiments and variants shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. |
A method of forming a semiconductor device with n-channel and p-channel transistors with optimum gate to drain overlap capacitances for each of the different types of transistors uses differential spacing on gate electrodes for the respective transistors. A first offset spacer is formed on the gate electrodes and an n-channel extension implant is performed to create source/drain extensions for the n-channel transistors spaced an optimum distance away from the gate electrodes. Second offset spacers are formed on the first offset spacers, and a p-channel source/drain extension implant is performed to create source/drain extensions for the p-channel transistors. The increased spacing of the source/drain extension implants away from the gate electrodes in the p-channel transistors accounts for the faster diffusion of the p-type dopants in comparison to the n-type dopants. |
What is claimed is: 1. A method of forming n-channel and p-channel transistors on the same substrate, comprising the steps of: forming first offset spacers on n-channel transistor gate electrodes and on p-channel gate electrodes, the first offset spacers having a first spacer width; forming source/drain extensions in the n-channel transistors by implanting n-type dopants spaced from the n-channel gate electrodes by the first spacer width, the first offset spacers masking implantation into the substrate directly beneath the first offset spacers; forming second offset spacers on the first offset spacers to form offset spacer pairs after the forming of the source/drain extensions in the n-channel transistors, each second offset spacer having a second spacer width, the width of each offset spacer pair being equal to the first spacer width plus the second spacer width; and forming source/drain extensions in the p-channel transistors by implanting p-type dopants spaced from the p-channel gate electrodes by the offset spacer pair width, the first and second offset spacers masking implantation into the substrate directly beneath the first and second offset spacers. 2. The method of claim 1, further comprising forming sidewall spacers on the second offset spacers; forming source/drain regions in the n-channel transistors by implanting n-type dopants; and forming source/drain regions in the p-channel transistors by implanting p-type dopants. 3. The method of claim 2, wherein the first offset spacer has a width between about 60 Å and about 180 Å. 4. The method of claim 3, wherein the second offset spacer has a width between about 120 Å and about 240 Å. 5. The method of claim 1, further comprising forming a liner oxide on the gate electrode, the first offset spacers and the substrate after the source/drain extensions in the n-channel transistors are formed and prior to the forming of the second offset spacers. |
FIELD OF THE INVENTION The present invention relates to the field of semiconductor manufacturing, and more particularly, to the formation of n-channel and p-channel transistors with reduced gate overlap capacitance. BACKGROUND OF THE INVENTION Fabrication of a semiconductor device and an integrated circuit thereof begins with a semiconductor substrate and employs film formation, ion implantation, photolithographic, etching and deposition techniques to form various structural features in or on a semiconductor substrate to attain individual circuit components which are then interconnected to ultimately form an integrated semiconductor device. Escalating requirements for high densification and performance associated with ultra large-scale integration (ULSI) semiconductor devices require smaller design features, increased transistor and circuit speed, high reliability and increased manufacturing throughput for competitiveness. As the devices and features shrink, and as the drive for higher performing devices escalates, new problems are discovered that require new methods of fabrication or new arrangements or both. There is a demand for large-scale and ultra large-scale integration devices employing high performance metal-oxide-semiconductor (MOS) devices. MOS devices typically comprise a pair of ion implanted source/drain regions in a semiconductor substrate and a channel region separating the source/drain regions. Above the channel region is typically a thin gate oxide and a conductive gate comprising conductive polysilicon or another conductive material. In a typical integrated circuit, there are a plurality of MOS devices of different conductivity types, such as n-type and p-type, and complementary MOS (CMOS) devices employing both p-channel and n-channel devices formed on a common substrate. CMOS technology offers advantages of significantly reduced power density and dissipation as well as reliability, circuit performance and cost advantages. As the demand has increased for semiconductor chips that offer more functions per chip and shorter times for performing those functions, semiconductor device dimensions have been pushed deeper and deeper into the sub-micron regime. Smaller devices readily translate into more available area for packing more functional circuitry onto a single chip. Smaller devices are also inherently advantageous in terms of shorter switching times. There are certain factors, such as parasitic device capacitance, that impact device switching times. One relevant component of parasitic device capacitance is the gate to drain overlap capacitance, which is also referred to as "Miller capacitance". The gate to drain overlap capacitance can have a significant impact on device switching speed. It is important to obtain sufficient gate overlap of the source/drains for maintaining low channel resistance, while still minimizing the gate to drain overlap capacitance. One of the methods that has been employed involves the use of offset spacers on the gate electrodes during source/drain extension implantation steps. The offset spacers act as a mask to prevent implantation of the dopants into the substrate directly beneath the spacers and thus increase the separation between the source/drain extensions and the gate electrode. The diffusivity in silicon of boron, a p-type dopant, is significantly greater than the diffusivity of arsenic, an n-type dopant. This creates a concern in semiconductor devices that contain both n-channel and p-channel transistors. 
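For a rough sense of the quantity at issue, the gate to drain overlap capacitance is commonly approximated with a first-order parallel-plate estimate (a textbook approximation, not a formula taken from this disclosure):

$$C_{ov} \approx W \cdot L_{ov} \cdot C_{ox}, \qquad C_{ox} = \frac{\varepsilon_{ox}}{t_{ox}}$$

where $W$ is the gate width, $L_{ov}$ is the lateral overlap of the source/drain extension under the gate (set by the implant offset and subsequent dopant diffusion), $\varepsilon_{ox}$ is the permittivity of the gate oxide, and $t_{ox}$ is the gate oxide thickness. Reducing $L_{ov}$, for example by widening an offset spacer, reduces $C_{ov}$ directly, which is why the spacer width that controls dopant placement must be tuned separately for dopants that diffuse at different rates.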
The formation of an offset spacer that minimizes the overlap capacitance will be optimized for only one type of transistor (e.g., n-channel) and not the other type of transistor (e.g., p-channel). In other words, providing an offset spacer with the optimum width to optimize the gate to drain overlap capacitance for an n-channel transistor will not provide the optimum spacing for a p-channel transistor, due to the faster diffusion of boron in silicon. SUMMARY OF THE INVENTION There is a need for a method of producing n-channel and p-channel transistors on the same chip in a manner that allows optimization of the gate to drain overlap capacitance for each of the different types of transistors on the chip. These and other needs are met by embodiments of the present invention which provide a method of forming n-channel and p-channel transistors on the same substrate, comprising the steps of forming source/drain extensions in n-channel transistors by implanting n-type dopants a first distance away from first gate electrodes. Source/drain extensions are formed in the p-channel transistors by implanting p-type dopants a second distance away from second gate electrodes, the second distance being greater than the first distance. By implanting the p-type dopants into the substrate a distance that is further away from the gate electrodes than the distance at which the n-type dopants are implanted, the faster diffusion of the p-type dopants is accommodated, thereby allowing optimization of the gate to drain overlap capacitance for both the n-channel transistors and the p-channel transistors. In certain embodiments of the invention, the n-type dopants are implanted in accordance with a first spacer width, and the p-type dopants are implanted in accordance with a second spacer width. In certain embodiments of the invention, the first spacer width is equal to the width of a first offset spacer on the gate electrode of the n-channel and p-channel transistors. The second spacer width is equal to the width of the first offset spacers plus the second offset spacers that are formed on the first offset spacers to form offset spacer pairs. The other stated needs are also met by embodiments of the present invention which provide a method of forming a semiconductor device with a substrate and n-channel and p-channel transistors. This method comprises the steps of forming first offset spacers on the gate electrodes of the n-channel and p-channel transistors. Source/drain extensions are implanted into the substrate at only the n-channel transistors, with the first offset spacers masking implantation into the substrate directly beneath the first offset spacers. Second offset spacers are formed on the first offset spacers. Source/drain extensions are then implanted into the substrate at only the p-channel transistors. The first and second offset spacers mask the implantation into the substrate directly beneath the first and second offset spacers. The foregoing and other features, aspects and advantages of the present invention will become more apparent from the following detailed description of the present invention when taken in conjunction with the accompanying drawings. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a cross-sectional, schematic depiction of n-channel and p-channel transistors on a semiconductor device, during one stage of manufacturing in accordance with embodiments of the present invention. FIG. 2 shows the structure of FIG. 
1 after the formation of a first offset spacer in accordance with embodiments of the present invention. FIG. 3 depicts the structure of FIG. 2 following an extension implant into the n-channel devices to form a source/drain extension, in accordance with embodiments of the present invention. FIG. 4 shows the structure of FIG. 3 following the formation of a second offset spacer over the n-channel and the p-channel devices, in accordance with embodiments of the present invention. FIG. 5 depicts the structure of FIG. 4 following a p-channel source/drain extension implant, in accordance with embodiments of the present invention. FIG. 6 shows the structure of FIG. 5 after sidewall spacers have been formed over the n-channel and p-channel transistors, in accordance with embodiments of the present invention. FIG. 7 shows the structure of FIG. 6 after an n-channel source/drain deep implant, in accordance with embodiments of the present invention. FIG. 8 shows the structure of FIG. 7 after a p-channel source/drain deep implant, in accordance with embodiments of the present invention. FIG. 9 shows the structure of FIG. 8, depicting the final junction shape. FIG. 10 shows a cross-section of a semiconductor device formation in which strain is generated in an SOI film by source/drain oxidation. FIG. 11 depicts the structure of FIG. 10 after the oxidation process is completed. DETAILED DESCRIPTION OF THE INVENTION The present invention addresses and solves problems related to the reduction of gate to drain overlap capacitance, and in particular to the problems caused by the differential diffusion rates of p-type dopants and n-type dopants in silicon. The present invention optimizes the overlap capacitance of the n-channel transistors and the p-channel transistors, respectively, by implanting the dopants of the source/drain extensions at different spacings from the gate electrodes. This is accomplished by forming a first offset spacer on the gate electrode, and creating a source/drain extension implant in only the n-channel transistors. Second offset spacers are formed on the first offset spacers, and source/drain extension implants are created in the p-channel transistors. Hence, the source/drain extension implants in the p-channel transistors are spaced further from the gate electrode than the source/drain extensions in the n-channel transistors. This accounts for the faster diffusion of p-type dopants, such as boron. This permits the overlap capacitance to be optimized for both the n-channel and p-channel transistors. FIG. 1 depicts a cross-sectional, schematic depiction of one of the n-channel transistors and one of the p-channel transistors during one stage of manufacturing in accordance with the present invention. Except where otherwise noted, the following description employs conventional processing methodologies to form and etch the layers, and implant the dopants into the substrate. As shown in FIG. 1, a substrate 10 forms a common substrate for the n-channel and p-channel transistors. The n-channel transistor 12 has a gate electrode 16, as does the p-channel transistor 14. The gate electrodes 16 are formed in a conventional manner, as by deposition of a polysilicon gate layer over the substrate 10 and conventional photolithographic and etching techniques. In FIG. 2, first offset spacers 18 are formed on all the gate electrodes 16, in both the p-channel 14 and n-channel 12 transistors. 
The first offset spacers 18 may be made of a conventional spacer material, such as silicon nitride or silicon oxide, for example, but other materials may also be used, such as silicon oxynitride, for example. The formation of the first offset spacers 18 includes deposition of a first spacer material in a first spacer layer (not shown) over the entire substrate 10 and the gate electrodes 16. The thickness of the first spacer layer may be selected so that the first offset spacers 18 have a desired width after etching to optimize the gate to drain overlap capacitance of the n-channel transistors. For example, the thickness of the first spacer layer may be between about 100 Å and about 300 Å. After conventional anisotropic etching, such as a reactive ion etch, the first offset spacers 18 are formed on the substrate 10 with a width of between about 60 Å and about 180 Å. This spacing is normally considered adequate for an offset spacer for n-channel transistors to provide the optimum gate to drain overlap capacitance. As can be seen from this example, the anisotropic etching produces a spacer 18 that has a width of approximately 60% of the thickness of the spacer layer. Greater or lesser thicknesses of the spacer layer, or variation in the etching techniques, may produce widths of offset spacers that are tailored to produce a desired overlap capacitance. Following the formation of the first offset spacers 18, as depicted in FIG. 2, an n-channel source/drain extension implant is performed by conventional techniques, as depicted in FIG. 3. The p-channel transistors 14 are masked during the implantation process to protect the p-channel transistors 14 from implantation of n-type dopants. Ion implantation, for example, may be performed to implant n-type dopants, such as arsenic, into the substrate 10. The implanted dopants create source/drain extensions 20 for the n-channel transistors 12. The first offset spacers 18 mask the substrate 10 to prevent ion implantation of the n-type dopants directly beneath the first offset spacers 18 in the n-channel transistor 12. The width of the first offset spacers 18 is optimized for the n-channel transistors 12. A conventional dosage and energy for the n-channel transistor source/drain extension implant may be employed to create the source/drain extensions 20. Following the source/drain extension implantation process, the mask over the p-channel transistors 14 is removed and a second spacer layer (not shown) is deposited over the substrate 10 and the n-channel transistors 12 and the p-channel transistors 14. The second spacer layer is then etched in a conventional, anisotropic manner, to form second offset spacers 22 on the first offset spacers 18 of both the n-channel transistors 12 and the p-channel transistors 14. Again, a conventional spacer material, such as silicon nitride or silicon oxide, may be used to form the second offset spacers 22. The thickness of the second spacer layer may be tailored such that the width of the second offset spacer 22 is optimized to account for the faster diffusivity in silicon of p-type dopants. In other words, after etching, the offset spacer pair 24, formed by the first offset spacer 18 and the second offset spacer 22, should have a width that is selected to optimize the gate to drain overlap capacitance of p-channel transistors, taking into account the faster diffusion of p-type dopants. In accordance with certain embodiments of the invention, the thickness of the second spacer layer is between about 200 Å and about 400 Å. 
This creates second offset spacers 22 having a width of between about 120 Å and about 240 Å. Hence, the combined width of an offset spacer pair is between about 180 Å and about 420 Å. In certain preferred embodiments, the combined width of the offset spacer pairs 24 is between about 180 Å and about 300 Å. In certain embodiments of the invention, a liner oxide (not shown) about 100 Å thick is formed over the substrate 10, the first offset spacers 18 and the gate electrode 16 prior to the deposition of the second spacer layer. The liner oxide may be deposited by LPCVD (low pressure chemical vapor deposition) or PECVD (plasma enhanced chemical vapor deposition), for example. The liner oxide is not depicted in the embodiments of FIGS. 1-8, but may be used to improve the overall quality of the transistors. Following the formation of the second offset spacers 22, the n-channel transistors 12 are masked off and a source/drain extension implant step is performed to create source/drain extension regions 26 in the p-channel transistors 14. The offset spacer pairs 24, comprising the first offset spacers 18 and the second offset spacers 22, mask the substrate 10 underneath the first offset spacers 18 and the second offset spacers 22. Hence, the source/drain extensions 26 in the p-channel transistors 14 are spaced further from the gate electrodes 16 than the source/drain extensions 20 in the n-channel transistors 12. This accounts for the faster diffusion rates of boron and optimizes the overlap capacitance of the p-channel transistors 14. Conventional dosages and implantation energies for the p-type dopants are employed to create the source/drain extensions 26. In FIG. 6, sidewall spacers 28 are formed on the second offset spacers 22. The sidewall spacers 28 are formed on both the n-channel and p-channel transistors 12, 14. The sidewall spacers 28 may be formed of a conventional material, such as silicon oxide, silicon nitride, or silicon oxynitride, for example. A spacer material is deposited and then etched anisotropically to create the sidewall spacers 28. The sidewall spacers 28 are at least twice as large as the first and second offset spacers 18, 22 in preferred embodiments of the invention. Exemplary widths of the sidewall spacers range from about 500 Å to about 1500 Å in embodiments of the invention. Following the formation of the sidewall spacers 28, in a conventional manner, the p-channel transistors 14 are masked again and a source/drain deep implant is performed to create source/drain regions 30 in the n-channel transistors 12. Conventional dosages and energies may be employed to create the source/drain regions 30. The resulting structure is depicted in FIG. 7. As shown in FIG. 8, a mask is formed over the n-channel transistors 12 and a source/drain deep implant process is performed in a conventional manner to create source/drain regions 32 in the p-channel transistors 14. The sidewall spacers 28 prevent the implantation of the n-type dopants in the substrate directly beneath the spacers 28 during the implantation process of FIG. 7, and of the p-type dopants during the implantation process of FIG. 8. Following the formation of the source/drain regions 32 in the p-channel transistors 14, the mask over the n-channel transistors 12 is removed. Further processing of the semiconductor device is then performed in accordance with conventional techniques, the result of which is depicted in FIG. 9. 
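As a quick numeric cross-check of the dimensions developed above, the spacer widths follow from the deposited layer thicknesses and the approximate 60% etch-retention figure given in this description. The short sketch below simply reproduces that arithmetic; the constant and function name are illustrative, not from the disclosure:

```python
# Sketch of the spacer-width arithmetic described above. The 0.6 factor is the
# approximate spacer-width-to-layer-thickness ratio given in this description;
# all values are in angstroms.

ETCH_RETENTION = 0.6  # anisotropic etch leaves a spacer ~60% of layer thickness

def spacer_width(layer_thickness_angstroms: float) -> float:
    return ETCH_RETENTION * layer_thickness_angstroms

# First spacer layer: ~100-300 A deposited -> ~60-180 A spacers (n-channel offset)
first = [spacer_width(t) for t in (100, 300)]    # [60.0, 180.0]

# Second spacer layer: ~200-400 A deposited -> ~120-240 A spacers
second = [spacer_width(t) for t in (200, 400)]   # [120.0, 240.0]

# Offset spacer pair (p-channel offset): ~180-420 A combined
pair = [f + s for f, s in zip(first, second)]    # [180.0, 420.0]
print(first, second, pair)
```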
Note that, as depicted in FIG. 9, the extension junctions have diffused laterally and vertically to form an overlap region with the gate poly. The present invention provides differential spacing for n-channel and p-channel transistors to create optimum overlap capacitance for the respective transistors. This is achieved in a cost-effective and practical manner by the use of multiple offset spacers formed on the gate electrodes. In another aspect, the surface of a source/drain is oxidized to create a strain in a thin silicon-on-insulator (SOI) film. The source/drain oxidation is performed after a narrow silicon nitride spacer is formed. Therefore, the polysilicon sidewalls are protected during the source/drain oxidation and the transistor structure is unaltered. The strain generated as a result of the source/drain oxidation changes the carrier mobility favorably. FIG. 10 depicts a precursor in which a silicon substrate 40 is covered by a buried oxide layer 42. SOI islands 44 are formed on the buried oxide layer 42. The gate electrode 46 is protected on its sidewalls by narrow spacers 48, which may be made of silicon nitride, for example. Following the formation of the spacers 48 (by deposition and etch back, for example), an oxidation process is performed to grow oxide 50 over the surface of the source/drain area and the gate electrode 46, as shown in FIG. 11. The growth of the oxide 50 on the surface of the source/drain area causes additional stress to be created in the SOI film 44, thereby improving carrier mobility. Standard CMOS processing may then follow. Although the present invention has been described and illustrated in detail, it is to be clearly understood that the same is by way of illustration and example only, and is not to be taken by way of limitation, the scope of the present invention being limited only by the terms of the appended claims. |
Embodiments of a method and system for saving system context after a power outage are disclosed herein. A power agent operates to reduce the possibility of data corruption due to partially written data during an unexpected power outage. The power agent can determine an amount of time remaining before a power store is depleted. Based on the amount of time, the power agent can store system context information. Correspondingly, the power agent can operate to save complete system context, save partial system context, or flush input/output (I/O) buffers. Once power is restored, the power agent can restore the system context based on the nature of the save. Other embodiments are described and claimed. |
CLAIMS What is claimed is: 1. A method comprising: determining if an amount of time remaining before computing device power is lost is sufficient to save a complete system context, wherein determining comprises monitoring a system power store; and determining if the amount of time remaining before system power is lost is sufficient to save a partial system context, wherein determining comprises monitoring a system power store. 2. The method of claim 1, further comprising spooling the complete system context to non-volatile memory if the amount of time is sufficient to spool the complete system context. 3. The method of claim 1, further comprising spooling the partial system context to non-volatile memory if the amount of time is insufficient to spool the complete system context. 4. The method of claim 1, further comprising flushing input/output (I/O) buffers if the amount of time is insufficient to spool the partial system context. 5. The method of claim 1, further comprising using an agent to determine the amount of time based at least in part on an amount of charge remaining in the power store. 6. The method of claim 2, further comprising completely restoring the system context when power is restored after a power loss. 7. The method of claim 3, further comprising partially restoring the system context when power is restored after a power loss. 8. A computer-readable medium having stored thereon instructions, which when executed in a system operate to save system context by: receiving an interrupt based on a power loss; determining an amount of power remaining based on an associated power store; determining if the amount of power remaining is sufficient to spool a complete system context; and determining if the amount of power is sufficient to spool a partial system context. 9. The medium of claim 8, wherein the instructions, when executed, spool the complete system context to non-volatile memory if the amount of power is sufficient to spool the complete system context. 10. The medium of claim 8, wherein the instructions, when executed, spool the partial system context to non-volatile memory if the amount of power is insufficient to spool the complete system context. 11. The medium of claim 8, wherein the instructions, when executed, flush an input/output (I/O) buffer if the amount of power is insufficient to spool the partial system context. 12. The medium of claim 8, wherein the instructions, when executed, determine the amount of power based at least in part on an amount of charge remaining in the power store. 13. The medium of claim 8, wherein the instructions, when executed, use a power manager to monitor a power supply and the power store. 14. A system configured to save system context, the system comprising: a power agent configured to: determine an amount of time based on an associated power store and a loss of power; determine if the amount of time is sufficient to spool a complete system context; and determine if the amount of time is sufficient to spool a partial system context. 15. The system of claim 14, wherein the power agent is further configured to spool the complete system context to non-volatile memory if the amount of time is sufficient to spool the complete system context. 16. The system of claim 14, wherein the power agent is further configured to spool the partial system context to non-volatile memory if the amount of time is insufficient to spool the complete system context. 17. 
The system of claim 14, wherein the power agent is further configured to flush an input/output (I/O) buffer if the amount of time is insufficient to spool the partial system context. 18. The system of claim 14, wherein the power agent is further configured to determine the amount of time based at least in part on an amount of charge remaining in the power store. 19. The system of claim 15, wherein the power agent is further configured to completely restore the system context when power is restored. 20. The system of claim 16, wherein the power agent is further configured to partially restore the system context when power is restored. |
SAVING SYSTEM CONTEXT IN THE EVENT OF POWER LOSS BACKGROUND OF THE DISCLOSURE Unplanned power outages and interruptions can be disastrous to computer users. Frequent power disruptions can lead to equipment damage, such as hard disk corruptions, which can result in significant down time for a computer user. Significant down time in turn can lead to lost revenue and opportunity. Furthermore, power outages and interruptions can result in a loss of vital data associated with a computer. For example, unexpected power outages can result in data corruption due to data being partially written before the outage. Power outages can be particularly devastating for computer users in emerging markets in which power may be intermittent at best. Brown-outs continue to be problematic in India and China. Some business environments counter power outage issues by employing an uninterruptible power supply across a network. However, this option is not cost viable for many computer users. BRIEF DESCRIPTION OF THE DRAWINGS Figure 1 is a block diagram of an operating environment including a power agent that operates to spool system context to non-volatile storage based on an amount of power remaining in a power store. Figure 2 is a flow diagram illustrating the use of a power agent to perform a system context save operation according to an embodiment. Figure 3A is a screen shot of an operating system task manager illustrating a number of processes of a computing device. Figure 3B is a screen shot of an operating system task manager illustrating performance related information associated with the computing device of Figure 3A. DETAILED DESCRIPTION A power agent can be associated with a platform, such as a server, desktop, handheld device, or other computing device. The power agent operates to reduce the possibility of data corruption due to partially written data during an unexpected power outage. The power agent can determine an amount of time (or power) remaining before a power store is depleted. Based on the amount of time, the power agent can store system context. In certain circumstances, the power agent operates to save partial system context, which enables a partial restoration of the system environment once power is restored. The power agent operates to save system context to a storage device, such as non-volatile memory, based on an amount of power remaining in the power store. Accordingly, embodiments of a method and system for saving system context upon the occurrence of a power outage are disclosed herein. In the following description, numerous specific details are introduced to provide a thorough understanding of, and enabling description for, embodiments of operations using the power agent. One skilled in the relevant art, however, will recognize that these embodiments can be practiced without one or more of the specific details, or with other components, systems, etc. In other instances, well-known structures or operations are not shown, or are not described in detail, to avoid obscuring aspects of the disclosed embodiments. Figure 1 illustrates an operating environment 100 including a power agent 104 that operates to save system context to non-volatile storage, under embodiments described herein. A platform, computing device 102 for example, includes a bus 103 in communication with the power agent 104. 
As described further below, the power agent 104 is used in various transactions, such as transactions in which the computing device 102 has lost external power and requires saving system context to non-volatile storage. The computing device 102 is one type of "platform." Generally, a platform corresponds to an entity, such as a server, mobile computing device, personal computer, etc., operating to transfer and manipulate information. The power agent 104 operation is described below in detail as represented by the flow of Figure 2. The computing device 102 typically includes random access memory (RAM) or other dynamic storage as a main memory 106 for storing information and instructions to be executed by a processor 108. It will be appreciated that the computing device 102 can include multiple processors and other devices. The computing device 102 can include read-only memory (ROM) 110 and/or other static storage for storing static information and instructions for the processor 108. A storage device 112, such as a magnetic disk, optical disk and drive, flash memory or other nonvolatile memory, or other memory device, can be coupled to the bus 103 of the computing device 102 for storing information and instructions. The power agent 104 is configured as logic embedded in the storage device 112, such as a flash memory component. The embedded logic can be hardware, software, or a combination of both. As described below, the power agent 104 operates to save system context to a non-volatile memory, such as storage device 112, based on an amount of power contained in a power store 114. As shown in Figure 1, according to an embodiment, the power agent 104 is in communication with a power store 114, power manager 116, and power source 118. According to this embodiment, the power store 114 is implemented as a capacitive device which operates to store power or energy based on the storage capacity and time connected to an active power or energy source, such as power source 118. Alternatively, the power store 114 is implemented as a quick discharge battery, such as a nickel metal hydride ("NiMH") battery, lithium ion battery, bank of capacitors, uninterruptible power supply, etc. Once the power source 118 is shut off or otherwise interrupted (such as during a power outage), the power store 114 operates to dissipate the stored energy at a rate which is dependent on its storage capacity and load. As described below, the power store 114 has sufficient capacity to allow the power agent 104 to save some or all of the system context when the external power source 118 is interrupted or fails. It will be appreciated that the power store 114 is implemented to have sufficient capacity based on the components and configuration of an associated computing device. For example, the capacity of the power store 114 is typically less for smaller systems, such as handheld devices, as compared to larger systems, such as desktop systems. The power source 118 is an A/C power supply or equivalent, such as a wall outlet which can supply power to the computing device 102 when the computing device 102 is plugged in. A number of input/output (I/O) devices 120 can be coupled with the computing device 102 via bus 103. Exemplary (I/O) devices include, but are not limited to, display devices, communication devices, audio devices, printers, scanners, and various manipulation devices for inputting and outputting information to a platform. 
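Returning to the sizing of a capacitive power store 114 discussed above, a rough hold-up-time estimate gives a sense of how capacity, voltage, and load interact. The sketch below uses the standard capacitor-energy relation with purely illustrative numbers; none of the values come from this disclosure:

```python
# Rough hold-up-time estimate (not from this disclosure) for a capacitive
# power store such as power store 114. Assumes an ideal capacitor discharged
# from V0 down to a minimum usable rail Vmin into a constant load P.

def holdup_seconds(capacitance_f: float, v0: float, vmin: float, load_w: float) -> float:
    usable_energy_j = 0.5 * capacitance_f * (v0**2 - vmin**2)
    return usable_energy_j / load_w

# Illustrative numbers only: a 10 F supercapacitor bank at 12 V, usable down
# to 9 V, feeding a 50 W load gives a few seconds of spool time.
print(round(holdup_seconds(10.0, 12.0, 9.0, 50.0), 2))  # ~6.3 s
```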
The computing device 102 can be in communication with a network, system, or other computing devices. As described above, the power agent 104 is also in communication with a power manager 116. The power manager 116 is a power or voltage sensor operating to monitor the power source 118. The power manager 116 also operates to monitor the power store 114 to determine an amount of power available at a given time. The power manager 116 can also be described as a power management microcontroller which operates to monitor the charge level of the power source 118 and power store 114, respectively. As described below, if the power manager 116 detects a drop in the charge level of the power source 118, the power manager 116 is configured to interrupt the processor 108 by providing an interrupt signal across the bus 103 to the processor 108. According to an embodiment, the power manager 116 can interrupt the main processor at any time via a system management interrupt (SMI). The SMI can be used when the power manager 116 detects a drop in the charge level of the power source 118. The SMI is a high priority non-maskable interrupt for the processor 108. According to an embodiment, the power manager 116 operates to filter the signal from the power source 118, which can smooth out intermittent power surges. The power manager 116 filtering will allow for a more consistent user experience, while also tending to prevent damage to the system. Figure 2 is a flow diagram illustrating a system context save operation according to an embodiment. At 200, a system, such as computing device 102, powers on when the computing device 102 is switched on. At this point, the power source 118 is providing power to the computing device 102. The power source 118 is also supplying power to the power store 114. Accordingly, the power store 114 begins to charge or store power. At 202, the firmware initializes the computing device 102 and boots a target application, such as an operating system (OS) target. Firmware typically refers to software stored in ROM or programmable ROM (PROM) and is responsible for the behavior of the computing device 102 when it is first switched on. During initialization, firmware logic, including the agent 104, is loaded from the storage device 112 to handle interactions, such as various errors or conditions detected by hardware of the computing device 102, as described below. As described above, the power manager 116 monitors the charge level of the power source 118. At 204, the power manager 116 detects whether a drop in charge level of the power source 118 has occurred. If the power manager 116 does not detect a drop in charge level of the power source 118, at 206, the computing device 102 continues its normal operation. If the power manager 116 detects a drop in charge level of the power source 118, at 208, the power manager 116 issues an interrupt, such as an SMI, which alerts the processor 108 of the power loss. At this point, the power store 114 begins to discharge its stored charge. As described above, the power agent 104 is in communication with the power manager 116 and the power store 114. According to this embodiment, the computing device 102 gives control to the power agent 104 based on the interrupt. The power agent 104 also receives information from the power manager 116 associated with the remaining charge level in the power store 114. 
At 210, based on the configuration of the computing device 102, the power agent 104 determines if there is sufficient charge (power) remaining in the power store 114 to perform a complete system context save, such as an Advanced Configuration and Power Interface (ACPI) S4 state save. Based on the charge remaining in the power store 114, the power agent 104 determines an amount of time (or power) remaining before the charge is totally depleted. The time remaining is dependent on the capacity of the power store 114 and the configuration of the computing device 102. The time remaining can be correlated to how long it will take to spool the system context to memory. Alternatively, the power manager 116 can be configured to calculate the amount of time (or power) remaining, and provide this information to the power agent 104. If there is sufficient charge remaining in the power store 114 to perform a complete system context save, at 212, the power agent 104 spools the complete system context to non-volatile storage. The complete context save allows a complete restoration of the system environment when power is restored to the computing device 102. If there is insufficient charge remaining in the power store 114 to perform a complete system context save, at 214, the power agent 104 determines if there is sufficient charge remaining in the power store 114 to perform a partial system context save. If there is sufficient charge remaining in the power store 114 to perform a partial system context save, at 216, the power agent 104 spools the partial system context to non-volatile storage. According to this embodiment, spooling partial system context corresponds with saving the active OS state (non-paged) and context for one or more applications currently in use by the user of the computing device 102. Thus, during the partial context spool, the power agent 104 saves the active non-paged OS state and the context of one or more applications currently in use based on the amount of charge remaining in the power store 114 (which corresponds to a spool time) and/or a prioritized application spooling scheme. For this embodiment, applications being used at the time of power drop-off are prioritized. The partial context save allows a partial restoration of the system environment when power is restored to the computing device 102. If there is insufficient charge remaining in the power store 114 to perform a partial system context save, at 218, the power agent 104 flushes the input/output (I/O) buffers, ensuring that there is no partially written data remaining. According to this embodiment, at 220, if power is restored to the computing device 102 before completion of a spool to storage (at 212 or 216), the power agent 104 discontinues the respective spool and the flow returns to 206. Otherwise, the computing device 102 safely powers off at 222. Figure 3A is a screen shot of an OS task manager 300 which illustrates a number of processes running on a computing device. As shown in Figure 3A, the user is running a number of processes, and is currently using an e-mail application ("OUTLOOK.EXE") 302. The e-mail application 302 is using 22.476 Mbytes of memory. Figure 3B is a screen shot of the OS task manager 300 illustrating performance related information associated with the computing device running the processes in Figure 3A. As shown in Figure 3B, the active non-paged OS state 304 is using 32.152 Mbytes of memory. 
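A minimal sketch of the decision flow of Figure 2 (210 through 218), using the memory figures of Figures 3A and 3B, may help make the tradeoff concrete. The 16 Mbytes/sec spool rate is taken from the example below; the full-context size (~2.4 Gbytes) is simply what a 150-second spool at that rate implies, and the helper name is illustrative:

```python
# Illustrative sketch of the Figure 2 decision flow (210-218). The spool rate
# and context sizes mirror the Figure 3A/3B example; helper names are assumed.

SPOOL_RATE_MB_S = 16.0  # assumed sustained write rate to non-volatile storage

def handle_power_loss(seconds_remaining: float,
                      full_context_mb: float,
                      partial_context_mb: float) -> str:
    full_time = full_context_mb / SPOOL_RATE_MB_S
    partial_time = partial_context_mb / SPOOL_RATE_MB_S
    if seconds_remaining >= full_time:
        return "spool complete system context"   # 210 -> 212
    if seconds_remaining >= partial_time:
        return "spool partial system context"    # 214 -> 216
    return "flush I/O buffers"                   # 218

# ~2.4 GB full context (150 s at 16 MB/s) vs ~54.6 MB partial (OS state + app).
print(handle_power_loss(10.0, 2400.0, 32.1 + 22.5))  # -> partial context save
```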
According to the example of Figures 3A and 3B, it would take about 150 seconds to spool the complete system context if power was lost or interrupted. Conversely, it would take about 3.4 seconds to spool the partial system context, which includes the non-paged OS state and the e-mail application context (i.e., (32.1 Mbytes + 22.5 Mbytes)/16 Mbytes/sec ≈ 3.4 seconds). Consequently, for this example, the power store 114 only has to provide power for 3.4 seconds to enable the power agent 104 to perform a partial system context save, as compared to 150 seconds for a complete system context save. Thus, to perform a partial system context save, the power store 114 should have sufficient capacity to provide enough time for the power agent 104 to spool the non-paged OS state and the e-mail application context. Accordingly, the time to perform a partial system context save is nearly two orders of magnitude less than for a complete system context save. This time difference can be critical if and when the power source 118 is interrupted or lost. Moreover, the power store 114 does not need the size, and related cost, required to allow time to spool the complete system context; it can instead be implemented in a computing system with a capacity consistent with partial context saving capability. Finally, if the power store 114 does not have sufficient charge, the power agent 104 can at least flush the I/O buffers, as described above. It will be appreciated that the power store 114 can include more or less capacity, and the amount of time that capacity will run the computing device depends upon the configuration of, and user preferences for, the computing device 102. In an alternative embodiment, the power store 114 and power manager 116 can be consolidated as a separate device. The consolidated power store 114 and power manager 116 can then be coupled between the computing device 102 and power source 118. Alternatively, the power store 114 and power manager 116 can be separately coupled between the power source 118 and computing device 102. It will be appreciated that different configurations can be used according to a desired implementation. It will also be appreciated that the power agent 104 can be included separately from the OS (OS agnostic), as described above. Alternatively, the power agent 104 is embedded with the OS and operates to trigger an OS specific driver if power is lost or interrupted. In any case, the embodiments described herein tend to reduce the possibility of data corruption due to partially written data during an unexpected power outage. Aspects of the methods and systems described herein may be implemented as functionality programmed into any of a variety of circuitry, including programmable logic devices ("PLDs"), such as field programmable gate arrays ("FPGAs"), programmable array logic ("PAL") devices, electrically programmable logic and memory devices and standard cell-based devices, as well as application specific integrated circuits. Embodiments may also be implemented as microcontrollers with memory (such as electrically erasable programmable read-only memory ("EEPROM")), embedded microprocessors, firmware, software, etc. Furthermore, aspects may be embodied in microprocessors having software-based circuit emulation, discrete logic (sequential and combinatorial), custom devices, fuzzy (neural) logic, quantum devices, and hybrids of any of the above device types. 
Of course, the underlying device technologies may be provided in a variety of component types, e.g., metal-oxide semiconductor field-effect transistor ("MOSFET") technologies like complementary metal-oxide semiconductor ("CMOS"), bipolar technologies like emitter-coupled logic ("ECL"), polymer technologies (e.g., silicon-conjugated polymer and metal-conjugated polymer-metal structures), mixed analog and digital, etc. The various functions disclosed herein may be described using any number of combinations of hardware, firmware, and/or as data and/or instructions embodied in various machine-readable or computer-readable media, in terms of their behavioral, register transfer, logic component, and/or other characteristics. Computer-readable media in which such formatted data and/or instructions may be embodied include, but are not limited to, non-volatile storage media in various forms (e.g., optical, magnetic or semiconductor storage media) and carrier waves that may be used to transfer such formatted data and/or instructions through wireless, optical, or wired signaling media or any combination thereof. Examples of transfers of such formatted data and/or instructions by carrier waves include, but are not limited to, transfers (uploads, downloads, e-mail, etc.) over the Internet and/or other computer networks via one or more data transfer protocols (e.g., hypertext transfer protocol ("HTTP"), file transfer protocol ("FTP"), simple mail transfer protocol ("SMTP"), etc.). Unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise," "comprising," and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in a sense of "including, but not limited to." Words using the singular or plural number also include the plural or singular number respectively. Additionally, the words "herein," "hereunder," "above," "below," and words of similar import refer to this application as a whole and not to any particular portions of this application. When the word "or" is used in reference to a list of two or more items, that word covers all of the following interpretations of the word: any of the items in the list; all of the items in the list; and any combination of the items in the list. The above description of illustrated embodiments is not intended to be exhaustive or to limit the disclosure. While specific embodiments and examples are described herein for illustrative purposes, various equivalent modifications are possible, as those skilled in the relevant art will recognize. The teachings provided herein can be applied to other systems and methods, and not only to the systems and methods described above. The elements and acts of the various embodiments described above can be combined to provide further embodiments. These and other changes can be made to methods and systems in light of the above detailed description. In general, in the following claims, the terms used should not be construed to be limited to the specific embodiments disclosed in the specification and the claims, but should be construed to include all systems and methods that operate under the claims. Accordingly, the method and systems are not limited by the disclosure, but instead the scope is to be determined entirely by the claims. While certain aspects are presented below in certain claim forms, the inventors contemplate the various aspects in any number of claim forms. 
For example, while only one aspect is recited as embodied in a machine-readable medium, other aspects may likewise be embodied in a machine-readable medium. Accordingly, the inventors reserve the right to add additional claims after filing the application to pursue such additional claim forms for other aspects as well. |
Embodiments of the present disclosure are directed toward techniques and configurations for immersion cooling. In embodiments, an apparatus configured for immersion cooling may include a number of trays and a fluid circulation system. Each of the trays may be configured to hold one or more circuit boards and may have a first opening to allow dielectric fluid to be injected into the tray and a second opening to allow for escape of the dielectric fluid. The fluid circulation system may include a catchment area to collect the dielectric fluid that escapes from the plurality of trays and a distribution manifold coupled with the catchment area to deliver the dielectric fluid collected in the catchment area back to the plurality of trays. Other embodiments may be described and/or claimed. |
1. An apparatus for immersion cooling comprising: a plurality of trays, each holding one or more circuit boards and having a first opening that allows a dielectric fluid to be injected into the tray and a second opening that allows the dielectric fluid to escape from the tray; and a fluid circulation system, including: a collection zone disposed under the plurality of trays to collect the dielectric fluid that escapes from the plurality of trays; and a distribution manifold having a plurality of injection ports above or adjacent to respective trays of the plurality of trays, the distribution manifold being coupled to the collection zone to deliver the dielectric fluid collected in the collection zone back to the plurality of trays through the injection ports. 2. The apparatus of claim 1, wherein said dielectric fluid is to absorb thermal energy generated by said one or more circuit boards, and the escape of said dielectric fluid into the interior of the apparatus is to carry the thermal energy away from said one or more circuit boards. 3. The apparatus of claim 2, wherein said second opening is designed to allow said dielectric fluid to escape by one or more of: leakage of said dielectric fluid; overflow of said dielectric fluid; or evaporation of said dielectric fluid. 4. The apparatus of claim 2, wherein said second opening is an outlet port disposed on a side or bottom of said tray to allow said dielectric fluid to leak from said tray. 5. The apparatus of claim 2, wherein said dielectric fluid will evaporate into a dielectric gas in response to absorption of thermal energy emitted by said one or more circuit boards, and wherein said second opening comprises a top opening that allows said dielectric gas to escape from said tray. 6. The apparatus of claim 5, further comprising a condenser to perform a condensation process to remove thermal energy from said dielectric gas, causing said dielectric gas to condense back to the dielectric fluid, and to route the thermal energy removed from said dielectric gas to a cooling loop. 7. The apparatus of claim 6, further comprising a sub-ambient cooler to further remove thermal energy from said dielectric gas. 8. The apparatus of claim 6, wherein said collection zone is a first collection zone, and wherein said fluid circulation system further comprises a second collection zone, coupled to said distribution manifold and disposed below said condenser, to collect the dielectric fluid produced by said condensation process. 9. The apparatus of claim 6, further comprising a pressure sensor to output a measure of ambient air pressure, and wherein said condenser is to perform said condensation process in response to a determination that said measure of the air pressure meets or exceeds a predefined threshold. 10. The apparatus of claim 6, further comprising one or more additional condensers disposed at different locations around the trays or the fluid circulation system. 11. The apparatus of claim 5, further comprising: one or more evaporators to remove thermal energy from said dielectric gas to cause said dielectric gas to condense back to the dielectric fluid; and one or more condensers coupled to respective evaporators to remove thermal energy from the evaporators and to route the thermal energy removed from the evaporators to a cooling loop, wherein the one or more evaporators and the one or more condensers coupled to the respective evaporators form one or more heat pipes. 12. The apparatus of any of claims 1-11, wherein the fluid circulation system further comprises a 
pump coupled to the collection zone and the distribution manifold, wherein the pump is to deliver the dielectric fluid received in the collection zone to the distribution manifold. 13. The apparatus of claim 12, wherein said pump is to deliver said dielectric fluid collected in said collection zone at a rate equal to or greater than a rate at which said dielectric fluid escapes from said plurality of trays. 14. The apparatus of claim 12, wherein said fluid circulation system further comprises a heat exchanger coupled to said pump, and wherein said heat exchanger is to remove thermal energy from said dielectric fluid. 15. The apparatus of any of claims 1-11, wherein the first opening is an inlet port that receives delivery of the dielectric fluid from at least one of the injection ports of the distribution manifold. 16. The apparatus of any of claims 1-11, wherein the apparatus is a computer server rack assembly. 17. The apparatus of any of claims 1-11, wherein the apparatus is a computer server, further comprising the one or more circuit boards. 18. A method for immersion cooling comprising: delivering, by a distribution manifold of an immersion cooling arrangement of a server system, a dielectric fluid to a plurality of trays, wherein each tray of the plurality of trays holds one or more circuit boards immersed in the dielectric fluid, and each tray has a first opening that allows the dielectric fluid to be injected into the tray by the distribution manifold and a second opening that allows the dielectric fluid to escape from the tray to the interior of the server system; collecting, by a collection zone of the immersion cooling arrangement, the dielectric fluid that escapes from the plurality of trays; and recirculating, by a pump of the immersion cooling arrangement, the dielectric fluid collected by the collection zone back to the distribution manifold for distribution to the plurality of trays. 19. The method of claim 18, further comprising absorbing, by the dielectric fluid, thermal energy emitted by the one or more circuit boards to cause the dielectric fluid to evaporate into a dielectric gas, wherein the second opening allows the dielectric gas to escape from the tray. 20. The method of claim 19, further comprising: condensing, by a condenser of the immersion cooling arrangement, the dielectric gas into the dielectric fluid by removal of thermal energy from the dielectric gas; and routing, by a conduit, the thermal energy removed from the dielectric gas to a cooling loop. 21. The method of any of claims 18-20, wherein the recirculating further comprises cooling, by a heat exchanger of the immersion cooling arrangement, the dielectric fluid to reduce thermal energy contained in the dielectric fluid to allow the dielectric fluid to absorb more thermal energy. 22. A server system comprising: a plurality of circuit boards; and an immersion cooling arrangement with: a plurality of trays, each tray of the plurality of trays holding one or more circuit boards of the plurality of circuit boards immersed in a dielectric fluid and having a first opening for injecting the dielectric fluid into the tray and a second opening for escape of the dielectric fluid from the tray; a collection zone disposed under the plurality of trays to collect the dielectric fluid that escapes from the plurality of trays; and a distribution manifold having a plurality of injection ports above or adjacent to respective trays of the plurality of trays, the distribution manifold being coupled to the collection zone to deliver the dielectric fluid 
collected in the collection zone back to the plurality of trays through the injection ports. 23. The server system of claim 22, wherein the dielectric fluid will evaporate into a dielectric gas in response to absorption of thermal energy emitted by the one or more circuit boards, wherein the second opening comprises a top opening that allows the dielectric gas to escape from the tray, and further comprising a condenser to perform a condensation process to remove thermal energy from the dielectric gas to cause the dielectric gas to condense back to the dielectric fluid. 24. The server system of claim 23, further comprising a pressure sensor to output a measure of ambient air pressure within the server system, and wherein the condenser is to perform the condensation process in response to a determination that the measure of the air pressure meets or exceeds a predefined threshold. 25. The server system of any of claims 22-24, further comprising an additional plurality of circuit boards and an air-cooled or closed-loop liquid cooling arrangement to cool the additional plurality of circuit boards. |
RECIRCULATING DIELECTRIC FLUID COOLING TECHNICAL FIELD Embodiments of the present disclosure generally relate to the field of thermal cooling of computing devices and, more particularly, to immersion cooling of computing devices. BACKGROUND The background description provided herein is intended to present the context of the disclosure generally. The material described in this section is not prior art to the claims of the present application and is not admitted to be prior art. Traditional air-cooled data centers can suffer from limited energy efficiency, very low component densities (e.g., high data center footprint), lack of waste heat recovery capabilities, and high operating costs. While air cooling is still the standard for data center cooling, liquid cooling has been steadily increasing in high performance computing (HPC) environments because liquid cooling provides higher component density, waste heat recovery, and lower operating cost. One form of liquid cooling, immersion cooling, involves immersing computing components in a dielectric liquid. However, at the current state of the art, immersion cooling systems can be extremely costly due to design complexity and non-standard rack designs that can require costly physical infrastructure (e.g., electronic boards are often immersed in a large tank of dielectric fluid). Therefore, immersion cooling has not been widely adopted. BRIEF DESCRIPTION OF THE DRAWINGS The embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like labels designate like structural elements. In the figures of the accompanying drawings, the embodiments are shown by way of illustration and not limitation. FIG. 1 illustrates an exploded perspective view of a server system having an immersion cooling arrangement of the present disclosure, in accordance with various embodiments of the present disclosure. FIG. 2 shows a more detailed view of a portion of the immersion cooling arrangement of FIG. 1 in accordance with various embodiments of the present disclosure. FIG. 3 illustrates a more detailed view of a portion of the immersion cooling arrangement of FIG. 1 in accordance with various embodiments of the present disclosure. FIG. 4 shows an illustration of an immersion cooling arrangement that also has a condenser, in accordance with various embodiments of the present disclosure. FIG. 5 shows an illustration of an immersion cooling arrangement with multiple condensers, in accordance with various embodiments of the present disclosure. FIG. 6 shows an illustration of an immersion cooling arrangement with an evaporator and a condenser, in accordance with various embodiments of the present disclosure. FIG. 7 illustrates a perspective view of a server system having an immersion cooling arrangement of the present disclosure deployed in a housing, in accordance with various embodiments of the present disclosure. DETAILED DESCRIPTION Embodiments of the present disclosure include techniques and configurations for immersion cooling of a computing server. In the following description, various aspects of the illustrated implementations will be described using terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. However, those skilled in the art will appreciate that the embodiments of the present disclosure may be practiced with only some of the described aspects. For purposes of explanation, specific numbers, materials, and configurations are set forth to provide a thorough understanding of the illustrated embodiments. 
However, it will be apparent to those skilled in the art that embodiments of the present disclosure may be practiced without the specific details. In other instances, well-known features are omitted or simplified in order not to obscure the illustrative implementations.

In the following detailed description, reference is made to the accompanying drawings that form a part hereof. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense.

For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C).

The description may use perspective-based descriptions such as top/bottom, in/out, over/under, and the like. Such descriptions are merely used to facilitate the discussion and are not intended to restrict the application of the embodiments described herein to any particular orientation.

The description may use the phrases "in an embodiment" or "in embodiments," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous.

The term "coupled," along with its derivatives, may be used herein. "Coupled" may mean one or more of the following. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements indirectly contact each other, but still cooperate or interact with each other, and may mean that one or more other elements are coupled or connected between the elements that are said to be coupled with each other. The term "directly coupled" may mean that two or more elements are in direct contact.

FIG. 1 shows an exploded perspective view of a computer server rack assembly 100, hereinafter "server system 100," having an immersion cooling arrangement of the present disclosure, in accordance with various embodiments. In embodiments, the immersion cooling arrangement may include a plurality of computing device trays, such as vertical computing device trays 102 and horizontal computing device trays 104, collectively referred to hereinafter as computing device trays for simplicity. The computing device trays can be secured to a rack or housing of the server system 100 using any conventional mechanism (e.g., rails, etc.) for securing electronic components in a conventional rack system. As a result, a standard rack design can be utilized in conjunction with the immersion cooling arrangement of the present disclosure.

Individual computing device trays can be configured to hold a dielectric fluid and one or more circuit boards, such as circuit board 122, immersed in the dielectric fluid. By immersing the one or more circuit boards in individual computing device trays, the amount of dielectric fluid needed to operate the server system 100 with the immersion cooling arrangement of the present disclosure can be reduced. In addition, because the amount of dielectric fluid can be reduced, the weight that the physical infrastructure needs to support is also reduced. Such a reduction in weight may allow the server system 100 with the immersion cooling arrangement of the present disclosure to be implemented in existing physical infrastructure without the need to redesign the physical infrastructure to support the additional weight from the immersion cooling arrangement.
For example, the server system 100 having the immersion cooling arrangement of the present disclosure may be capable of being implemented in physical infrastructure designed to support 250 pounds per square foot, as is currently typical. Additionally, by placing the one or more circuit boards in individual computing device trays, the one or more circuit boards may be more easily hot swapped, which may be a requirement in, for example, data centers, server farms, and the like.

The dielectric fluid can be configured to absorb thermal energy generated by the circuit board 122. Such dielectric fluids may include, but are not limited to, mineral oil, castor oil, silicone oil, or any Novec® engineered fluid from 3M®. In some embodiments, the dielectric fluid can be configured to evaporate into a dielectric gas in response to absorbing the thermal energy generated by the one or more circuit boards. Such embodiments may be referred to herein as dual phase immersion cooling arrangements, or simply two phase embodiments, while embodiments that maintain the dielectric fluid in the liquid phase may be referred to as single phase immersion cooling arrangements, or simply single phase embodiments. Dual phase immersion cooling arrangements are discussed in more detail below with respect to FIGS. 4-6.

In embodiments, the computing device trays can have an opening (e.g., outlet port 118) formed in the computing device tray and configured to allow a controlled escape of the dielectric fluid from the computing device tray, for example, into the server rack assembly. This escape of fluid allows for transfer, away from the circuit boards, of the thermal energy absorbed by the dielectric fluid. In various embodiments, the openings may be configured to allow the escape of the dielectric fluid through one or more of: leakage of the dielectric fluid (e.g., via outlet port 118), overflow of the dielectric fluid (e.g., over a top opening of the computing device tray), or, in a dual phase immersion cooling system, evaporation of the dielectric fluid (e.g., via a top opening or perforations of the computing device tray).

To maintain a sufficient level of dielectric fluid in the computing device trays, the computing device trays can have an additional opening (e.g., inlet port 120) to allow the dielectric fluid to be delivered into the computing device tray. As used herein, a sufficient level of dielectric fluid is an amount of fluid sufficient to cover the pertinent heat generating portions (e.g., one or more processors) of the one or more circuit boards. In some embodiments, the opening for the escape of the dielectric fluid from an individual computing device tray and the opening for the delivery of the dielectric fluid into the individual computing device tray may be the same opening. For example, in a dual phase immersion cooling system, a top opening may allow for the release of the dielectric gas generated from evaporation of the dielectric liquid, while also allowing the addition of dielectric liquid through the same top opening. Such an example would also apply where the dielectric liquid is allowed to escape the computing device tray by overflow of the dielectric liquid from the computing device tray.

In embodiments, the computing device trays can have top openings, such as those depicted in FIG. 1, to allow for repair and/or hot swapping of the one or more circuit boards (e.g., circuit board 122) contained therein.
Additionally, the computing device trays can be configured to allow a quick release of the dielectric fluid contained therein back into the immersion cooling arrangement of server system 100. For example, in embodiments where the computing device tray has a top opening, the top opening may allow for the quick release of the dielectric fluid contained therein through inversion of the computing device tray, allowing the dielectric fluid to escape through the top opening. This quick release of fluid may allow removal of one or more computing device trays without, or with reduced, leakage or loss of the dielectric fluid outside of the immersion cooling arrangement. Additionally, in some embodiments, the top opening may allow for routing of input/output (I/O) and power wiring to the one or more circuit boards contained therein. In other embodiments, routing features (e.g., DIN connectors, bus connectors, or the like) may be formed in a side of the computing device tray to route I/O signals and power into and out of the computing device tray. In such embodiments, the connections can be sealed to prevent loss of the dielectric fluid through the routing features, or the seal can be omitted and any loss of the dielectric fluid through the routing features can be accounted for by the fluid circulation system described below.

Although depicted as uniform in size, it will be appreciated that the computing device trays may vary in size depending on the application. For example, a larger computing device tray can be utilized to hold a larger group of circuit boards, while a smaller computing device tray can be utilized to hold a single circuit board. In such embodiments, a larger tray can be a multiple of a smaller tray in size to maintain the ability to swap a larger tray in for a group of smaller trays. For example, a larger tray can be configured to be the width of three vertical computing device trays 102. As such, the larger tray can be swapped into the server system 100 merely by removing three adjacent vertical computing device trays and replacing the three adjacent vertical computing device trays with the larger tray.

In embodiments, the immersion cooling arrangement of server system 100 can include a fluid circulation system. The fluid circulation system can include a collection zone configured to collect the dielectric fluid that escapes, or is released, from the computing device trays. As depicted, the collection zone can include a collection tray 110 and a collection reservoir 112. The collection zone can be disposed below the computing device trays to allow for collection of the dielectric fluid that escapes from the plurality of trays and to prevent loss of the dielectric fluid. Embodiments of the collection zone are discussed in more detail below with reference to FIG. 2.

The fluid circulation system can also include a distribution manifold 114. The distribution manifold 114 can have injection ports (e.g., injection port 116) disposed above or adjacent to the computing device trays. The distribution manifold 114 can be coupled with the collection zone and can be configured to deliver the dielectric fluid that escapes from the computing device trays back to the computing device trays via the injection ports. The injection ports can be configured to inject the dielectric fluid into the computing device trays to maintain a sufficient level of dielectric fluid in the individual computing device trays.

In embodiments, the fluid circulation system can include a pump 106.
The pump 106 can couple the distribution manifold 114 and the collection reservoir 112 and can be configured to deliver the dielectric fluid collected in the collection reservoir 112 to the distribution manifold 114. In embodiments, the pump 106 can be configured to deliver the dielectric fluid collected in the collection zone at a rate equal to or greater than the rate at which the dielectric fluid escapes from the plurality of trays, to ensure that the computing device trays maintain a sufficient level of dielectric fluid. This can be accomplished by calculating the rate at which the dielectric fluid is able to escape from the computing device trays and selecting a pump 106 that meets or exceeds this rate. In some embodiments, the pump 106 can be configured to provide a tunable flow rate. In such embodiments, the flow rate of the pump may be adjusted based on changes in the configuration of the immersion cooling arrangement. In other embodiments, the pump 106 can be configured with a controller 130 that can monitor the rate at which the dielectric fluid enters the collection zone. In such embodiments, the collection zone may be configured with a sensor 132 configured to monitor the level of the dielectric fluid in the collection zone or the rate at which the dielectric fluid enters the collection zone. The sensor 132 can be communicatively coupled with the controller 130 to allow the controller 130 to monitor the rate at which the dielectric fluid enters the collection zone.

The pump 106 can be external to the collection zone or can be located within the collection zone. For example, the pump 106 can be configured as a sump pump and can be placed within the collection reservoir 112. In some embodiments, the pump 106 can be external to the immersion cooling arrangement. Such embodiments may be beneficial to prevent thermal energy generated by operation of the pump 106 from being introduced into the immersion cooling arrangement.

In some embodiments, the pump 106 can couple the distribution manifold 114 and the collection reservoir 112 by way of the upper reservoir 108. In such embodiments, the pump 106 can be configured to deliver the dielectric fluid from the collection reservoir 112 to the upper reservoir 108. In some embodiments, this can be accomplished via a conduit having one end coupled with the pump 106 and an opposite end disposed in, or near, the bottom of the upper reservoir 108. In other embodiments, the opposite end can be disposed in an area above the upper reservoir 108. The upper reservoir 108, in turn, can provide the dielectric fluid to a central conduit of the distribution manifold 114 via opening 126, such as in the central conduit configuration discussed in more detail below with respect to FIG. 2. The dielectric fluid can then be delivered to the injection ports of the distribution manifold 114 (e.g., injection port 116) via a gravity feed configuration. In other embodiments, the pump 106 can be coupled directly with the central conduit of the distribution manifold 114, for example at a top or bottom of the central conduit, as discussed further below with respect to the central conduit configuration.
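The monitoring-based make-up control described above can be illustrated with a short sketch. The following C fragment is only a minimal illustration: the read_level_sensor_mm() and set_pump_rate_lpm() interfaces for sensor 132 and the tunable pump 106, the setpoint, and the proportional control law are all hypothetical; the disclosure does not prescribe any particular controller implementation.

/* Minimal sketch of make-up flow control by controller 130.
 * All names and constants are hypothetical illustrations. */
#include <stdint.h>

#define LEVEL_SETPOINT_MM 120u  /* level covering the heat generating parts */
#define RATE_MIN_LPM        1u
#define RATE_MAX_LPM       20u

extern uint32_t read_level_sensor_mm(void);   /* sensor 132 (assumed API) */
extern void     set_pump_rate_lpm(uint32_t);  /* tunable pump 106 (assumed API) */

void pump_control_step(void)
{
    uint32_t level = read_level_sensor_mm();
    uint32_t rate;

    if (level >= LEVEL_SETPOINT_MM) {
        rate = RATE_MIN_LPM;               /* trays full: idle at minimum flow */
    } else {
        /* Simple proportional make-up: pump harder the lower the level,
         * so delivery meets or exceeds the escape rate from the trays. */
        uint32_t deficit = LEVEL_SETPOINT_MM - level;
        rate = RATE_MIN_LPM + deficit;     /* 1 LPM per mm of deficit (arbitrary gain) */
        if (rate > RATE_MAX_LPM)
            rate = RATE_MAX_LPM;
    }
    set_pump_rate_lpm(rate);
}

Any control law that keeps the delivery rate at or above the escape rate would satisfy the requirement described above.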
In some embodiments, the fluid circulation system can include a heat exchanger 128. The heat exchanger 128 can be coupled between the pump 106 and the distribution manifold 114. The heat exchanger 128 can be configured to extract thermal energy from the dielectric liquid to condition the dielectric liquid to absorb more thermal energy from the one or more circuit boards, thereby increasing the cooling effect of the dielectric liquid. In such embodiments, the heat exchanger 128 can be located outside of the server rack system or within the server rack system. In embodiments where the heat exchanger is located within the server rack system, the heat exchanger may need to discharge the thermal energy extracted from the dielectric liquid to an area outside of the server rack system. For example, the extracted thermal energy can be transferred to a cooling loop, an air cooling element, or any other suitable cooling mechanism. In some embodiments, the thermal energy extracted by the heat exchanger, or by any of the other heat extraction mechanisms discussed herein, may be recaptured by a waste heat recovery process that may increase the efficiency of the immersion cooling arrangement of the server system 100.

Additionally, while depicted as a full immersion cooling arrangement, it will be appreciated that the above-described embodiments, and the other embodiments described herein, can be implemented as a hybrid system. In such a hybrid system, a portion of server system 100 may be immersion cooled as described herein, while another portion may be cooled by conventional air cooling, a closed loop liquid cooling system, or the like. For example, in such an embodiment, the vertical computing device trays 102 can be implemented with an immersion cooling arrangement as described above, while the horizontal computing device trays 104 can be implemented with a conventional closed loop liquid cooling arrangement.

FIG. 2 shows a more detailed view of a portion 200 of the immersion cooling arrangement of the server system 100 of FIG. 1, in accordance with various embodiments. Portion 200 depicts a vertical computing device tray 102 and horizontal computing device trays 104. In this depiction, the vertical computing device tray 102 has been rendered translucent to provide a more detailed view of an embodiment of the distribution manifold 114 and the collection zone.

As depicted, in some embodiments, the distribution manifold 114 can have a central conduit 202 positioned to extend vertically between the vertical computing device trays 102 and the horizontal computing device trays 104. The central conduit 202 can be coupled with one or more branch conduits (e.g., branch conduit 204), which in turn can be coupled with one or more injection ports (e.g., injection port 116). The central conduit 202 can be configured to provide the dielectric fluid to the one or more branch conduits, and the branch conduits can be configured to provide the dielectric fluid to the one or more injection ports. The injection ports can be configured to inject the dielectric fluid into the computing device trays to maintain a sufficient level of dielectric fluid in the computing device trays.

In some embodiments, the central conduit 202 can couple the distribution manifold 114 with the collection zone to deliver the dielectric fluid that escapes from the computing device trays back to the computing device trays via the injection ports. This can be accomplished, for example, by including an upper reservoir (e.g., upper reservoir 108 of FIG. 1). In such an arrangement, the dielectric fluid can be pumped from the collection reservoir 112 to the upper reservoir, which can be configured to provide the dielectric fluid to the computing device trays via a gravity flow configuration.
Such a gravity flow configuration may allow the dielectric fluid to fall into the central conduit 202 (e.g., via opening 126 of FIG. 1). The central conduit 202 can be configured to direct the incoming dielectric fluid to the one or more branch conduits for delivery to the one or more injection ports.

In other embodiments, a pump (e.g., pump 106 of FIG. 1) can directly couple the central conduit 202 with the collection reservoir 112. As depicted in FIG. 1, the pump can be located in the collection reservoir 112 and can be coupled with the bottom of the central conduit 202 or with the top of the central conduit 202. In such embodiments, the central conduit 202 can extend through the collection tray 110 into the collection reservoir 112. In other embodiments, the pump can be located external to the collection zone. This can be beneficial to prevent thermal energy generated by the operation of the pump from adding to the thermal energy within the immersion cooling arrangement. In such embodiments, the pump can be coupled with a portion of the central conduit 202 above the computing device trays, and the central conduit can be configured to deliver the dielectric fluid provided by the pump to the branch conduits in the same manner described above with reference to the gravity flow configuration.

As depicted, the collection tray 110 can form a perimeter around the computing device trays. The collection tray 110 can be configured to capture the dielectric fluid that escapes from the computing device trays and can direct the fluid through opening 206 into the collection reservoir 112. While opening 206 is depicted as a single opening, it will be appreciated that the collection tray 110 can have any number of openings to direct the fluid into the collection reservoir 112. In some embodiments, the collection tray 110 and the collection reservoir 112 may be formed together such that they form a single component. In other embodiments, the collection tray 110 and the collection reservoir 112 may be two separate and distinct components that are formed separately.

FIG. 3 shows a more detailed view of a portion 300 of an embodiment of the immersion cooling arrangement of the server system 100 of FIG. 1. Portion 300 depicts a different perspective of the top of the immersion cooling arrangement in an embodiment without an upper reservoir. Portion 300 includes vertical computing device trays 102 and horizontal computing device trays 104. From the depicted angle, an outlet port 302 of one of the vertical computing device trays can be seen. Configured like the outlet port 118 discussed above with respect to FIG. 1, the outlet port can be configured to allow the escape of the dielectric fluid. Additionally, this depiction shows an inlet port 306 of the horizontal computing device tray 104. The inlet port 306 can be configured to align with injection port 304 and can accept the dielectric fluid injected into the horizontal computing device tray.

Portion 300 includes a different perspective of the distribution manifold 114, which has injection ports 304 configured to deliver the dielectric fluid in a manner similar to injection port 116 described above. As can be seen, however, the injection ports 304 can be disposed on the central conduit 202 rather than on a branch conduit (e.g., branch conduit 204). As depicted, in some embodiments, the central conduit 202 can have a solid top 308. In such embodiments, the central conduit 202 can be coupled with a pump (e.g., pump 106 of FIG. 1) at a lower portion of the central conduit 202.
The pump can deliver the dielectric fluid from the collection zone, as described above, through the central conduit 202 to the injection ports disposed on the central conduit 202 and/or to one or more branch conduits (e.g., branch conduit 204). The one or more branch conduits can then deliver the dielectric fluid to one or more injection ports (e.g., injection port 116) disposed on the one or more branch conduits, which in turn can inject the dielectric fluid into the individual computing device trays.

FIG. 4 shows a schematic diagram of a server system with a dual phase immersion cooling arrangement, in accordance with various embodiments. The dual phase immersion cooling arrangement of server system 400 may include vertical computing device trays 410 and horizontal computing device trays 412, collectively referred to hereinafter as computing device trays for simplicity. The computing device trays can be configured to hold a dielectric fluid, such as the dielectric fluids described above, and one or more circuit boards (e.g., circuit board 122 of FIG. 1) immersed in the dielectric fluid. The dielectric fluid can be configured to absorb thermal energy generated by the one or more circuit boards. In some embodiments, the dielectric fluid can be configured to evaporate into a dielectric gas in response to absorbing the thermal energy generated by the one or more circuit boards. This can be accomplished by selecting or engineering a dielectric fluid having a boiling point at or below a satisfactory operating temperature of the one or more circuit boards. As used herein, a satisfactory operating temperature may refer to a temperature at which the one or more circuit boards are able to operate without risk of damage due to the thermal energy generated by the operation of the one or more circuit boards.

In these two phase embodiments, the computing device trays can have top openings, such as the top openings of the computing device trays described above, that allow the dielectric fluid in the form of dielectric gas to escape from the computing device trays into the server rack assembly. This escape of the dielectric gas into the server rack assembly may allow for transfer, away from the one or more circuit boards, of the thermal energy absorbed by the dielectric gas. To maintain a sufficient level of dielectric fluid for cooling in the computing device trays, the computing device trays can have additional openings to allow the dielectric fluid to be delivered into the computing device trays. In some embodiments, the opening for the escape of the dielectric fluid from an individual computing device tray and the opening for the delivery of the dielectric fluid into the individual computing device tray may be the same opening. For example, as described above, a top opening may allow for the release of the dielectric gas generated from evaporation of the dielectric liquid, while also allowing injection of the dielectric liquid through the same top opening.

In some two phase embodiments, additional dielectric fluid may also escape, while still in liquid form, through openings such as the outlet port 118 discussed above with reference to FIGS. 1-3 and the outlet port 302 discussed above with respect to FIG. 3.
In such embodiments, the dielectric fluid that escapes while still in the liquid phase can be circulated through a fluid circulation system, including collection zone 416 and central conduit 414, configured in a manner similar to that described above with respect to the collection zone of FIG. 1.

In embodiments, the dual phase cooling arrangement may also include a condenser 406. The condenser 406 can be configured to perform a condensation process to extract thermal energy from the dielectric gas and thereby cause the dielectric gas to condense back into the dielectric fluid. Such a process can be performed by flowing a liquid or gas through the condenser 406. Such a liquid or gas may be cooled to a temperature below the condensation point of the dielectric gas by a cooling element, such as a cooling loop, an air cooling element, or any other suitable cooling element. The cooling loop can be implemented in any conventional manner. The liquid or gas can absorb the thermal energy of the dielectric gas to cause the dielectric gas to condense back into the dielectric liquid. The thermal energy extracted by the liquid or gas can be routed from the condenser 406 via tube 408 to the cooling loop, where the thermal energy can be extracted from the liquid or gas. The liquid or gas can then be returned to the condenser 406 via another tube, not shown, where the condensation process described above can be repeated. In embodiments, the cooling loop may be external to the cooling system 400 or the server rack assembly to prevent the thermal energy extracted by the cooling loop from adding to the thermal energy of the cooling system 400. In some embodiments, the dielectric fluid generated from the condensation of the dielectric gas may fall back into the computing device trays, and any dielectric fluid that does not fall into the computing device trays may fall into the collection zone 416 for recirculation back to the computing device trays. In other embodiments, an upper reservoir, such as the upper reservoir 108 of FIG. 1, can be positioned below the condenser to collect the dielectric fluid produced by the condensation process. In such embodiments, the dielectric fluid can then be delivered to the computing device trays via a gravity flow configuration, as described above.

In some embodiments, the condenser 406 may not extract sufficient thermal energy from the dielectric gas to condense a sufficient amount of the dielectric gas back into the dielectric fluid. In such embodiments, a sub-ambient cooler 402 can be selectively used to further extract thermal energy from the dielectric gas to condense a greater amount of the dielectric gas back into the dielectric fluid. The thermal energy extracted by the sub-ambient cooler 402 can be routed from the sub-ambient cooler via conduit 404 to a cooling component, such as a cooling loop as described above, an air cooling element configured to discharge sufficient thermal energy into the air, or any other suitable cooling component.

In some embodiments, the pressure within the cooling system 400 or the server rack assembly may begin to increase as the dielectric fluid is converted into the dielectric gas. In such embodiments, the cooling system 400 can have a pressure sensor integrated therewith. The pressure sensor can be configured to measure the ambient air pressure of the cooling system 400 or the server rack assembly. In such embodiments, the condenser 406 and/or the sub-ambient cooler 402 can be configured to perform the condensation process described above in response to the ambient air pressure of the cooling system 400 or the server rack assembly reaching or exceeding a predefined threshold.
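The pressure-triggered condensation just described amounts to a threshold comparison with optional staging of the sub-ambient cooler. The C sketch below is only a minimal illustration: the sensor and actuator interfaces, the threshold value, and the hysteresis band are all hypothetical assumptions, not taken from the disclosure.

/* Sketch of pressure-triggered condensation (condenser 406 and
 * sub-ambient cooler 402). Names and constants are hypothetical. */
#include <stdbool.h>
#include <stdint.h>

#define PRESSURE_THRESHOLD_KPA 105u  /* predefined threshold (arbitrary) */
#define PRESSURE_HYSTERESIS_KPA  2u

extern uint32_t read_pressure_kpa(void);      /* pressure sensor (assumed API) */
extern void     condenser_enable(bool on);    /* condenser 406 (assumed API) */
extern void     sub_ambient_enable(bool on);  /* cooler 402 (assumed API) */

void pressure_control_step(void)
{
    static bool condensing = false;
    uint32_t p = read_pressure_kpa();

    if (!condensing && p >= PRESSURE_THRESHOLD_KPA) {
        condensing = true;
        condenser_enable(true);
        /* The sub-ambient cooler is optional staging; here it is tied to
         * the same trigger for simplicity. */
        sub_ambient_enable(true);
    } else if (condensing && p + PRESSURE_HYSTERESIS_KPA < PRESSURE_THRESHOLD_KPA) {
        condensing = false;
        condenser_enable(false);
        sub_ambient_enable(false);
    }
}

The hysteresis band merely avoids rapid cycling near the threshold; the disclosure only requires that the condensation process be performed when the measured pressure meets or exceeds the predefined threshold.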
It will be appreciated that, while the cooling system 400 is discussed above with reference to a two phase embodiment, the cooling system 400 can also be implemented in a single phase embodiment to remove thermal energy from the air within the cooling system 400.

FIG. 5 shows a schematic diagram of a server system 500 with an alternative immersion cooling arrangement, in accordance with various embodiments. Except that a plurality of condensers 502a-c are utilized to perform the condensation process, the alternative immersion cooling arrangement of server system 500 can be configured, at its core, in a manner similar to the dual phase immersion cooling arrangement of server system 400. As depicted, the condensers 502a-c can be configured to correspond with tiers of the vertical computing device trays; however, it will be appreciated that, depending on the cooling requirements and configuration of the server system 500, the multiple condenser configuration can employ any number of condensers in any suitable configuration. Moreover, in some embodiments, one or more separate condensers may be employed for the vertical computing device trays 410, while one or more different condensers may be employed for the horizontal computing device trays 412. Thermal energy extracted by the condensers 502a-c can be routed away from the condensers 502a-c via output 510, which can be coupled with a complementary cooling mechanism, such as a cooling loop, an air cooling element, or any other suitable cooling mechanism external to the server rack assembly, while intake 504 can return the flow from the cooling mechanism into the immersion cooling arrangement of the server system 500. Although depicted in front of the vertical computing device trays, it will be appreciated that the inflow to, and outflow from, the cooling mechanism can be disposed along the sides of the computing device trays to allow access to the front end of all of the computing device trays. It will be appreciated that, while the cooling system 500 is discussed above with reference to a two phase embodiment, the cooling system 500 can also be implemented in a single phase embodiment to remove thermal energy from the air within the cooling system 500.

FIG. 6 shows an illustration of another alternative immersion cooling arrangement of server system 600, in accordance with various embodiments of the present disclosure. Except that the condensers 502a-c of FIG. 5 are replaced with evaporator/condenser combinations, such as evaporators 602a-c coupled with condensers 604a-c, respectively, the alternative immersion cooling arrangement of server system 600 can be configured, at its core, in a manner similar to the immersion cooling arrangement of server system 500. The evaporator/condenser combinations can be utilized to perform the condensation process described above. In such embodiments, the thermal energy can be absorbed by a liquid contained within the evaporators 602a-c, which can cause the liquid to evaporate into a gas. The gas then carries the thermal energy absorbed by the gas to the condensers 604a-c, which can absorb the thermal energy from the gas, in a manner similar to that described above with respect to FIG. 4, to cause the gas to return to a liquid.
The liquid then flows back to the evaporators, where the process can be repeated. Thermal energy extracted by the condensers 604a-c can be routed away from the condensers 604a-c via output 612, which can be coupled with a complementary cooling mechanism, such as a cooling loop, an air cooling element, or any other suitable cooling mechanism, while intake 606 can return the flow from the cooling mechanism into the cooling system 600, as described with reference to FIG. 5. While the condensers 604a-c are depicted in front of the vertical computing device trays, it will be appreciated that the condensers 604a-c, and the inflow to and outflow from the cooling mechanism, can be disposed along the sides of the computing device trays to allow access to the front end of all of the computing device trays. Additionally, in some embodiments, the evaporator/condenser combinations can take the form of heat pipes. It will be appreciated that, while the cooling system 600 is discussed above with reference to a two phase embodiment, the cooling system 600 can also be implemented in a single phase embodiment to remove thermal energy from the air within the cooling system 600.

FIG. 7 schematically illustrates a server system 704, having server circuit boards and an immersion cooling arrangement, deployed in a housing 702, in accordance with various embodiments. In embodiments, the immersion cooling arrangement of server system 704 can be any of the immersion cooling arrangements described above. The housing 702 can be configured to house the server circuit boards and the immersion cooling arrangement. In embodiments, the housing 702 can have one or more access doors, such as access door 706 and access door 708. These access doors can allow for easy maintenance of the server circuit boards and the immersion cooling arrangement of server system 704. For example, access door 706 can allow access to the vertical computing device trays, while access door 708 can allow access to the horizontal computing device trays. In some embodiments, the housing 702 can be configured to be sealed when the access doors 706 and 708 are closed. This may be beneficial to limit or prevent thermal energy external to the housing from being introduced into the immersion cooling system of server system 704. Additionally, in two phase embodiments, the sealed housing can prevent the dielectric gas from escaping the immersion cooling system of server system 704. In some embodiments, a pressure sensor 710, such as the pressure sensor described above with respect to FIG. 4, may also be incorporated into the housing 702.
In other embodiments, such a pressure sensor can be incorporated into the immersion cooling system of server system 704.

Examples
Some non-limiting examples are:

Example 1 may include an apparatus for immersion cooling, comprising: a plurality of trays, individual trays of the plurality of trays to hold one or more circuit boards and having a first opening to allow a dielectric fluid to be injected into the tray and a second opening to allow the dielectric fluid to escape from the tray; and a fluid circulation system including: a collection zone disposed below the plurality of trays to collect the dielectric fluid that escapes from the plurality of trays; and a distribution manifold having a plurality of injection ports above or adjacent to individual trays of the plurality of trays, the distribution manifold coupled with the collection zone to deliver the dielectric fluid collected in the collection zone back to the plurality of trays via the injection ports.

Example 2 may include the subject matter of Example 1, wherein the dielectric fluid is to absorb thermal energy generated by the one or more circuit boards and to carry, through the escape of the dielectric fluid into the interior of the apparatus, the thermal energy away from the one or more circuit boards.

Example 3 may include the subject matter of Example 2, wherein the second opening is designed to allow the escape of the dielectric fluid through one or more of: leakage of the dielectric fluid; overflow of the dielectric fluid; or evaporation of the dielectric fluid.

Example 4 may include the subject matter of Example 2, wherein the second opening is an outlet port disposed in a side or bottom of the tray to allow leakage of the dielectric fluid from the tray.

Example 5 may include the subject matter of Example 2, wherein the dielectric fluid is to evaporate into a dielectric gas in response to absorption of the thermal energy given off by the one or more circuit boards, and wherein the second opening includes a top opening to allow the dielectric gas to escape from the tray.

Example 6 may include the subject matter of Example 5, further comprising a condenser to perform a condensation process to remove thermal energy from the dielectric gas, causing the dielectric gas to condense back into the dielectric fluid, and to route the thermal energy removed from the dielectric gas to a cooling loop.

Example 7 may include the subject matter of Example 6, further comprising a sub-ambient cooler to further remove thermal energy from the dielectric gas.

Example 8 may include the subject matter of Example 6, wherein the collection zone is a first collection zone, and wherein the fluid circulation system further comprises a second collection zone, coupled with the distribution manifold and disposed below the condenser, to collect the dielectric fluid produced by the condensation process.

Example 9 may include the subject matter of Example 6, further comprising a pressure sensor to output a measurement of ambient air pressure, and wherein the condenser is to perform the condensation process in response to a determination that the measurement of the ambient air pressure meets or exceeds a predefined threshold.

Example 10 may include the subject matter of Example 6, further comprising one or more additional condensers disposed at different locations around the trays or the fluid circulation system.

Example 11 may include the subject matter of Example 5, further comprising: one or more evaporators to remove thermal energy from the dielectric gas, causing the dielectric gas to condense back into the dielectric fluid; and one or more
condensers, coupled with respective evaporators, to remove the thermal energy from the evaporators and to route the thermal energy from the evaporators to a cooling loop, wherein the one or more evaporators and the one or more condensers coupled with the respective evaporators form one or more heat pipes.

Example 12 may include the subject matter of any of Examples 1-11, wherein the fluid circulation system further comprises a pump coupled with the collection zone and the distribution manifold, wherein the pump is to deliver the dielectric fluid collected in the collection zone to the distribution manifold.

Example 13 may include the subject matter of Example 12, wherein the pump is to deliver the dielectric fluid collected in the collection zone at a rate equal to or greater than a rate at which the dielectric fluid escapes from the plurality of trays.

Example 14 may include the subject matter of Example 12, wherein the fluid circulation system further comprises a heat exchanger coupled with the pump, and wherein the heat exchanger is to remove thermal energy from the dielectric fluid.

Example 15 may include the subject matter of any of Examples 1-11, wherein the first opening is an inlet port to receive delivery of the dielectric fluid from at least one of the injection ports of the distribution manifold.

Example 16 may include the subject matter of any of Examples 1-11, wherein the apparatus is a computer server rack assembly.

Example 17 may include the subject matter of any of Examples 1-11, wherein the apparatus is a computer server and further includes the one or more circuit boards.

Example 18 may include the subject matter of any of Examples 1-11, wherein a first subset of the plurality of trays are disposed vertically in the apparatus, and wherein a second subset of the plurality of trays are disposed horizontally.

Example 19 may include a method for immersion cooling, comprising: providing, by a distribution manifold of an immersion cooling arrangement of a server system, a dielectric fluid to a plurality of trays, wherein individual trays of the plurality of trays hold one or more circuit boards immersed in the dielectric fluid, and individual trays have a first opening to allow the dielectric fluid to be injected into the tray by the distribution manifold and a second opening to allow the dielectric fluid to escape from the tray into the interior of the server system; collecting, by a collection zone of the immersion cooling arrangement, the dielectric fluid that escapes from the plurality of trays; and recirculating, by a pump of the immersion cooling arrangement, the dielectric fluid collected in the collection zone back to the distribution manifold for distribution to the plurality of trays.

Example 20 may include the subject matter of Example 19, further comprising: absorbing, by the dielectric fluid, thermal energy generated by the one or more circuit boards; and carrying, through the escape of the dielectric fluid into the interior of the server system, the thermal energy away from the one or more circuit boards.

Example 21 may include the subject matter of Example 20, wherein the escape of the dielectric fluid occurs through one or more of: leakage of the dielectric fluid; overflow of the dielectric fluid; or evaporation of the dielectric fluid.

Example 22 may include the subject matter of Example 20, wherein the second opening is an outlet port, and wherein the escape of the dielectric fluid occurs via leakage of the dielectric fluid from the tray through the outlet port.
Example 23 may include the subject matter of Example 19, further comprising: absorbing, by the dielectric fluid, thermal energy given off by the one or more circuit boards to cause the dielectric fluid to evaporate into a dielectric gas, wherein the second opening allows the dielectric gas to escape from the tray.

Example 24 may include the subject matter of Example 23, further comprising: condensing, by a condenser of the immersion cooling arrangement, the dielectric gas back into the dielectric fluid through removal of thermal energy from the dielectric gas; and routing, by a conduit, the thermal energy removed from the dielectric gas to a cooling loop.

Example 25 may include the subject matter of Example 24, further comprising: removing additional thermal energy from the dielectric gas by a sub-ambient cooler of the server system.

Example 26 may include the subject matter of Example 24, wherein the collection zone is a first collection zone, and further comprising: collecting, by a second collection zone disposed below the condenser, the dielectric fluid produced by the condensing.

Example 27 may include the subject matter of Example 24, further comprising: determining, by a pressure sensor of the server system, whether an ambient air pressure of the server system meets or exceeds a predefined threshold, wherein the condensing is performed based on a result of the determining.

Example 28 may include the subject matter of Example 23, further comprising: removing thermal energy from the dielectric gas by one or more evaporators of the server system to cause the dielectric gas to condense back into the dielectric fluid; removing, by one or more condensers of the server system, the thermal energy from the evaporators; and routing, by a conduit, the thermal energy removed from the dielectric gas to a cooling loop.

Example 29 may include the subject matter of any of Examples 19-28, wherein the dielectric fluid collected in the collection zone is delivered to the distribution manifold by a pump of the server system.

Example 30 may include the subject matter of Example 29, wherein delivering the dielectric fluid by the pump further comprises delivering the dielectric fluid at a rate equal to or greater than a rate at which the dielectric fluid escapes from the plurality of trays.

Example 31 may include the subject matter of any of Examples 19-30, wherein the recirculating further comprises: cooling, by a heat exchanger of the immersion cooling arrangement, the dielectric fluid to reduce the thermal energy contained in the dielectric fluid to allow the dielectric fluid to absorb more thermal energy.

Example 32 may include a server system comprising: a plurality of circuit boards; and an immersion cooling arrangement having: a plurality of trays, individual trays of the plurality of trays to hold one or more circuit boards of the plurality of circuit boards and having a first opening to allow a dielectric fluid to be injected into the tray and a second opening to allow the dielectric fluid to escape from the tray; a collection zone disposed below the plurality of trays to collect the dielectric fluid that escapes from the plurality of trays; and a distribution manifold having a plurality of injection ports above or adjacent to respective trays of the plurality of trays, the distribution manifold coupled with the collection zone to deliver the dielectric fluid collected in the collection zone back to the plurality of trays via the injection ports.
Example 33 may include the subject matter of Example 32, wherein the dielectric fluid is to absorb thermal energy generated by the one or more circuit boards and to carry, through the escape of the dielectric fluid into the interior of the server system, the thermal energy away from the one or more circuit boards.

Example 34 may include the subject matter of Example 33, wherein the second opening is designed to allow the escape of the dielectric fluid through one or more of: leakage of the dielectric fluid; overflow of the dielectric fluid; or evaporation of the dielectric fluid.

Example 35 may include the subject matter of Example 33, wherein the second opening is an outlet port disposed in a side or bottom of the tray to allow leakage of the dielectric fluid from the tray.

Example 36 may include the subject matter of Example 32, wherein the dielectric fluid is to evaporate into a dielectric gas in response to absorption of thermal energy given off by the one or more circuit boards, and wherein the second opening includes a top opening to allow the dielectric gas to escape from the tray.

Example 37 may include the subject matter of Example 36, further comprising a condenser to perform a condensation process to remove thermal energy from the dielectric gas, causing the dielectric gas to condense back into the dielectric fluid, and to route the thermal energy removed from the dielectric gas to a cooling loop.

Example 38 may include the subject matter of Example 37, further comprising a sub-ambient cooler to further remove thermal energy from the dielectric gas.

Example 39 may include the subject matter of Example 37, wherein the collection zone is a first collection zone, and wherein the immersion cooling arrangement further comprises a second collection zone, coupled with the distribution manifold and disposed below the condenser, to collect the dielectric fluid produced by the condensation process.

Example 40 may include the subject matter of Example 37, further comprising a pressure sensor to output a measurement of ambient air pressure within the server system, and wherein the condenser is to perform the condensation process in response to a determination that the measurement of the ambient air pressure meets or exceeds a predefined threshold.

Example 41 may include the subject matter of Example 37, further comprising one or more additional condensers disposed at different locations around the trays or the immersion cooling arrangement.

Example 42 may include the subject matter of Example 36, further comprising: one or more evaporators to remove thermal energy from the dielectric gas, causing the dielectric gas to condense back into the dielectric fluid; and one or more condensers, coupled with respective evaporators, to remove the thermal energy from the evaporators and to route the thermal energy from the evaporators to a cooling loop, wherein the one or more evaporators and the one or more condensers coupled with the respective evaporators form one or more heat pipes.

Example 43 may include the subject matter of any of Examples 32-42, wherein the immersion cooling arrangement further comprises a pump coupled with the collection zone and the distribution manifold, wherein the pump is to deliver the dielectric fluid collected in the collection zone to the distribution manifold.

Example 44 may include the subject matter of Example 43, wherein the pump is to deliver the dielectric fluid collected in the collection zone at a rate equal to or greater than a rate at which the dielectric fluid escapes from the plurality of trays.
Example 45 may include the subject matter of Example 43, wherein the immersion cooling arrangement further comprises a heat exchanger coupled with the pump, and wherein the heat exchanger is to remove thermal energy from the dielectric fluid.

Example 46 may include the subject matter of any of Examples 32-42, wherein the first opening is an inlet port to receive delivery of the dielectric fluid from at least one of the injection ports of the distribution manifold.

Example 47 may include the subject matter of any of Examples 32-42, wherein the server system is a computer server rack assembly.

Example 48 may include the subject matter of any of Examples 32-42, wherein a first subset of the plurality of trays are disposed vertically in the server system, and wherein a second subset of the plurality of trays are disposed horizontally.

Example 49 may include the subject matter of any of Examples 32-42, further comprising an additional plurality of circuit boards and an air cooled or closed loop liquid cooling arrangement to cool the additional plurality of circuit boards.

Various embodiments may include any suitable combination of the above-described embodiments, including alternative (or) embodiments of embodiments that are described above in conjunctive form (and) (e.g., the "and" may be "and/or"). Furthermore, some embodiments may include one or more articles of manufacture (e.g., non-transitory computer-readable media) having instructions stored thereon that, when executed, result in actions of any of the above-described embodiments. Moreover, some embodiments may include apparatuses or systems having any suitable means for carrying out the various operations of the above-described embodiments.

The above description of illustrated implementations, including what is described in the Abstract, is not intended to be exhaustive or to limit embodiments of the present disclosure to the precise forms disclosed. While specific implementations and examples are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the present disclosure, as those skilled in the relevant art will recognize.

These modifications may be made to embodiments of the present disclosure in light of the above detailed description. The terms used in the following claims should not be construed to limit embodiments of the present disclosure to the specific implementations disclosed in the specification and the claims. Rather, the scope is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim construction.
The present disclosure relates to reading sequential data from memory using pivot tables. In one approach, a computer storage device has one or more pivot tables and corresponding bitmaps stored in volatile memory. The storage device has non-volatile storage media that stores data for a host device. The pivot tables and bitmaps are used to determine physical addresses of the non-volatile storage media for logical addresses that are received in commands from the host device and that are determined to be within a sequential address range (e.g., LBAs that are part of a prior sequential write operation by the host device). When a command received by the storage device includes a logical address within the sequential address range, one of the pivot tables and its corresponding bitmap are used to determine the physical address of the non-volatile storage media that corresponds to the logical address.
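Before turning to the claims, the fast-path lookup can be made concrete with a short sketch. The structure layout, names, and per-entry coverage below are illustrative assumptions only; the disclosure requires just a per-LBA bit marking membership in a sequential range and a pivot entry recording the starting logical and physical addresses of that range.

/* Illustrative fast-path lookup using a pivot table and bitmap.
 * Layout and names are hypothetical. */
#include <stdbool.h>
#include <stdint.h>

#define LBAS_PER_PIVOT 1024u  /* LBAs covered by one pivot entry (arbitrary) */

struct pivot_entry {
    uint64_t start_lba;  /* first LBA of the sequential range */
    uint64_t start_pa;   /* physical address mapped to start_lba */
};

extern struct pivot_entry pivot_table[];  /* in volatile memory */
extern uint8_t bitmap[];                  /* one bit per LBA, in volatile memory */

static bool lba_is_sequential(uint64_t lba)
{
    return (bitmap[lba / 8] >> (lba % 8)) & 1u;
}

/* Returns true and fills *pa when the LBA lies in a known sequential range. */
bool sequential_lookup(uint64_t lba, uint64_t *pa)
{
    if (!lba_is_sequential(lba))
        return false;                      /* fall back to the full L2P table */

    const struct pivot_entry *e = &pivot_table[lba / LBAS_PER_PIVOT];
    uint64_t offset = lba - e->start_lba;  /* displacement within the range */
    *pa = e->start_pa + offset;            /* physical = start + displacement */
    return true;
}

The benefit is that a hit avoids loading any portion of the full logical-to-physical mapping table from the non-volatile media: one bit test and one addition resolve the address.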
1. A method for a storage device, the method comprising:
receiving, by a controller, a command containing a first logical address of data stored in a non-volatile storage medium;
determining, by the controller, whether the first logical address is within a sequential range based on a first stored value corresponding to the first logical address; and
in response to determining that the first logical address is within the sequential range, determining a first physical address corresponding to the first logical address, wherein determining the first physical address includes:
determining a displacement from a starting physical address associated with the sequential range, wherein the displacement is determined by a difference between the first logical address and a second logical address corresponding to the starting physical address; and
determining the first physical address by adding the displacement to the starting physical address.

2. The method of claim 1, wherein the command is a read command or a write command.

3. The method of claim 1, wherein the first stored value is one of a plurality of bits stored in a bitmap, and each of the plurality of bits corresponds to a logical address within a host address range.

4. The method of claim 3, wherein each bit of the bitmap has a first binary value or a second binary value, the first binary value indicating that the corresponding logical address is in a sequential range, and the second binary value indicating that the corresponding logical address is not in a sequential range.

5. The method of claim 4, wherein the starting physical address is a first starting physical address, the method further comprising:
storing a pivot table including a plurality of starting physical addresses including the first starting physical address, wherein each starting physical address corresponds to a respective range of logical addresses;
wherein the bitmap includes a plurality of bit arrays, each bit array including a portion of the plurality of bits, and each array corresponding to a respective starting physical address of the pivot table.

6. The method of claim 5, further comprising:
storing a partition in volatile memory, wherein the partition provides a logical-to-physical mapping for logical addresses in commands received from a host device, and wherein the partition includes a plurality of physical addresses each corresponding to a respective logical address;
wherein each starting physical address of the pivot table corresponds to a respective portion of the partition.

7. The method of claim 3, wherein the command is received from a host device, and the host address range is used by the host device to logically address data stored in the non-volatile storage medium.

8. The method of claim 1, wherein the command is a first command received from a host device, the method further comprising:
storing a table in the non-volatile storage medium, wherein the table provides a logical-to-physical mapping for logical addresses in commands received from the host device;
receiving a second command including a third logical address from the host device;
determining whether the third logical address is within the sequential range based on a second stored value corresponding to the third logical address; and
in response to determining that the third logical address is not within the sequential range, determining a second physical address corresponding to the third logical address, wherein determining the second physical address includes:
loading a logical-to-physical partition from the table into volatile memory; and
determining the second physical address using the loaded partition.

9. The method of claim 1, wherein the starting physical address is a first starting physical address, the method further comprising:
storing a plurality of starting physical addresses in a table, each starting physical address associated with a respective sequential range of logical addresses of data stored in the non-volatile storage medium, the starting physical addresses including the first starting physical address.

10. A system comprising:
a non-volatile storage medium;
a volatile memory configured to store a bitmap, the bitmap including a first bit corresponding to a first logical address;
a controller; and
firmware containing instructions configured to instruct the controller to:
receive a command including the first logical address;
determine whether the first logical address is within a sequential range based on the first bit of the bitmap; and
in response to determining that the first logical address is within the sequential range, determine a first physical address of the non-volatile storage medium corresponding to the first logical address, wherein determining the first physical address includes:
determining a displacement from a starting physical address associated with the sequential range, wherein the displacement is determined by a difference between the first logical address and a second logical address corresponding to the starting physical address; and
determining the first physical address by adding the displacement to the starting physical address.

11. The system of claim 10, wherein the instructions are configured to further instruct the controller to use the determined first physical address to read or write data in the non-volatile storage medium.

12. The system of claim 10, wherein the starting physical address is a first starting physical address, and wherein the instructions are configured to further instruct the controller to:
store a plurality of starting physical addresses in a table, each starting physical address associated with a respective sequential range of logical addresses of data stored in the non-volatile storage medium, the starting physical addresses including the first starting physical address;
wherein determining the first physical address further includes using the table to determine the first starting physical address.

13. The system of claim 12, wherein the volatile memory is further configured to store the table.
The method of claim 3, wherein the command is received from a host device, and the host address range is used by the host device to logically address data stored in the non-volatile storage medium.8.The method according to claim 1, wherein the command is the first command received from the host device, the method further comprising:Storing a table in the non-volatile storage medium, wherein the table provides a logical-to-physical mapping for the logical address in the command received from the host device;Receiving a second command including a third logical address from the host device;Determining whether the third logical address is within the sequence range based on the second stored value corresponding to the third logical address; andIn response to determining that the third logical address is not within the sequence range, determining a second physical address corresponding to the third logical address, wherein determining the second physical address includes:Load logical to physical partitions from the table into volatile memory; andThe second physical address is determined using the load block.9.The method according to claim 1, wherein the starting physical address is a first starting physical address, and the method further comprises:A plurality of starting physical addresses are stored in the table, and each starting physical address is associated with a corresponding sequence range of the logical address of the data stored in the non-volatile storage medium, and the starting physical address includes the The first starting physical address.10.A system including:Non-volatile storage media;Volatile memory configured to store a bitmap, the bitmap including the first bit corresponding to the first logical address;Controller; andFirmware containing instructions configured to instruct the controller:Receiving a command including the first logical address;Determining whether the first logical address is within a sequence range based on the first bit of the bitmap; andIn response to determining that the first logical address is within the sequence range, determining a first physical address of the non-volatile storage medium corresponding to the first logical address, wherein determining the first physical address includes:Determining a displacement from a starting physical address associated with the sequence range, wherein the displacement is determined by the difference between the first logical address and a second logical address corresponding to the starting physical address; andThe first physical address is determined by adding the displacement to the starting physical address.11.The system of claim 10, wherein the instructions are configured to further instruct the controller to use the determined first physical address to read or write data in the non-volatile storage medium.12.The system of claim 10, wherein the starting physical address is a first starting physical address, and wherein the instructions are configured to further instruct the controller:A plurality of starting physical addresses are stored in the table, and each starting physical address is associated with a corresponding sequence range of the logical address of the data stored in the non-volatile storage medium, and the starting physical address includes the The first starting physical address;Wherein determining the first physical address further includes using the table to determine the first starting physical address.13.The system of claim 12, wherein the volatile memory is further configured to store the 
14. The system of claim 13, wherein the table is a first table, a second table is stored in the non-volatile storage medium, and the second table provides a logical-to-physical mapping for logical addresses in commands received from a host device.
15. The system of claim 14, wherein the command is a first command, and wherein the instructions are configured to further instruct the controller to:
receive a second command including a third logical address;
in response to receiving the second command, load a logical-to-physical partition from the second table into the volatile memory; and
use the loaded partition to determine a second physical address corresponding to the third logical address.
16. A non-transitory machine-readable storage medium storing instructions that, when executed on at least one processing device, cause the at least one processing device to at least:
receive a read command including a first logical address;
determine whether the first logical address is within a sequential range based on a bit corresponding to the first logical address;
in response to determining that the first logical address is within the sequential range:
determine a displacement from a starting physical address associated with the sequential range, wherein the displacement is determined by a difference between the first logical address and a second logical address corresponding to the starting physical address; and
determine a physical address corresponding to the first logical address by adding the displacement to the starting physical address; and
read data stored in a non-volatile storage medium using the determined physical address.
17. The non-transitory machine-readable storage medium of claim 16, wherein the starting physical address is one of a plurality of starting physical addresses stored in a pivot table in a volatile memory.
18. The non-transitory machine-readable storage medium of claim 17, wherein the bit is one of a plurality of bits in a bitmap stored in the volatile memory.
19. The non-transitory machine-readable storage medium of claim 18, wherein the read command is a first command, the starting physical address is a first starting physical address, and the instructions further cause the at least one processing device to:
store a mapping table in the non-volatile storage medium, wherein the mapping table provides a logical-to-physical mapping for logical addresses in commands received from a host device;
in response to receiving a second command from the host device, load a logical-to-physical partition from the mapping table into the volatile memory; and
in response to loading the partition, update a second starting physical address in the pivot table.
20. The non-transitory machine-readable storage medium of claim 16, wherein the sequential range is a first sequential range, and the instructions further cause the at least one processing device to:
determine multiple sequential ranges within a logical address range;
determine that the first sequential range has the maximum length of the sequential ranges; and
in response to determining that the first sequential range has the maximum length:
associate the starting physical address with the first sequential range; and
update bits of the bitmap corresponding to the first sequential range, wherein each updated bit indicates that a physical address can be determined for a corresponding logical address based on a displacement from the starting physical address.
Using a Pivot Table to Read Sequential Data from Memory

TECHNICAL FIELD

At least some of the embodiments disclosed herein relate generally to computer storage devices and, more specifically (but not exclusively), to using a pivot table to read data stored in a non-volatile storage device.

BACKGROUND

Various types of non-volatile storage devices can be used to store data. Non-volatile storage devices may include NAND flash memory devices.

A typical computer storage device has a controller that receives data access requests from a host computer and performs programmed computing tasks to carry out the requests in a manner that may be specific to the media and structure configured in the storage device. In one example, a flash memory controller manages data stored in flash memory and communicates with a computing device.

In some cases, flash memory controllers are used in SD cards or similar media in digital cameras, mobile phones, and the like. In other cases, a USB flash drive uses a flash memory controller to communicate with a computer through a USB port.

Firmware can be used to operate the flash memory controller of a particular storage device. In one example, when a computer system or device reads data from or writes data to a flash memory device, it communicates with the flash memory controller.

Typically, the flash memory controller includes a flash translation layer (FTL) that maps a logical block address (LBA) received from a host device to a physical address of the flash memory. In this way, the FTL provides a logical-to-physical mapping.

In some cases, the storage device is a managed NAND device that includes a memory controller and supports interfaces such as eMMC and SD. Universal Flash Storage (UFS) is a flash storage specification for digital cameras, mobile phones, and the like.
UFS is regarded as a replacement for eMMC and SD cards.

SUMMARY

In one aspect, the present invention relates to a method for a storage device, the method comprising: receiving, by a controller, a command containing a first logical address of data stored in a non-volatile storage medium; determining, by the controller, whether the first logical address is within a sequential range based on a first stored value corresponding to the first logical address; and in response to determining that the first logical address is within the sequential range, determining a first physical address corresponding to the first logical address, wherein determining the first physical address includes: determining a displacement from a starting physical address associated with the sequential range, wherein the displacement is determined by a difference between the first logical address and a second logical address corresponding to the starting physical address; and determining the first physical address by adding the displacement to the starting physical address.

In another aspect, the present invention relates to a system including: a non-volatile storage medium; a volatile memory configured to store a bitmap, the bitmap including a first bit corresponding to a first logical address; a controller; and firmware containing instructions configured to instruct the controller to: receive a command including the first logical address; determine whether the first logical address is within a sequential range based on the first bit of the bitmap; and in response to determining that the first logical address is within the sequential range, determine a first physical address of the non-volatile storage medium corresponding to the first logical address, wherein determining the first physical address includes: determining a displacement from a starting physical address associated with the sequential range, wherein the displacement is determined by a difference between the first logical address and a second logical address corresponding to the starting physical address; and determining the first physical address by adding the displacement to the starting physical address.

In yet another aspect, the present invention relates to a non-transitory machine-readable storage medium storing instructions that, when executed on at least one processing device, cause the at least one processing device to at least: receive a read command including a first logical address; determine whether the first logical address is within a sequential range based on a bit corresponding to the first logical address; in response to determining that the first logical address is within the sequential range: determine a displacement from a starting physical address associated with the sequential range, wherein the displacement is determined by a difference between the first logical address and a second logical address corresponding to the starting physical address; and determine a physical address corresponding to the first logical address by adding the displacement to the starting physical address; and read data stored in a non-volatile storage medium using the determined physical address.

BRIEF DESCRIPTION OF THE DRAWINGS

The embodiments are illustrated by way of example and not limitation in the figures, in which like references indicate similar elements.

FIG. 1 illustrates a storage device including a volatile memory storing a pivot table and a bitmap, according to some embodiments.
FIG. 2 illustrates a logical-to-physical mapping of a storage device, according to some embodiments.

FIG. 3 illustrates a pivot table, according to some embodiments.

FIG. 4 illustrates a bitmap, according to some embodiments.

FIG. 5 illustrates the logical correspondence between logical block addresses, pivot tables, and bitmaps, according to some embodiments.

FIG. 6 illustrates a logical-to-physical partition including logical block addresses and corresponding physical addresses, according to some embodiments.

FIG. 7 illustrates a pivot table containing starting physical addresses corresponding to the logical block addresses in the logical-to-physical partition of FIG. 6, according to some embodiments.

FIG. 8 illustrates a bitmap including bits corresponding to the logical block addresses in the logical-to-physical partition of FIG. 6, according to some embodiments.

FIG. 9 is a graph illustrating exemplary thousands of input/output operations per second (KIOPS) on the vertical axis versus percentage of randomness on the horizontal axis for a performance simulation of an exemplary storage device, according to some embodiments.

FIG. 10 shows a method for determining a physical address based on a logical address in a command received from a host device by using a pivot table and a bitmap, according to some embodiments.

DETAILED DESCRIPTION

At least some embodiments herein relate to determining the physical address of a memory cell of a storage device based on a logical address (e.g., an LBA) in a command received by the storage device from a host device (e.g., a mobile phone or other computing device that reads data stored in the storage device).

The physical memory elements of a storage device can be arranged as logical memory blocks addressed via logical block addresses (LBAs). A logical memory block is the smallest LBA-addressable memory unit, and each LBA address identifies a single logical memory block that can be mapped to a particular physical address of a memory unit in the storage device.

The controller typically uses a logical-to-physical mapping table to determine the physical address corresponding to the logical address in a command received from the host device. The mapping table usually requires a large amount of memory to store. In cases where the storage device has limited volatile memory (e.g., SRAM) capacity (e.g., a UFS or eMMC device), most of the mapping table must be stored in the non-volatile memory of the storage device (e.g., NAND flash).

The limited size of the volatile memory creates a technical problem. Specifically, when a command is received, a new portion of the mapping table (sometimes called a partition of the mapping table) must be loaded from the non-volatile memory into the volatile memory in order for the controller to perform the logical-to-physical translation. This significantly slows down the performance of the storage device. For example, read access times increase significantly due to the need to load partitions into volatile memory.

Various embodiments of the present invention provide technical solutions to the above technical problem. In some embodiments, one or more pivot tables and corresponding bitmaps are stored in volatile memory and used to determine the physical addresses of logical addresses within a sequential range (e.g., LBAs that were part of a previous sequential write operation by the host device).
When the command is received by the storage device containing the logical address in the sequence range, the pivot table and its corresponding bitmap are used to determine the physical address corresponding to the logical address. This determination is performed without loading the new partition from the logical-to-physical mapping table stored in the non-volatile memory of the storage device to the volatile memory. In one example, the sequential range is a set of consecutive LBA addresses.In one embodiment, a method for a storage device (such as a USB drive) includes: receiving, by a controller, a first logical address (such as a NAND flash) containing data stored in a non-volatile storage medium (such as a NAND flash) LBA 10) command; the controller determines whether the first logical address is within the sequence range (for example, based on the first stored value corresponding to the first logical address (for example, the bit value 1 in the bitmap in the volatile memory)) The data written in the non-volatile memory is in the logical order from LBA7 to LBA97); and in response to determining that the first logical address is within the order range, determining the first physical address corresponding to the first logical address (for example, 1003) .The first physical address is determined by determining the displacement from the starting physical address associated with the sequence range (e.g., the starting physical address having the value 993 corresponding to LBA 0 and stored in the pivot table in volatile memory) carried out. The displacement is determined by the difference between the first logical address and the second logical address (for example, LBA 0) corresponding to the starting physical address. The first physical address is obtained by adding the displacement (for example, logical address LBA 10 minus logical address LBA 0, which is displacement 10-0=10) to the starting physical address (for example, 993+10=1003, which corresponds to LBA 10). Physical address) to determine.In another example of the above method, the host device sends a sequential write command from LBA 150 to LBA 200 (51 logical addresses) allocated from the NAND physical address 2000 to 2050. The starting physical address (hub index 1) is determined to be 2000-(150-128)=1978. In this calculation, 150 is the value of the logical address of the first LBA address in the sequence range. 2000 is the value of the physical address corresponding to the first LBA address. 128 is the value corresponding to the starting LBA address of the LBA range (for example, LBA 128 to LBA 255, as described in FIG. 5 below) covered by the pivot table using the corresponding pivot index item 1. In an example, the pivot index item 1 is the starting physical address given by the sequential pointer to the LBA 128, as described in the pivot table 701 of FIG. 7 as follows.If the read command from the host device is received with LBA=165, the controller first determines that the bit corresponding to LBA=165 in the bitmap has a value of 1. This indication pivot table can be used to determine physical addresses instead of loading blocks from non-volatile memory. The physical address corresponding to LBA=165 is calculated using the starting physical address 1978 (hub index 1) stored in the hub table. 
If a read command with LBA=165 is then received from the host device, the controller first determines that the bit corresponding to LBA=165 in the bitmap has a value of 1. This indicates that the pivot table can be used to determine the physical address instead of loading a partition from non-volatile memory. The physical address corresponding to LBA=165 is calculated using the starting physical address 1978 (pivot index 1) stored in the pivot table. The controller determines the physical address for LBA=165 as follows: 1978+(165-128)=1978+37=2015.

In one embodiment, for a random read operation in which an LBA is received, the controller determines whether the logical-to-physical partition corresponding to the received LBA has previously been loaded into RAM. If the partition is not loaded, the controller checks the bit of the bitmap corresponding to the received LBA before issuing a partition load command. If the bit is set to a high state (e.g., the bit has a value of 1), the controller can use the pivot table to determine the physical address corresponding to the received LBA, as described above.

In one embodiment, the bitmap is a logical-to-logical table. Each LBA in the logical address space of the host corresponds to a respective bit in the bitmap. Each bit has a value indicating whether the data corresponding to that LBA was written sequentially relative to the nearest preceding pivot's physical location. For example, if the bit has a value of 1, the data corresponding to the LBA is determined to have been written sequentially.

In one embodiment, the pivot table is updated after determining the consecutive address sequences in an address range. If the address range contains more than one sequence, the pivot table is updated based on the longest sequence in the range. Specifically, the starting physical address in the pivot table corresponding to the address range is updated based on determining the longest sequence in the range.

The bitmap is also updated. Specifically, each bit of the bitmap corresponding to an LBA in the determined longest sequence is set to a high state (e.g., bit value = 1). Other bits of the bitmap are set to a low state (e.g., bit value = 0).

In view of the above, using a pivot table and bitmap to determine the physical addresses of sequential logical addresses can provide various advantages. In one example, in a flash translation layer with limited RAM resources, using the pivot table and bitmap avoids the need to load logical-to-physical partitions from NAND into SRAM for logical-to-physical translation and therefore improves random read performance.

In another example, system benchmarks are improved in which a sequential write phase is followed by a random write phase and a random read phase, or a sequential write phase is followed by a random read phase. In one example, the pivot table and bitmap use a relatively small amount of data stored in SRAM during random read access by the host device to calculate the physical address required for a page.
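Tying the pieces above together, the bitmap check and pivot calculation for a random read (e.g., LBA=165 resolving to 2015) could look like the following C sketch. `pivot_lookup` and `seq_bitmap` are hypothetical names used only for illustration, not the disclosure's actual firmware.

```c
#include <stdbool.h>
#include <stdint.h>

extern uint32_t pivot_table[];    /* starting physical address per range    */
uint8_t seq_bitmap[128];          /* one bit per LBA: 1 = written in order  */

/* Resolve lba without loading a logical-to-physical partition, if possible.
 * Returns true and sets *pa when the LBA lies in a recorded sequential
 * range; returns false when a partition must be loaded from NAND instead. */
bool pivot_lookup(uint32_t lba, uint32_t *pa)
{
    if (((seq_bitmap[lba / 8] >> (lba % 8)) & 1u) == 0)
        return false;                           /* not in a sequential range */

    uint32_t index        = lba / 128u;         /* 165 / 128 = 1             */
    uint32_t displacement = lba - index * 128u; /* 165 - 128 = 37            */
    *pa = pivot_table[index] + displacement;    /* 1978 + 37 = 2015          */
    return true;
}
```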
FIG. 1 illustrates a storage device 103 including a volatile memory 106 storing a pivot table 119 and a bitmap 117, according to some embodiments. In FIG. 1, a host 101 communicates with the storage device 103 via a communication channel having a predetermined protocol. The host 101 may be a computer (e.g., a mobile phone or other computing device) having one or more central processing units (CPUs), to which computer peripheral devices such as the storage device 103 may be attached via an interconnect such as a computer bus.

The computer storage device 103 can be used to store data for the host 101. Examples of computer storage devices in general include flash memory and the like. The storage device 103 has a host interface 105 that implements communication with the host 101 using the communication channel. For example, in one embodiment, the communication channel between the host 101 and the storage device 103 is a bus, and the host 101 and the storage device 103 communicate with each other using the eMMC or UFS protocol.

In some embodiments, the communication channel between the host 101 and the storage device 103 includes a computer network, such as a local area network, a wireless local area network, a wireless personal area network, a cellular communication network, or a broadband always-on wireless communication connection (e.g., a current- or next-generation mobile network link); and the host 101 and the storage device 103 can be configured to communicate with each other using various data storage management and usage commands.

The storage device 103 has a controller 107 that runs firmware 104 to perform operations in response to communications from the host 101. Firmware in general is a type of computer program that provides control, monitoring, and data manipulation of engineered computing devices. In FIG. 1, the firmware 104 controls the operation of the controller 107 in operating the storage device 103, such as translating logical addresses into physical addresses to store and access data in the storage device 103. In one example, the controller is an internal controller of a managed NAND device that stores data in TLC NAND flash memory.

The storage device 103 has a non-volatile storage medium 109, such as memory cells in an integrated circuit. The storage medium 109 is non-volatile in that no power is required to maintain the data/information stored in it, and the data/information can be retrieved after the non-volatile storage medium 109 is powered off and powered on again. The memory cells can be implemented using various memory/storage technologies (such as NAND-gate-based flash memory, phase change memory (PCM), magnetic memory (MRAM), resistive random access memory, and 3D XPoint) such that the storage medium 109 is non-volatile and can retain the data stored in it without power for days, months, and/or years.

The storage device 103 includes a volatile random access memory (RAM) 106. In one embodiment, a portion of the RAM is used to store runtime data and instructions used by the controller 107 to improve the computational performance of the controller 107 and/or to provide buffering for data transferred between the host 101 and the non-volatile storage medium 109. The RAM 106 is volatile in that power is required to maintain the data/information stored in it, which is lost immediately or rapidly when the power is interrupted.

Volatile memory 106 generally has lower latency than the non-volatile storage medium 109, but loses its data quickly when power is removed. Therefore, in some cases it is advantageous to use the volatile memory 106 to temporarily store instructions and/or data used by the controller 107 in its current computing task to improve performance. In some examples, the volatile memory 106 is implemented using volatile static random access memory (SRAM), which in some applications uses less power than DRAM.

During operation, the controller 107 receives various commands from the host 101. These commands can include read commands and write commands.
In one example, a read command includes a logical address and is received from the host 101 to access stored data 113 in the non-volatile storage medium 109.

In addition to storing the data 113, the non-volatile storage medium 109 also stores a logical-to-physical mapping table 111. The mapping table 111 stores a physical address corresponding to each logical address of the data storage capacity of the non-volatile storage medium 109.

In addition to the pivot table 119 and the bitmap 117, the volatile memory 106 also stores a logical-to-physical partition 115. The partition 115 is a portion of the mapping table 111 that has been loaded into the volatile memory 106 by the controller 107. The partition 115 is used by the controller 107 to determine the physical addresses of logical addresses in read commands received from the host 101.

In some cases, when a read command is received, the controller 107 determines that the partition 115 has previously been loaded into the volatile memory 106 and can be used to determine the physical address of the received logical address (e.g., the received logical address falls within the LBA range of the partition 115). In other cases, the controller 107 determines that the partition 115 cannot be used to determine the physical address corresponding to the received logical address (e.g., the received logical address is outside the LBA range of the partition 115).

In cases where a previously loaded partition 115 cannot be used to determine the corresponding physical address, the controller 107 uses the bitmap 117, as described above, to determine whether the received logical address is within a sequential range (e.g., within a sequence in which data was previously written to the non-volatile storage medium 109). If the received logical address is within the sequential range, the controller 107 uses the pivot table 119 to determine the physical address, as described herein. The determined physical address is used to read the portion of the stored data 113 corresponding to the received logical address. The controller 107 then sends the read data to the host 101.

If the received logical address is not within a sequential range as determined from the bitmap 117, the controller 107 loads a new partition 115 from the mapping table 111 into the volatile memory 106. The physical address is determined using the new partition 115. In one embodiment, the pivot table 119 and/or the bitmap 117 are updated when a new partition 115 is loaded.
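The partition-first, then-bitmap, then-pivot decision flow just described could be sketched as follows. `partition_covers`, `partition_lookup`, `load_partition`, and `nand_read` are hypothetical helper names standing in for the controller's FTL internals; only the ordering of the checks reflects the description above.

```c
#include <stdbool.h>
#include <stdint.h>

extern bool     partition_covers(uint32_t lba);   /* loaded partition 115?  */
extern uint32_t partition_lookup(uint32_t lba);   /* translate via 115      */
extern void     load_partition(uint32_t lba);     /* load from table 111    */
extern void     nand_read(uint32_t pa);           /* read stored data 113   */
extern bool     pivot_lookup(uint32_t lba, uint32_t *pa);  /* sketch above  */

void handle_read(uint32_t lba)
{
    uint32_t pa;

    if (partition_covers(lba)) {
        pa = partition_lookup(lba);      /* loaded partition covers the LBA */
    } else if (pivot_lookup(lba, &pa)) {
        /* LBA is in a sequential range: resolved from pivot table 119 and
         * bitmap 117, with no partition load from mapping table 111.       */
    } else {
        load_partition(lba);             /* fall back: load a new partition;
                                            pivot table/bitmap may update   */
        pa = partition_lookup(lba);
    }
    nand_read(pa);                       /* read the data 113 at the address */
}
```

The order of the checks mirrors the description: the pivot path is consulted only when the loaded partition does not cover the LBA, and a partition load happens only when both cheaper paths fail.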
In some examples, the controller 107 has multiple processors, each having its own in-processor cache memory. Optionally, the controller 107 performs data-intensive in-memory processing using data and/or instructions organized in the storage device 103. For example, in response to a request from the host 101, the controller 107 performs real-time analysis of a data set stored in the storage device 103 and responsively transmits a reduced data set to the host 101. For example, in some applications, the storage device 103 is connected to real-time sensors to store sensor input (such as the sensors of an autonomous vehicle or a digital camera); and the processors of the controller 107 are configured to perform machine learning and/or pattern recognition based on the sensor input to support an artificial intelligence (AI) system implemented at least in part via the storage device 103 and/or the host 101.

The storage device 103 may be used in various computing systems, such as cloud computing systems, edge computing systems, fog computing systems, and/or standalone computers. In a cloud computing system, remote computer servers are connected in a network to store, manage, and process data. An edge computing system optimizes cloud computing by performing data processing at the edge of the computer network, close to the data source, thereby reducing data communication with centralized servers and/or data storage devices. A fog computing system uses one or more end-user devices or near-user edge devices to store data, thereby reducing or eliminating the need to store the data in a centralized data warehouse.

At least some embodiments of the present invention can be implemented using computer instructions executed by the controller 107, such as the firmware 104. In some examples, hardware circuits can be used to implement at least some functions of the firmware 104. The firmware 104 can be initially stored in the non-volatile storage medium 109 or another non-volatile device, and loaded into the volatile memory 106 and/or an in-processor cache memory for execution by the controller 107.

The firmware 104 can be configured to use the techniques involving pivot tables and bitmaps discussed below. However, the techniques discussed below are not limited to use in the computer system of FIG. 1 and/or the examples discussed above.

A non-transitory computer storage medium can be used to store instructions of the firmware 104. When the instructions are executed by the controller 107 of the computer storage device 103, the instructions cause the controller 107 or other processing device to perform the methods discussed herein.

In one example, the non-volatile storage medium 109 of the storage device 103 has memory units that can be identified by a range of LBA addresses, where the range corresponds to the memory capacity of the non-volatile storage medium 109.

In one embodiment, a local manager (not shown) of the storage device 103 receives data access commands. A data access request (e.g., read or write) from the host 101 identifies an LBA address so that data can be read from, written to, or erased from the memory cells identified by the LBA address. The local manager translates the logical address into a physical address.

In one embodiment, the controller is implemented by one or more processing devices. In one embodiment, a computer system includes a first memory device (e.g., SRAM), a second memory device (e.g., NAND flash), and one or more processing devices (e.g., a CPU or a system on a chip (SoC)). In one embodiment, the computer system can include a processing device and a controller.

The processing device can be, for example, a microprocessor, a central processing unit (CPU), a processing core of a processor, an execution unit, and the like. In some examples, the controller can be referred to as a memory controller, a memory management unit, and/or an initiator.
In one example, the controller controls communications over a bus coupled between the computer system and one or more memory subsystems.

The controller of the computer system can communicate with the controller of a memory subsystem to perform operations such as reading data, writing data, or erasing data at memory components, among other such operations. In some examples, the controller is integrated within the same package as the processing device. In other examples, the controller is separate from the package of the processing device. The controller and/or the processing device can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, a cache memory, or a combination thereof. The controller and/or the processing device can be a microcontroller, dedicated logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor.

FIG. 2 illustrates a logical-to-physical mapping 201 of a storage device, according to some embodiments. In one example, the storage device is the storage device 103 of FIG. 1. The mapping 201 includes values of LBA addresses and values of the corresponding physical addresses stored in non-volatile memory. In one example, the physical addresses are physical addresses of the stored data 113 in the non-volatile storage medium 109 of FIG. 1.

The LBA addresses of the logical-to-physical mapping 201 include a sequential range 202 of logical addresses. For example, the logical addresses LBA 7 to LBA 97 are in the sequential range 202. The logical addresses LBA 7 to LBA 97 correspond to the physical addresses 1000 to 1090. In one example, the host sends a sequential write command for LBA 7 to LBA 97 (91 logical addresses) allocated NAND physical addresses 1000 to 1090.

FIG. 3 illustrates a pivot table 302, according to some embodiments. The pivot table 302 is an example of the pivot table 119 of FIG. 1. The pivot table 302 contains various starting physical addresses. In one example, each starting physical address corresponds to a pivot index of 0, 1, 2, .... In one example, the controller 107 uses a starting physical address obtained from the pivot table 302 when calculating the physical address for a received logical address. In one example, each pivot index corresponds to a range of LBA addresses. The starting physical address selected by the controller 107 corresponds to the pivot index for the particular range of LBA addresses within which the received LBA address falls. For example, the starting physical address at pivot index 1 corresponds to the LBA range 128 to 255, and is used when the logical address LBA 150 is received.

FIG. 4 illustrates a bitmap 402, according to some embodiments. The bitmap 402 is an example of the bitmap 117 of FIG. 1. The bitmap 402 contains bit values corresponding to bit indexes 0, 1, 2, ..., 127. In one example, the bitmap 402 corresponds to the logical address range from LBA 0 to LBA 127. Each bit value indicates whether the corresponding LBA address is within a sequential range. For example, the bit values at bit indexes 7 to 97 are set high to 1, which indicates that LBA 7 to LBA 97 are within the sequential range 202. The bit values for the other LBA addresses in the range LBA 0 to LBA 127 are set low to 0, which indicates that those LBA addresses are outside the sequential range 202.
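For illustration, the state shown in FIGs. 2-4 could be reproduced with the sketch below (same hypothetical `pivot_table`/`seq_bitmap` names as in the earlier sketches). Note how the stored base 993 is the physical address that LBA 0 of the range would have, even though LBA 0 itself is outside the sequential range 202.

```c
#include <stdint.h>

extern uint32_t pivot_table[];
extern uint8_t  seq_bitmap[];

/* LBA 7..97 were written sequentially to physical addresses 1000..1090. */
void record_sequential_range_202(void)
{
    for (uint32_t lba = 7; lba <= 97; lba++)          /* bits 7..97 go high */
        seq_bitmap[lba / 8] |= (uint8_t)(1u << (lba % 8));

    pivot_table[0] = 1000 - (7 - 0);   /* 993: base for the LBA 0..127 range */
    /* A later read of LBA 10 then resolves to 993 + 10 = 1003, matching
     * the example discussed earlier. */
}
```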
FIG. 5 illustrates the logical correspondence between logical block addresses (LBAs), pivot tables, and bitmaps, according to some embodiments. In one example, the LBA range from LBA 0 to LBA 127 corresponds to pivot index 0 of the pivot table 302 of FIG. 3. The LBA range from LBA 0 to LBA 127 also corresponds, respectively, to bit indexes 0 to 127 of the bitmap 402 of FIG. 4.

In one example, the LBA range from LBA 128 to LBA 255 corresponds to pivot index 1 of the pivot table 302 of FIG. 3, and further corresponds to bit indexes 128 to 255 of the bitmap 402 (not shown).

In one example, the LBA range from LBA 0 to LBA 1023 corresponds to a first partition (denoted PPT#0) and a first pivot table (pivot 0). The LBA range from LBA 1024 to LBA 2047 corresponds to a second partition (denoted PPT#1) and a second pivot table (pivot 1). The first partition and the second partition are each an example of the logical-to-physical partition 115 of FIG. 1.

FIG. 6 illustrates a logical-to-physical partition 601 including logical block addresses (LBAs) and corresponding physical addresses, according to some embodiments. The logical-to-physical partition 601 is an example of the logical-to-physical partition 115 of FIG. 1. In FIG. 6, the partition 601 provides physical addresses for the logical address range LBA 0 to LBA 1023. In one example, as illustrated, the pointer for LBA 0 is the physical address corresponding to the logical address LBA 0. Other corresponding physical addresses are similarly provided for the other LBAs.

In one example, the partition 601 has a size of 4 KB and can address 4 MB of data. The number of pointers in the partition 601 is 1,024. The size of each entry in the partition 601 is 4 bytes (4B).

FIG. 7 illustrates a pivot table 701 containing starting physical addresses corresponding to the logical block addresses in the logical-to-physical partition 601 of FIG. 6, according to some embodiments. The pivot table 701 is an example of the pivot table 119 of FIG. 1. In FIG. 7, the pivot table 701 provides a starting physical address for each respective corresponding range of logical addresses. In one example, the starting physical address at pivot index 0 corresponds to a first logical address range from LBA 0 to LBA 127. In one example, the starting physical address of the first logical address range is a sequential pointer for LBA 0.

In another example, the starting physical address at pivot index 1 corresponds to a second logical address range from LBA 128 to LBA 255. The starting physical address of the second logical address range is a sequential pointer for LBA 128.

In one example, the starting physical addresses of the pivot table 701 (pivot indexes 0 to 7) cover the entire logical address range of the partition 601 (the first partition), from LBA 0 to LBA 1023. As mentioned above, in one example the LBA range from LBA 1024 to LBA 2047 corresponds to a second partition (not shown) and a second pivot table (not shown). In other embodiments, a single pivot table can be used instead of multiple pivot tables.

In one example, the pivot table 701 has a size of 32 bytes. The number of entries in the pivot table 701 is 8. The size of each entry is 4 bytes. Each logical-to-physical partition corresponds to a respective pivot table.
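The sizes stated for FIGs. 6-8 could be captured by the following illustrative C layout (the struct and field names are hypothetical); the static assertions simply confirm the 4 KB / 32 B / 128 B figures given in the text.

```c
#include <assert.h>
#include <stdint.h>

#define PARTITION_LBAS 1024u          /* a partition covers LBA 0..1023     */
#define PIVOT_ENTRIES  8u             /* one entry per 128-LBA range        */

struct l2p_partition {                /* FIG. 6: 1,024 pointers x 4 B       */
    uint32_t phys[PARTITION_LBAS];
};

struct pivot_table_t {                /* FIG. 7: 8 entries x 4 B            */
    uint32_t start_phys[PIVOT_ENTRIES];
};

struct seq_bitmap_t {                 /* FIG. 8: 1 bit per LBA, arranged as */
    uint8_t bits[PARTITION_LBAS / 8]; /* eight 16-byte bit arrays           */
};

static_assert(sizeof(struct l2p_partition) == 4096, "4 KB partition");
static_assert(sizeof(struct pivot_table_t) == 32,   "32 B pivot table");
static_assert(sizeof(struct seq_bitmap_t)  == 128,  "128 B bitmap");
```

At these sizes, the pivot table and bitmap together add only 160 bytes per 4 KB partition, which is what makes it practical to keep them resident in SRAM.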
FIG. 8 illustrates a bitmap 801 containing bits corresponding to the logical block addresses in the logical-to-physical partition 601 of FIG. 6, according to some embodiments. The bitmap 801 is an example of the bitmap 117 of FIG. 1. In FIG. 8, the bitmap 801 stores bits for each of the logical address ranges. In one example, each entry of the bitmap 801 corresponds to a bit array indexed by bitmap indexes 0, 1, 2, ..., 7. For example, the bit array at bitmap index 0 corresponds to the logical address range LBA 0 to LBA 127. For each LBA address in this range, the bit array contains a single bit that is set high or low depending on whether the corresponding LBA address is in a sequential range, as described above.

In one example, the bitmap 801 includes multiple bit arrays (each indexed by bitmap indexes 0, 1, 2, ..., 7). Each bit array contains a portion of the bits stored in the bitmap 801. Each bit array corresponds to a respective starting physical address of the pivot table 701. In one example, the bit array at bitmap index 0 corresponds to the sequential pointer for LBA 0. In another example, the bit array at bitmap index 7 corresponds to the sequential pointer for LBA 896.

In one example, the bitmap 801 has a size of 128 bytes and covers the range LBA 0 to LBA 1023. Each bit array (bitmap index 0, 1, ..., 7) has a size of 16 bytes (16B).

In other embodiments, the sizes and ranges of the partition 601, the pivot table 701, and/or the bitmap 801 (and/or the bit arrays in the bitmap 801) can be made larger and/or smaller. The embodiments and examples above are for illustration only and are not limiting.

In one embodiment, the pivot table 701 and/or the bitmap 801 are updated in response to loading a logical-to-physical partition into the volatile memory 106. In one example, one or more bit arrays of the bitmap 801 are updated when the partition is loaded.

In one example, for the bit array corresponding to bitmap index 0, the controller 107 determines that the bit array corresponds to two or more sequential ranges of LBA addresses. The controller 107 determines which of the sequential ranges has the longest or largest length, and then updates the bits of the bit array based on selecting the sequential range having the largest length. The controller 107 determines the starting physical address for pivot index 0 so that it corresponds to the selected sequential range of the bit array being updated. The determined starting physical address in the pivot table 701 is updated.
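One way to realize this longest-run selection, sketched in C under the same illustrative assumptions as before (consecutive physical addresses define a run; `pivot_update_range` is a hypothetical name, not the disclosure's firmware):

```c
#include <stdint.h>

/* Called when a 128-entry slice of a loaded partition is scanned: find the
 * longest run of consecutive physical addresses, point the pivot entry at
 * it, and set only that run's bits high in the 16-byte bit array. */
void pivot_update_range(const uint32_t phys[128],
                        uint32_t *pivot_entry, uint8_t bits[16])
{
    uint32_t best_start = 0, best_len = 0, run_start = 0, run_len = 1;

    for (uint32_t i = 1; i <= 128; i++) {
        if (i < 128 && phys[i] == phys[i - 1] + 1) {
            run_len++;                          /* run continues            */
        } else {
            if (run_len > best_len) {           /* close the current run    */
                best_len = run_len;
                best_start = run_start;
            }
            run_start = i;
            run_len = 1;
        }
    }

    /* Base such that phys = base + offset-within-range holds in the run.   */
    *pivot_entry = phys[best_start] - best_start;

    for (uint32_t i = 0; i < 128; i++) {
        if (i >= best_start && i < best_start + best_len)
            bits[i / 8] |= (uint8_t)(1u << (i % 8));    /* in longest run   */
        else
            bits[i / 8] &= (uint8_t)~(1u << (i % 8));   /* outside: low     */
    }
}
```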
FIG. 9 is a graph illustrating exemplary thousands of input/output operations per second (KIOPS) on the vertical axis versus percentage of randomness on the horizontal axis for a performance simulation of a storage device, according to some embodiments. In one example, the storage device is the storage device 103 of FIG. 1.

The performance simulation illustrated in FIG. 9 is based on an example in which the volatile memory partitioning assumes a total usable volatile memory capacity of 1024 KB. The volatile memory (SRAM) size is kept constant across all simulated configurations for both the traditional method and the pivot method. The simulation assumes host ranges of 1 GB, 2 GB, 4 GB, and 8 GB. For example, 8 GB coverage means that the host range spans 8 GB; in other words, the random read logical addresses issued by the host are distributed across 8 GB.

Simulations were performed for both the traditional method and the pivot method. The pivot method uses a pivot table and bitmap, as discussed above. The traditional method does not use a pivot table or bitmap, but instead loads a new partition into volatile memory whenever one is needed to handle a new command from the host. The 150 KIOPS boundary corresponds to the situation in which no partitions need to be loaded (all operations are performed using already-loaded partitions). A 1 MB partition can be used to cover a 1 GB host range.

The volatile memory (SRAM) is divided such that 640 KB is used for storing logical-to-physical partitions covering a 640 MB host range. 384 KB of volatile memory is reserved for pivot tables and bitmaps, as discussed above. Considering that a 4 KB pivot table and a 16 KB bitmap (20 KB total) cover 512 MB of the host range, this implies that 384 KB of pivot tables and bitmaps cover (384/20)*512 MB, which is approximately equal to 10 GB.

With the 640 KB + 384 KB partitioning and an 8 GB host range (HR), this implies 100% coverage by the pivot tables and bitmaps (the coverage is about 10 GB, which is greater than 8 GB), and coverage of 640/8192 by the loaded partitions.

For 100% sequential writes (i.e., no random writes), the entire range is covered by the pivot tables and bitmaps, so no partition ever needs to be loaded and there is therefore no performance degradation.

For a host usage model with 10% random writes and 90% sequential writes, the pivot tables and bitmaps cover 90% (because they cover only the sequential portion). In the 10% of cases where a partition would have to be loaded because the address is not covered by the pivot tables and bitmaps, there is still a probability (640/8192) that the partition is already present in volatile memory (so that no new partition needs to be loaded).

As illustrated in FIG. 9, for most percentages of randomness, the KIOPS of the pivot method with the 8 GB host range is greater than the KIOPS of the traditional method with the 8 GB host range.
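The coverage figure above follows from simple proportionality; as a sanity check (illustrative arithmetic only, not part of the disclosed firmware):

```c
#include <stdio.h>

int main(void)
{
    /* 20 KB of pivot table + bitmap covers 512 MB of host range, so: */
    double coverage_gb = (384.0 / 20.0) * 512.0 / 1024.0;
    printf("384 KB covers %.1f GB\n", coverage_gb);  /* 9.6, i.e. ~10 GB */
    return 0;
}
```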
FIG. 10 shows a method for determining a physical address based on a logical address in a command received from a host device by using a pivot table and a bitmap, according to some embodiments. For example, the method of FIG. 10 can be implemented in the system of FIG. 1. In one example, the host device is the host 101. In one example, the pivot table is the pivot table 119 and the bitmap is the bitmap 117.

The method of FIG. 10 can be performed by processing logic, which can include hardware (e.g., processing devices, circuitry, dedicated logic, programmable logic, microcode, device hardware, integrated circuits, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method of FIG. 10 is performed at least in part by one or more processing devices (e.g., the controller 107 of FIG. 1).

Although shown in a particular sequence or order, unless otherwise specified the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, with some processes performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.

At block 1001, a read command including a first logical address is received. In one example, the read command is received by the storage device 103 from the host 101 and includes an LBA address.

At block 1003, it is determined whether the first logical address is within a sequential range. This determination is based on a bit corresponding to the first logical address. In one example, the LBA address is determined to be within the sequential range 202 of FIG. 2.

At block 1005, a displacement from a starting physical address associated with the sequential range is determined. The displacement is determined by the difference between the first logical address and a second logical address corresponding to the starting physical address. In one example, the starting physical address is determined using the pivot table 701 of FIG. 7. The second logical address is the LBA address corresponding to the sequential pointer of the pivot table 701 that provides the starting physical address. For example, the second logical address for pivot index 1 is LBA 128, and the second logical address for pivot index 0 is LBA 0.

At block 1007, the physical address corresponding to the first logical address is determined by adding the displacement to the starting physical address. In one example, the physical address is determined by the controller 107.

At block 1009, the determined physical address is used to read data stored in the non-volatile storage medium. In one example, the stored data 113 is read from the non-volatile storage medium 109 using the determined physical address.

In one embodiment, a method for a storage device (e.g., the storage device 103) includes: receiving, by a controller (e.g., the controller 107), a command containing a first logical address of data stored in a non-volatile storage medium (e.g., the non-volatile storage medium 109); determining, by the controller, whether the first logical address is within a sequential range (e.g., the sequential range 202) based on a first stored value corresponding to the first logical address; and in response to determining that the first logical address is within the sequential range, determining a first physical address corresponding to the first logical address.

Determining the first physical address includes determining a displacement from a starting physical address associated with the sequential range (e.g., the sequential pointer for LBA 0 at pivot index entry 0 of table 701), where the displacement is determined by the difference between the first logical address and a second logical address (e.g., LBA 0) corresponding to the starting physical address; and determining the first physical address by adding the displacement to the starting physical address. In one example, as discussed above, for LBA=165 in a received read command, the starting physical address is 1978, and the first physical address is equal to 1978+(165-128)=1978+37 (the displacement)=2015.

In one embodiment, the command is a read command or a write command.

In one embodiment, the first stored value is one of a plurality of bits stored in a bitmap (e.g., the bitmap 117), and each of the plurality of bits corresponds to a logical address within a host address range.

In one embodiment, each bit of the bitmap has a first binary value or a second binary value; the first binary value (e.g., a bit set to a high state, or 1) indicates that the corresponding logical address is within a sequential range, and the second binary value indicates that the corresponding logical address is not within a sequential range.

In one embodiment, the starting physical address is a first starting physical address, and the method further includes: storing a pivot table (e.g., the pivot table 119) that includes a plurality of starting physical addresses including the first starting physical address, where each starting physical address corresponds to a respective range of logical addresses.
The bitmap includes a plurality of bit arrays; each bit array (e.g., the bit array corresponding to LBA 0 to LBA 127, such as the one described for bitmap index 0 of the bitmap 801) includes a portion of the plurality of bits, and each bit array corresponds to a respective starting physical address of the pivot table.

In one embodiment, the method further includes: storing a partition (e.g., the partition 115) in a volatile memory (e.g., the volatile memory 106), where the partition provides a logical-to-physical mapping for logical addresses in commands received from the host device, and where the partition includes a plurality of physical addresses each corresponding to a respective logical address. Each starting physical address of the pivot table corresponds to a respective portion of the partition.

In one embodiment, the command is received from a host device (e.g., the host 101), and the host address range is used by the host device to logically address data stored in the non-volatile storage medium.

In one embodiment, the command is a first command received from the host device, and the method further includes: storing a table (e.g., the mapping table 111) in the non-volatile storage medium, where the table provides a logical-to-physical mapping for logical addresses in commands received from the host device; receiving a second command containing a third logical address from the host device; determining whether the third logical address is within a sequential range based on a second stored value corresponding to the third logical address; and in response to determining that the third logical address is not within a sequential range, determining a second physical address corresponding to the third logical address. Determining the second physical address includes: loading a logical-to-physical partition from the table into the volatile memory; and determining the second physical address using the loaded partition.

In one embodiment, the starting physical address is a first starting physical address, and the method further includes: storing a plurality of starting physical addresses in a table (e.g., the pivot table 119), each starting physical address associated with a respective sequential range of logical addresses of data stored in the non-volatile storage medium, the starting physical addresses including the first starting physical address.

In one embodiment, a system includes: a non-volatile storage medium; a volatile memory configured to store a bitmap, the bitmap including a first bit corresponding to a first logical address; a controller; and firmware (e.g., the firmware 104) containing instructions configured to instruct the controller to: receive a command including the first logical address; determine whether the first logical address is within a sequential range based on the first bit of the bitmap; and in response to determining that the first logical address is within the sequential range, determine a first physical address of the non-volatile storage medium corresponding to the first logical address.
Determining the first physical address includes: determining a displacement from a starting physical address associated with the sequential range, where the displacement is determined by the difference between the first logical address and a second logical address corresponding to the starting physical address; and determining the first physical address by adding the displacement to the starting physical address.

In one embodiment, the instructions are configured to further instruct the controller to use the determined first physical address to read or write data in the non-volatile storage medium.

In one embodiment, the starting physical address is a first starting physical address, and the instructions are configured to further instruct the controller to: store a plurality of starting physical addresses in a table, each starting physical address associated with a respective sequential range of logical addresses of data stored in the non-volatile storage medium, the starting physical addresses including the first starting physical address. Determining the first physical address further includes using the table to determine the first starting physical address.

In one embodiment, the volatile memory is further configured to store the table.

In one embodiment, the table is a first table, a second table is stored in the non-volatile storage medium, and the second table provides a logical-to-physical mapping for logical addresses in commands received from the host device.

In one embodiment, the command is a first command, and the instructions are configured to further instruct the controller to: receive a second command including a third logical address; in response to receiving the second command, load a logical-to-physical partition from the second table into the volatile memory; and use the loaded partition to determine a second physical address corresponding to the third logical address.

In one embodiment, a non-transitory machine-readable storage medium stores instructions that, when executed on at least one processing device, cause the at least one processing device to at least: receive a read command including a first logical address; determine whether the first logical address is within a sequential range based on a bit corresponding to the first logical address; in response to determining that the first logical address is within the sequential range: determine a displacement from a starting physical address associated with the sequential range, where the displacement is determined by the difference between the first logical address and a second logical address corresponding to the starting physical address; and determine the physical address corresponding to the first logical address by adding the displacement to the starting physical address; and read data stored in the non-volatile storage medium using the determined physical address.

In one embodiment, the starting physical address is one of a plurality of starting physical addresses stored in a pivot table in volatile memory.

In one embodiment, the bit is one of a plurality of bits in a bitmap stored in the volatile memory.

In one embodiment, the read command is a first command, the starting physical address is a first starting physical address, and the instructions further cause the at least one processing device to: store a mapping table in the non-volatile storage medium, where the mapping table provides a logical-to-physical mapping for logical addresses in commands received from the host device; in response to receiving a second command from the host device, load a logical-to-physical partition from the mapping table into the volatile memory; and
in response to loading the partition, update a second starting physical address in the pivot table.

In one embodiment, the sequential range is a first sequential range, and the instructions further cause the at least one processing device to: determine multiple sequential ranges within a logical address range; determine that the first sequential range has the maximum length of the sequential ranges; and in response to determining that the first sequential range has the maximum length: associate the starting physical address with the first sequential range; and update the bits of the bitmap corresponding to the first sequential range, where each updated bit indicates that a physical address can be determined for the corresponding logical address based on a displacement from the starting physical address.

In one example, a performance simulation was performed. For the simulation, the host range is 2 GB. In the simulation, the host writes 2 GB sequentially with a block size of 512 KB. This implies that 512K LBA addresses are written to the non-volatile memory.

Random writes of 50K commands are then performed over the same range. Considering 200 MB divided by 2048 MB (2 GB), this implies that 1/10 of the LBA addresses are random, i.e., 10% randomness.

In other words, the host writes 2 GB/4 KB=512K LBAs (each host LBA is 4 KB). The host then rewrites with 50K random write (RW) commands over the same 2 GB, which implies 200 MB written by the host. In the end, the 2 GB has been written with a mix of sequential and random data, with 200 MB of it written randomly. Therefore, the proportion of randomly written LBA addresses is 200 MB/2 GB, or approximately 10%.

Random reads of 50K commands are then performed over the same range. The probability of encountering a random address is 1/10=10%.

The traditional method, using partitions of 1 MB in RAM (1 GB coverage) without a pivot table or bitmap, implies that the probability of a partition miss is 1/2=50%. By contrast, the pivot method described herein, with smaller partitions in RAM together with the pivot table and bitmap stored in RAM as described above, implies that fewer than 10% of the received commands will experience a partition miss.

The simulated performance of the traditional method and the pivot method for random reads over 2 GB is as follows: for the traditional method, the performance is 104 KIOPS; for the pivot method, the performance is 140 KIOPS.

CONCLUSION

The present invention includes various devices that perform the methods and implement the systems described above, including data processing systems that perform these methods, and computer-readable media containing instructions that, when executed on a data processing system, cause the system to perform these methods.

The description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding. In certain instances, however, well-known or conventional details are not described in order to avoid obscuring the description. References to one embodiment or an embodiment in the present disclosure are not necessarily references to the same embodiment, and such references mean at least one.

Reference in this specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Appearances of the phrase "in one embodiment" in various places in this specification do not necessarily all refer to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
Moreover, various features are described that may be exhibited by some embodiments and not by others. Similarly, various requirements are described that may be requirements for some embodiments but not for other embodiments.

In this description, various functions and operations may be described as being performed by or caused by software code to simplify the description. However, those skilled in the art will recognize that such expressions mean that the functions result from execution of the code by one or more processors, such as a microprocessor, an application specific integrated circuit (ASIC), a graphics processor, and/or a field programmable gate array (FPGA). Alternatively, or in combination, the functions and operations can be implemented using dedicated circuitry (such as logic circuitry), with or without software instructions. Embodiments can be implemented using hardwired circuitry without software instructions, or in combination with software instructions. Thus, the techniques are limited neither to any specific combination of hardware circuitry and software, nor to any particular source of the instructions executed by a computing device.

While some embodiments can be implemented in fully functioning computers and computer systems, various embodiments are capable of being distributed as a computing product in a variety of forms and are capable of being applied regardless of the particular type of machine-readable or computer-readable media used to actually effect the distribution.

At least some aspects disclosed can be embodied, at least in part, in software. That is, the techniques may be carried out in a computing device or other system in response to its processor (such as a microprocessor) executing sequences of instructions contained in a memory (such as ROM, volatile RAM, non-volatile memory, cache, or a remote storage device).

Routines executed to implement the embodiments can be implemented as part of an operating system, middleware, service delivery platform, SDK (software development kit) component, web service, or other specific application, component, program, object, module, or sequence of instructions (referred to as a "computer program"). Invocation interfaces to these routines can be exposed to a software development community as an API (application programming interface). A computer program typically comprises one or more instruction sets stored at various times in various memory and storage devices in a computer, which, when read and executed by one or more processors in the computer, cause the computer to perform the operations necessary to execute elements involving the various aspects.

A machine-readable medium can be used to store software and data which, when executed by a computing device, cause the device to perform various methods. The executable software and data can be stored in various places including, for example, ROM, volatile RAM, non-volatile memory, and/or cache. Portions of this software and/or data can be stored in any one of these storage devices. Further, the data and instructions can be obtained from centralized servers or peer-to-peer networks. Different portions of the data and instructions can be obtained from different centralized servers and/or peer-to-peer networks at different times and in different communication sessions, or in the same communication session. The data and instructions can be obtained in their entirety prior to execution of the application.
Alternatively, portions of the data and instructions can be obtained dynamically, just in time, when needed for execution. Thus, it is not required that the data and instructions be on a machine-readable medium in their entirety at a particular instance of time.

Examples of computer-readable media include, but are not limited to, recordable and non-recordable type media such as volatile and non-volatile memory devices, read-only memory (ROM), random access memory (RAM), flash memory devices, solid-state drive storage media, removable disks, magnetic disk storage media, and optical storage media (e.g., compact disk read-only memory (CD-ROM), digital versatile disks (DVDs), etc.). The computer-readable media may store the instructions.

In general, a tangible or non-transitory machine-readable medium includes any mechanism that provides (e.g., stores) information in a form accessible by a machine (e.g., a computer, a mobile device, a network device, a personal digital assistant, a manufacturing tool, or any device with a set of one or more processors, etc.).

In various embodiments, hard-wired circuitry may be used in combination with software and firmware instructions to implement the techniques. Thus, the techniques are limited neither to any specific combination of hardware circuitry and software nor to any particular source for the instructions executed by a computing device.

The various embodiments set forth herein can be implemented using a wide variety of different types of computing devices. As used herein, examples of a "computing device" include, but are not limited to, a server, a centralized computing platform, a system of multiple computing processors and/or components, a mobile device, a user terminal, a vehicle, a personal communications device, a wearable digital device, an electronic kiosk, a general-purpose computer, an electronic document reader, a tablet computer, a laptop computer, a smartphone, a digital camera, a home appliance, a television, or a digital music player. Additional examples of computing devices include devices that are part of what is called the "Internet of Things" (IoT). Such "things" may have occasional interactions with their owners or administrators, who may monitor the things or modify settings on them. In some cases, such owners or administrators play the role of users with respect to the "thing" devices. In some examples, the primary mobile device of a user (e.g., an Apple iPhone) may be an administrator server with respect to a paired "thing" device worn by the user (e.g., an Apple Watch).

In some embodiments, the computing device can be a computer or host system, which is implemented, for example, as a desktop computer, laptop computer, network server, mobile device, or other computing device that includes a memory and a processing device. The host system can include or be coupled to a memory subsystem so that the host system can read data from or write data to the memory subsystem. The host system can be coupled to the memory subsystem via a physical host interface. In general, the host system can access multiple memory subsystems via a same communication connection, multiple separate communication connections, and/or a combination of communication connections.

Although some of the drawings illustrate a number of operations in a particular order, operations that are not order dependent may be reordered, and other operations may be combined or broken out.
While some reorderings or other groupings are specifically mentioned, others will be apparent to those of ordinary skill in the art, and so an exhaustive list of alternatives is not presented. Moreover, it should be recognized that the stages could be implemented in hardware, firmware, software, or any combination thereof.

In the foregoing specification, the disclosure has been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. |
Methods, systems, apparatus, and articles of manufacture to extend the life of embedded processors are disclosed herein. Disclosed example apparatus include a policy selector to select, based on input information, a policy to extend an operating lifespan of a microprocessor having a plurality of cores. The apparatus also includes a cores partitioner to divide, based on the selected policy, the plurality of cores into subsets of cores, including a first subset and a second subset. A sensor monitors, based on the selected policy, at least one operational parameter of the cores, and a cores switcher switches a first core of the first subset of cores from active to inactive and a second core of the second subset of cores from inactive to active based on the at least one operational parameter. The switches reduce an amount of degradation experienced by the first core and the second core. |
1. An apparatus comprising:
a policy selector to select a policy, based on input information, the policy to extend an operating lifespan of a microprocessor having a plurality of cores;
a cores partitioner to divide, based on the selected policy, the plurality of cores of the microprocessor into subsets of cores, including a first subset and a second subset;
a sensor to monitor, based on the selected policy, at least one operational parameter of the plurality of cores; and
a cores switcher to switch a first core of the first subset of cores from active to inactive and to switch a second core of the second subset of cores from inactive to active based on the at least one operational parameter, the switches by the cores switcher to reduce an amount of degradation experienced by the first core and the second core.
2. The apparatus of claim 1, wherein the input information on which selection of the policy is based includes at least one of a user preference, a type of product in which the microprocessor is installed, an application in which the product is to operate, or an environment in which the product is to operate.
3. The apparatus of claim 1 or claim 2, wherein at least some of the plurality of cores are inactive and at least some of the plurality of cores are active during operation of the microprocessor.
4. The apparatus of any one of claims 1 to 3, wherein the plurality of cores are rated to operate in a first environment at a first temperature and are operating in a second environment at a second temperature, the second temperature is higher than the first temperature and accelerates degradation of silicon of the plurality of cores, and the cores switcher operates to limit an amount of time that any of the plurality of cores operate in the second environment.
5. The apparatus of any one of claims 1 to 4, further including an operations transfer orchestrator to orchestrate a transfer of operations by which operations performed at the first core are to be transferred to the second core, the operations transfer orchestrator to orchestrate the transfer of operations in response to a notification from the cores switcher, and the operations to be transferred before the first core is switched to inactive and after the second core is switched to active.
6. The apparatus of any one of claims 1 to 5, further including a workload orchestrator to compare a workload of the first core to a workload capacity of the second core, the comparison to be used by the cores switcher to determine whether the second core has sufficient capacity for the workload of the first core before issuing a switch command, the cores switcher to issue the switch command when the second core is determined to have sufficient capacity for the workload of the first core.
7. The apparatus of any one of claims 1 to 6, further including a cores switchover configurer to configure the first core and the second core to switch between inactive and active states, the configuring of the first and second cores to include at least one of (i) configuring the first core and the second core to communicate with one another, (ii) configuring the first core and the second core to receive and respond to activation signals and inactivation signals, or (iii) configuring memories associated with the first core and the second core to have the same addresses.
8. A method comprising:
selecting a policy, based on input information, the policy to extend the operating lifespan of a microprocessor having a plurality of cores;
dividing, based on the selected policy, the plurality
of cores of the microprocessor into subsets of cores, including a first subset and a second subset;
monitoring, based on the selected policy and sensed information received from one or more sensors, at least one operational parameter of the plurality of cores;
switching a first core of the first subset of cores from active to inactive; and
switching a second core of the second subset of cores from inactive to active based on the at least one operational parameter, the switching of the first core and the second core to reduce an amount of degradation experienced by the first core and the second core.
9. The method of claim 8, wherein the one or more sensors include a plurality of sensors, and the at least one operational parameter includes at least one of temperature, time, or core usage.
10. The method of claim 9, wherein the plurality of sensors includes at least one of a core usage sensor, a digital thermal sensor, or a timer, the digital thermal sensor senses a junction temperature associated with the plurality of cores, and the core usage sensor measures at least one of respective workloads of respective ones of the cores or respective operating speeds of the respective ones of the cores.
11. The method of claim 8, wherein the one or more sensors include a plurality of sensors, the method further including generating a time-series log of data collected by the plurality of sensors, the data used to compare a first operational parameter of the first core with the first operational parameter of the second core, the first operational parameter of the first core and the second core sensed at a same time, and the comparison used to identify a time to switch the first core from active to inactive and to switch the second core from inactive to active.
12. The method of any one of claims 8 to 11, wherein the first subset of cores is active and the second subset of cores is inactive, and a first switch of the first subset of cores to inactive and a second switch of the second subset of cores to active occur after a duration of time equal to an expected lifespan of the plurality of cores.
13. The method of any one of claims 8 to 12, wherein the at least one operational parameter reflects an amount of quality degradation of the plurality of cores caused by one or more of a core temperature, a core operating voltage, a core operating frequency, and a core workload stress, and the quality degradation adversely affects the operating lifespan of the plurality of cores.
14. The method of any one of claims 8 to 13, further including executing an algorithm that uses a combination of a first series of core usage values and a first series of core temperature values to determine when the first core is to be switched from active to inactive and the second core is to be switched from inactive to active.
15. At least one computer readable medium comprising instructions that, when executed, cause at least one processor to implement a method or realize an apparatus as claimed in any preceding claim. |
FIELD OF THE DISCLOSURE

This disclosure relates generally to embedded processors and, more particularly, to extending the life of embedded processors.

BACKGROUND

Modern microprocessors contain billions of transistors that sometimes operate in excess of three billion cycles per second. Microprocessor applications range from electronic devices used in the home to electronic equipment used in industrial/manufacturing applications. The home-based applications and manufacturing-based applications often differ, both in the usage of the devices and in the environments in which the devices are deployed.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of a first printed circuit board on which a first cores controller and eight cores are disposed.

FIG. 2 is a block diagram of a second printed circuit board on which a second cores controller and a number (n) of cores are disposed.

FIG. 3 is a block diagram of a third printed circuit board on which eight cores are disposed and in which the third printed circuit board is represented in four different active cores configurations.

FIG. 4 is a block diagram of the first, second, and/or third cores controller of any of FIGS. 1 - 3.

FIG. 5A and FIG. 5B collectively represent a flowchart representative of machine readable instructions which may be executed to implement any of the first, second, and/or third cores controller of any of FIGS. 1 - 3.

FIG. 6 is a flowchart representative of machine readable instructions which may be executed to implement any of the first, second, third, and/or fourth cores controller of any of FIGS. 1 - 4.

FIG. 7 is a block diagram of an example processing platform structured to execute the instructions of FIGS. 5A, 5B, and 6 to implement any of the first, second, and/or third cores controller of any of FIGS. 1 - 3.

FIG. 8 is a block diagram of an example software distribution platform to distribute software (e.g., software corresponding to the example computer readable instructions of FIGS. 5A, 5B, and 6) to client devices such as consumers (e.g., for license, sale, and/or use), retailers (e.g., for sale, re-sale, license, and/or sub-license), and/or original equipment manufacturers (OEMs) (e.g., for inclusion in products to be distributed to, for example, retailers and/or to direct buy customers).

FIG. 9 illustrates an overview of an edge cloud configuration for edge computing.

FIG. 10 illustrates operational layers among endpoints, an edge cloud, and cloud computing environments.

FIG. 11 illustrates an example approach for networking and services in an edge computing system.

The figures are not to scale. Instead, the thickness of the layers or regions may be enlarged in the drawings. Although the figures show layers and regions with clean lines and boundaries, some or all of these lines and/or boundaries may be idealized. In reality, the boundaries and/or lines may be unobservable, blended, and/or irregular. In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts. As used herein, unless otherwise stated, the term "above" describes the relationship of two parts relative to Earth. A first part is above a second part if the second part has at least one part between Earth and the first part. Likewise, as used herein, a first part is "below" a second part when the first part is closer to the Earth than the second part.
As noted above, a first part can be above or below a second part with one or more of: other parts therebetween, without other parts therebetween, with the first and second parts touching, or without the first and second parts being in direct contact with one another. As used in this patent, stating that any part (e.g., a layer, film, area, region, or plate) is in any way on (e.g., positioned on, located on, disposed on, or formed on, etc.) another part indicates that the referenced part is either in contact with the other part, or that the referenced part is above the other part with one or more intermediate part(s) located therebetween. As used herein, connection references (e.g., attached, coupled, connected, and joined) may include intermediate members between the elements referenced by the connection reference and/or relative movement between those elements unless otherwise indicated. As such, connection references do not necessarily infer that two elements are directly connected and/or in fixed relation to each other. As used herein, stating that any part is in "contact" with another part is defined to mean that there is no intermediate part between the two parts.

Unless specifically stated otherwise, descriptors such as "first," "second," "third," etc. are used herein without imputing or otherwise indicating any meaning of priority, physical order, arrangement in a list, and/or ordering in any way, but are merely used as labels and/or arbitrary names to distinguish elements for ease of understanding the disclosed examples. In some examples, the descriptor "first" may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor such as "second" or "third." In such instances, it should be understood that such descriptors are used merely for identifying those elements distinctly that might, for example, otherwise share a same name. As used herein, "approximately" and "about" refer to dimensions that may not be exact due to manufacturing tolerances and/or other real world imperfections. As used herein, "substantially real time" refers to occurrence in a near instantaneous manner recognizing there may be real world delays for computing time, transmission, etc. Thus, unless otherwise specified, "substantially real time" refers to real time +/- 1 second.

DETAILED DESCRIPTION

Embedded processors used in many applications require a useful lifetime of 10 to 15 years of continuous operation. When used in harsh environmental conditions, these devices are expected to operate reliably throughout that lifetime. Modern microprocessors contain billions of transistors, sometimes operating at clock speeds in excess of 3 billion cycles per second. Such high clock speeds in highly dense devices with small geometries mean that the transistors generate a great deal of heat, which accelerates their degradation. Another concern is that the difference between the supply voltage and the threshold at which the transistors turn on is getting smaller. Also, various improvements in the way silicon logic is fabricated have introduced new concerns about degradation. And transistors scaled down to today's small dimensions will be impacted more than ever by variations in their operating conditions, which, in turn, leads to great differences from one transistor to another in how fast they wear out.
Thus, achieving the long life required for industrial applications is increasingly challenging as the complexity of microprocessors increases.

Currently, long life is achieved through appropriate changes in process or physical design methodology. In order to take advantage of economies of scale, complex semiconductor devices are designed so that they can be used in multiple markets. Due to slight variations in the process technology, a percentage of the manufactured devices exhibit certain electrical characteristics that make them suitable for industrial applications. With proper selection criteria, these parts can be screened and tested for the higher reliability that industrial applications demand.

As semiconductor processing technology advances to smaller geometries, the number of parts that can be screened for industrial applications becomes smaller and smaller. Therefore, semiconductor parts to be used in industrial applications would need to be custom built with larger geometries, which would be cost prohibitive.

Further, industrial applications require parts that operate reliably for extended lifetimes (10-15 years), whereas client products only offer 3-5 years of product life. Currently, the industry invests in Quality & Reliability (Q&R) and High-Volume Manufacturing (HVM) testers to bin parts for extended life. As technology advances to smaller process nodes, it will be challenging to qualify products for extended life due to smaller geometries and narrower margins.

Additionally, CPUs built using CMOS process technology are prone to several degradation mechanisms such as hot-carrier injection, bias temperature instability, gate oxide breakdown, and electromigration. These degradation mechanisms are a function of environment (temperature, voltage), frequency, and workload stress.

The methods, systems, apparatus, and articles of manufacture disclosed herein prolong/extend the life of multi-core processors by keeping only a subset of the total number of cores active at a time and switching between active and inactive cores based on one or more reservation policies.

Such reservation policies include the provisioning of at least some of the CPU cores as reserve cores. At any time, only a subset of the total number of cores is active (NCActive), with the remaining cores inactive and disabled (NCReserve). In some examples, the number of inactive cores can be programmable and can be based on the product application. The total number of cores in a product is represented as "NCActive + NCReserve." For example, an eight-core product can have six NCActive cores and two NCReserve cores such that the total number of cores is 8.

In some examples, the policies are based on an amount of time a subset of cores is active (also referred to as "time-based reservation policies" (TBRP)). In some examples, the policies are based on an amount of degradation experienced by one or more of the cores or subsets of the cores (also referred to as "quality degradation monitoring reservation policies" (QDMRP)). In some examples, the quality degradation monitoring takes degradation mechanisms such as temperature, voltage, frequency, and workload stress of the cores or subsets of cores into consideration when determining the amount of degradation.

In some examples, the environmental characteristics and durations of operation of the cores are monitored and the monitoring data is logged in a time series for storage in a non-volatile memory (e.g., NVRAM/NVMe).
In some such examples, the logged time-series data is used to determine whether a switch of cores (e.g., to activate reserved cores and to deactivate active cores) is to occur.

FIG. 1 is a block diagram of a first printed circuit board 105 on which a first cores controller 110 and eight cores (e.g., CPU0 120, CPU1 130, CPU2 140, CPU3 150, CPU4 160, CPU5 170, CPU6 180, CPU7 190) are disposed. In the block diagram, the first cores controller 110 causes the cores CPU7 190 and CPU3 150 to be inactive/reserved and causes the remaining cores to be active. In some examples, though illustrated, for clarity, only with respect to the CPU0 120 of FIG. 1, each of the cores disclosed herein includes (or is otherwise associated with) a digital thermal sensor 160 and a core usage sensor 170. In some examples, the cores described herein are embedded processors. In some examples, based on a policy that involves monitoring any of a number of factors including time, workload, temperature, etc., the first cores controller 110 performs a cores switch by which the inactive cores CPU7 190 and CPU3 150 are activated and a different two of the remaining cores (e.g., CPU0 120, CPU1 130, CPU2 140, CPU4 160, CPU5 170, CPU6 180) are deactivated. In some examples, in addition to activating and deactivating cores, operations being performed at the most recently deactivated cores are transferred to currently active cores. Controlling the cores in this manner reduces the amount of operating time and/or degradation to which each core is subject. In some examples, reducing the amount of operating time and/or degradation of the cores results in a longer lifespan for a product containing the cores (CPU0 120, CPU1 130, CPU2 140, CPU3 150, CPU4 160, CPU5 170, CPU6 180, CPU7 190).

FIG. 2 is a block diagram 200 of a second printed circuit board 203 on which a second cores controller 205 and two groups/partitions of cores are disposed, including a first partition of cores 207 and a second partition of cores 209. In some examples, the total number of cores is denoted as "n". In some such examples, the first partition of cores 207 includes Core 0 210, Core 2 230, Core 4 250, Core n-2 280, and any cores assigned even numbers between 4 and n-2. In some such examples, the second partition of cores 209 includes Core 1 220, Core 3 240, Core 5 260, Core n-1 270, and any cores assigned odd numbers between 5 and n-1.

In some examples, the second cores controller 205 partitions the n cores into the first partition of cores 207 and the second partition of cores 209. In some examples, the second cores controller 205 causes the cores included in the first partition of cores 207 to be active when the cores of the second partition of cores 209 are inactive. In addition, at a determined time based on one or more factors, the second cores controller 205 causes the cores of the first partition of cores 207 to become inactive and the cores of the second partition of cores 209 to become active.

FIG. 3 is a block diagram 300 of four configurations (e.g., A, B, C, and D) of a third printed circuit board 302 having eight cores (e.g., Core 0, Core 1, Core 2, Core 3, Core 4, Core 5, Core 6, Core 7). In the example of FIG. 3, the numbers used to identify the cores (0 - 7) are followed by an A, B, C, or D, depending on whether the cores are in the first configuration A, the second configuration B, the third configuration C, or the fourth configuration D, respectively.
Additionally, the printed circuit board 302A, 302B, 302C, 302D includes a third cores controller 304A, 304B, 304C, 304D.

In some examples, the third cores controller 304A, 304B, 304C, 304D partitions the 8 cores into four groups. In some examples, the first group includes the Core 3A and the Core 7A. In some examples, the second group includes the Core 0B and the Core 4B, the third group includes the Core 1C and the Core 5C, and the fourth group includes the Core 2D and the Core 6D. In some examples, the third cores controller 304A, 304B, 304C, 304D causes the cores included in the first group to be inactive and the cores in the second, third, and fourth groups to be active. In some examples, at a determined time and/or based on one or more factors, the third cores controller 304A, 304B, 304C, 304D causes the cores of the second group to be deactivated and causes the cores of the first group to be activated such that the first, third, and fourth groups are active. In some examples, the third cores controller 304A, 304B, 304C, 304D causes the cores of the third group to become inactive and causes the cores of the second group to become active such that the first, second, and fourth groups of cores are active. In some examples, the third cores controller 304A, 304B, 304C, 304D causes the cores of the fourth group to become inactive and causes the cores of the third group to be activated such that the cores of the first, second, and third groups are active. The third cores controller 304A, 304B, 304C, 304D switches the active/inactive status of the groups of cores in a manner that extends the life of the product in which the third cores controller 304A, 304B, 304C, 304D is installed.

FIG. 4 is a block diagram of an example cores controller 402 that represents the first, second, and/or third cores controller of any of FIGS. 1 - 3. In some examples, the cores controller 402 includes an example cores partitioner 404, an example cores switcher 406, an example timer/clock 408, an example workload orchestrator 410, an example policy selector 412, an example subset selector 414, an example CPU usage monitor 416, an example digital thermal sensor 418, an example time series data logger 420, an example operations transfer orchestrator 422, an example cores switchover configurer 424, and an example non-volatile random access memory (NVRAM) 426.

In some examples, the example cores controller 402 operates to switch cores (e.g., of a printed circuit board of a microprocessor) between active and inactive/reserved states. In some examples, the cores controller 402 causes the operations and/or workloads performed at a first (active) core to be transferred to a second (inactive) core at, before, or during the switching of the states of the first core to inactive and the second core to active. In some examples, the cores controller 402 achieves the switching of the cores in a manner such that operations being performed by the cores at the time of a switch are not interrupted or lost.

In some examples, the example cores partitioner 404 of the example cores controller 402 determines a number of cores to be controlled by the cores controller 402 and also operates to partition the number of cores into groups/subsets, as sketched below. The number of groups/subsets can vary based on the configuration of the cores and/or the electronic device in which the cores are disposed, a manner in which monitoring data is collected, etc.
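By way of illustration only, the grouping performed by a cores partitioner such as the example cores partitioner 404 might be sketched as follows in Python. The two-partition (even/odd) layout mirrors FIG. 2 and the round-robin pairs mirror FIG. 3; the function names are illustrative assumptions, not the disclosed implementation.

```python
# Illustrative sketch of core partitioning (not the disclosed implementation).

def partition_two_groups(n_cores: int) -> tuple[list[int], list[int]]:
    """Split cores into an even-numbered and an odd-numbered partition,
    mirroring the first partition 207 and second partition 209 of FIG. 2."""
    evens = [c for c in range(n_cores) if c % 2 == 0]
    odds = [c for c in range(n_cores) if c % 2 == 1]
    return evens, odds

def partition_round_robin(n_cores: int, group_size: int = 2) -> list[list[int]]:
    """Form round-robin (RR) groups, mirroring the four two-core groups of
    FIG. 3 (e.g., {3, 7}, {0, 4}, {1, 5}, {2, 6} for eight cores)."""
    n_groups = n_cores // group_size
    return [[c for c in range(n_cores) if c % n_groups == g]
            for g in range(n_groups)]

if __name__ == "__main__":
    print(partition_two_groups(8))      # ([0, 2, 4, 6], [1, 3, 5, 7])
    print(partition_round_robin(8))     # [[0, 4], [1, 5], [2, 6], [3, 7]]
```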
In some examples, the determination as to the number of groups/subsets of cores to be created is informed, in part, by the example policy selector 412. The policy selector 412 selects a policy from among a variety of policies that will govern the manner in which the cores controller switches cores between active and inactive states. In some examples, a policy is based on a duration of time in which the cores are active. In some examples, a policy is based on degradation of the cores, which can be caused by a temperature at which the cores are operating and/or an amount of usage (workload) the cores are experiencing. Accordingly, the policy selector is an example means for selecting a policy, and the cores partitioner is an example means to divide or partition cores into subsets of cores.

In some examples, based on, for example, the policy selected, the example subset selector 414 operates to select one or more groups/subsets of cores to be switched from a first state (e.g., inactive) to a second state (e.g., active) and vice versa. In some examples, the example cores switcher 406 switches the one or more groups/subsets of cores based on information supplied by the example timer/clock 408, the example CPU usage monitor 416, the example digital thermal sensor monitor 418, the example time series data logger 420, and/or any of a variety of other monitored aspects/characteristics of the cores. In some examples, before the cores switcher 406 performs the switch operation (or any switching operation), the example cores switchover configurer 424 configures the cores in a manner that permits the switchover to occur. Such configuring of the cores can include, for example, configuring the cores to be in communication (directly or indirectly) with one another, configuring the cores to receive and respond appropriately to activation signals and/or inactivation signals, configuring memories and/or other parts of the cores to have the same addresses, and/or any other of a variety of configuration operations that prepare the cores for switching between inactive and active states and vice versa. Accordingly, the cores switchover configurer is an example means to configure cores for a switch (also referred to as a core switchover) from an active to an inactive state and vice versa.

In some examples, the example cores switcher 406, before, during, or after issuing a switch command (e.g., a command that will cause one or more of the subsets of cores to activate/deactivate), notifies the example workload orchestrator 410. In some examples, the workload orchestrator 410 responds to the notification by determining the CPU utilization of the workloads associated with each of the active cores that are to be deactivated and further determines a workload capacity of each of the inactive cores that are to be activated. In some such examples, the workload orchestrator 410 may operate to adjust the transfer of the workload between cores to ensure that the newly activated cores are able to handle the workload of the deactivated (or soon to be deactivated) cores. In some examples, the workload orchestrator includes a comparator to compare workload capacities of the cores, and the workload orchestrator notifies the cores switcher 406 when a cores switch will result in a newly activated core having insufficient capacity to handle a workload to be transferred by the workload orchestrator.
In some such examples, the workload orchestrator, upon determining that the newly activated core will have sufficient capacity, orchestrates the transfer of the workload between the cores. Accordingly, the workload orchestrator 410 is an example means for orchestrating a transfer of workload from one or more cores to one or more other cores.

In some examples, the example cores switcher 406, before, during, or after issuing a switch command (e.g., a command that will cause one or more of the subsets of cores to activate/deactivate), notifies the example operations transfer orchestrator 422. In some examples, the operations transfer orchestrator 422 performs operations needed to ensure that the operations of the cores being switched are successfully transferred without failure, without damaging or affecting the operation of any processes that are executing at the time of switching, without resulting in dropped bits, etc. In some examples, the operations performed by the operations transfer orchestrator 422 can include identifying an order in which the operations are to be transferred, identifying different memory locations into which different data is to be placed, etc. Accordingly, the operations transfer orchestrator 422 is a means for orchestrating a transfer of operations performed at a first core to a second core.

In some examples, as described above, the example cores switcher 406 determines when a switch is to occur based on a policy/scheme that uses an amount of time during which the cores are active (also referred to as a time-based reservation policy). In some such examples, the cores switcher 406 uses information provided by the example timer/clock 408 to determine when a core switch is to occur. Accordingly, the cores switcher is an example means for switching cores.

In some examples, an example first time-based reservation policy is employed, and the example cores partitioner 404 partitions the total number of cores into two groups as shown in FIG. 2. In this reservation policy/scheme, when an electronic device/product begins operating, one of the CPU groups (e.g., CPU Group 1 207) is activated, and the second CPU group (CPU Group 2 209) is activated after a fixed amount of product lifetime. When the cores of the CPU Group 2 209 are activated, the cores of the CPU Group 1 207 are inactivated. In some such examples, the total number of cores in a product is twice the number of active cores. For example, an 8-core device/product that is designed for 5 years is expected to operate for 10 years. Thus, the number of active cores at any given time is 4. For the first 5 years, the cores of the CPU Group 1 are active, and the cores of the CPU Group 2 are activated for the remaining 5 years (at which time the cores of the CPU Group 1 are placed into the inactive state). Accordingly, the overall lifetime of the product is 10 years, yet the 8 cores deployed in the product have lifespans of 5 years. In this manner, the lifespan of the product/device is extended from 5 years to 10 years through the usage of the cores controller 402 of FIG. 4 (also illustrated as the cores controller 205 in FIG. 2).
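A minimal sketch of this first time-based scheme, assuming a simple elapsed-time check against one half of the target product lifespan N (the class and attribute names are illustrative assumptions, not the disclosed implementation):

```python
# Sketch of the first time-based reservation policy (illustrative only):
# Group 1 is active for the first half of the product lifespan N, after
# which a single switch activates Group 2 and deactivates Group 1.

from dataclasses import dataclass, field

@dataclass
class TimeBasedReservation:
    lifespan_n_years: float                 # target product lifespan "N"
    group1: list = field(default_factory=list)
    group2: list = field(default_factory=list)
    active_group: int = 1                   # Group 1 is activated at start

    def on_tick(self, elapsed_years: float) -> None:
        """Check elapsed time and switch groups at N/2."""
        if self.active_group == 1 and elapsed_years >= self.lifespan_n_years / 2:
            self.active_group = 2           # deactivate Group 1, activate Group 2
        # at elapsed_years >= N the product lifespan is reached (no further switch)

policy = TimeBasedReservation(lifespan_n_years=10.0,
                              group1=[0, 2, 4, 6], group2=[1, 3, 5, 7])
policy.on_tick(4.9); assert policy.active_group == 1
policy.on_tick(5.0); assert policy.active_group == 2
```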
In some examples, an example second time-based reservation policy/scheme is employed, and the example cores partitioner 404 partitions the total number of cores into two groups as shown in FIG. 2. In the second reservation policy/scheme, when an electronic device/product begins operating, one of the CPU groups (e.g., CPU Group 1 207) is activated. Next, after a threshold amount of time has elapsed, the example cores switcher 406 switches the cores such that the cores of the CPU Group 1 207 are made inactive and the cores of the CPU Group 2 209 are made active. The cores switcher 406 continues to switch the cores of CPU Group 1 and CPU Group 2 between inactive and active states during an amount of time equal to two times the lifespan of the cores. In some examples, switching back and forth between CPU Group 1 and CPU Group 2 invokes the example workload orchestrator 410 and the example operations transfer orchestrator 422 at each switch. In some examples, switches occur after a fixed amount of product lifetime has elapsed until the entire product lifespan has been reached. In some examples, the cores of CPU Group 1 are active for a total duration equal to one half of the product lifespan, and the cores of CPU Group 2 are active for a total duration equal to one half of the product lifespan.

In some examples, an example third time-based reservation policy/scheme is deployed wherein a small subset of the total number of CPU cores is inactive or disabled at a time, and the workloads are moved cyclically to the active cores as shown in FIG. 3. In the example of FIG. 3, during a first cycle A, the core 3A and the core 7A are inactive and the remaining ones of cores 0A - 7A are active. During a second cycle B, the core 0B and the core 4B are inactive and the remaining ones of cores 0B - 7B are active. During a third cycle C, the core 1C and the core 5C are inactive and the remaining ones of cores 0C - 7C are active. During a fourth cycle D, the core 2D and the core 6D are inactive and the remaining ones of cores 0D - 7D are active. In the third time-based reservation policy/scheme, a more effective and balanced utilization of the CPU cores 0 - 7 is achieved. In some such examples, the example workload orchestrator 410 and the example operations transfer orchestrator 422 operate to ensure smooth transitions during core switching, as described above. In some examples, switches occur cyclically after a fixed amount of product lifetime has elapsed until the target product lifespan has been reached. In some examples, such as those including two sets of cores, one set is active for one half of the total product lifespan and the other set is activated for the remaining product lifespan.

In some examples, an example quality degradation monitoring reservation policy (QDMRP) is used. A QDMRP accounts for silicon degradation as well as workload characteristics when determining that a switch is to occur. In some examples, CPU utilization monitoring is deployed, digital thermal sensor (DTS) monitoring is deployed, and/or both CPU utilization and DTS monitoring are deployed. QDMRP policies account for the fact that modern-day CPUs are built using CMOS process technology that is prone to several degradation mechanisms including hot-carrier injection, bias temperature instability, gate oxide breakdown, electromigration, etc. These degradation mechanisms are a function of environment (temperature, voltage), frequency, and workload stress. QDMRP includes logging time-series data for each CPU core in a non-volatile memory (NVRAM/NVMe) to track the effects of CPU utilization and temperature, as sketched below.
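A minimal sketch of such per-core time-series logging, assuming an in-memory list as a stand-in for the NVRAM/NVMe store (the names and record fields are illustrative assumptions):

```python
# Sketch of per-core time-series logging for QDMRP (illustrative only).
# A real implementation would persist records to NVRAM/NVMe rather than a list.

import time
from dataclasses import dataclass

@dataclass
class CoreSample:
    timestamp: float        # when the sample was taken
    core_id: int            # which CPU core
    utilization_pct: float  # CPU usage monitor reading
    junction_temp_c: float  # digital thermal sensor (DTS) reading

class TimeSeriesLogger:
    """Logs one sample per core at a fixed monitoring interval."""

    def __init__(self, interval_s: float):
        self.interval_s = interval_s
        self.records: list[CoreSample] = []    # stand-in for NVRAM/NVMe
        self._last_log = 0.0

    def maybe_log(self, readings: dict[int, tuple[float, float]]) -> None:
        now = time.monotonic()
        if now - self._last_log < self.interval_s:
            return                              # not yet time to sample
        self._last_log = now
        for core_id, (util, temp) in readings.items():
            self.records.append(CoreSample(now, core_id, util, temp))

logger = TimeSeriesLogger(interval_s=0.0)       # zero interval so the demo logs
logger.maybe_log({0: (85.0, 92.5), 1: (12.0, 61.0)})
print(len(logger.records), "samples logged")
```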
In a first example QDMRP policy, CPU core utilization is monitored as an indicator of the amount of stress to which the CPU cores are subjected. In the first example QDMRP policy, an interval of time (tracked by the example timer/clock 408) is used to determine a monitoring frequency that sets the periodicity at which the CPU core utilization data will be stored as time-series data by the example time-series data logger 420. In some examples, the CPU core utilization data is sensed by one or more CPU usage sensors and provided to the example CPU usage monitor 416. The time-series data logger 420 stores the time-series data in the example NVRAM 426. In addition, the example workload orchestrator 410 executes a round-robin load balancing algorithm to distribute workloads across the cores based on the CPU core utilization data. In some examples, the workloads are distributed with a goal of achieving an extended CPU core lifespan of twice the non-extended lifespan. TABLE 1 below presents example log data collected in connection with the example CPU core utilization based QDMRP policy.

In a second example QDMRP policy, digital thermal sensor (DTS) data is used to prevent the operation of any single CPU core at a high temperature for an extended period of time. The second example QDMRP policy can be deployed in products operating in high-temperature industrial environments using silicon that is only qualified for commercial temperatures. As the temperatures in industrial environments are high, degradation of the silicon is accelerated. The DTS monitoring is used to reduce the likelihood that any individual one of the CPU cores operates at high temperatures for extended periods of time. In some examples, this is achieved by offloading workloads between the CPU cores through the use of strategic, temperature-based switching and by keeping cores in inactive states intermittently.

In the example second QDMRP policy, time-series data logging similar to that used with the first QDMRP policy is deployed, except that instead of monitoring CPU usage, a junction temperature of each CPU core is measured. An example round-robin load balancing algorithm distributes workloads across the CPU cores based on DTS readings collected by the example digital thermal sensor monitor 418. As with the first QDMRP policy, an interval of time (tracked by the example timer/clock 408) is used to determine a monitoring frequency that sets the periodicity at which the example time series data logger 420 collects DTS monitoring data. The distribution of the workloads and intermittent core switching based on the DTS readings can ensure that the lifespan of the individual CPUs is extended to at least ten years (which exceeds a lifespan that would otherwise be achieved absent the application of the second QDMRP policy). TABLE 2 below presents example time-series log data collected in connection with the second (e.g., DTS monitoring based) QDMRP policy.

In an example third QDMRP, the example cores switcher 406 employs a switching scheme that accounts for both the CPU core utilization and the DTS temperatures. In the third QDMRP, a round-robin load balancing algorithm strategically distributes workloads across the cores based on a combination of CPU core utilization monitoring data and DTS monitoring data to extend the lifetime of individual ones of the CPU cores to at least ten years. As with the first and second QDMRP policies, an interval of time (tracked by the example timer/clock 408) is used to determine a monitoring frequency that sets the periodicity at which the example time series data logger 420 collects data.
In the third QDMRP, the collected data includes DTS monitoring data (collected by the example DTS monitor 418) and CPU core usage monitoring data (collected by the example CPU usage monitor 416). The distribution of the workloads and intermittent core switching based on both the DTS monitoring data and the CPU core utilization data can ensure that the lifespan of the individual CPUs is extended to at least ten years (which exceeds a lifespan that would otherwise be achieved absent the application of the third QDMRP policy). TABLE 3 below presents example time series log data collected in connection with the third QDMRP policy.

While an example manner of implementing the cores controller 402 is illustrated in FIG. 4 (also shown as the first cores controller 110 of FIG. 1, the second cores controller 205 of FIG. 2, and the third cores controller 304B of FIG. 3), one or more of the elements, processes, and/or devices illustrated in FIG. 4 may be combined, divided, re-arranged, omitted, eliminated, and/or implemented in any other way. Further, the example cores partitioner 404, the example cores switcher 406, the example timer/clock 408, the example workload orchestrator 410, the example policy selector 412, the example subset selector 414, the example CPU usage monitor 416, the example digital thermal sensor 418, the example time series data logger 420, the example operations transfer orchestrator 422, the example cores switchover configurer 424, and the example non-volatile random access memory (NVRAM) 426, and/or, more generally, the example cores controller 402 of FIG. 4, may be implemented by hardware, software, firmware, and/or any combination of hardware, software, and/or firmware. Thus, for example, any of the example cores partitioner 404, the example cores switcher 406, the example timer/clock 408, the example workload orchestrator 410, the example policy selector 412, the example subset selector 414, the example CPU usage monitor 416, the example digital thermal sensor 418, the example time series data logger 420, the example operations transfer orchestrator 422, the example cores switchover configurer 424, and the example non-volatile random access memory (NVRAM) 426, and/or, more generally, the example cores controller 402 of FIG. 4, could be implemented by one or more analog or digital circuit(s), logic circuits, programmable processor(s), programmable controller(s), graphics processing unit(s) (GPU(s)), digital signal processor(s) (DSP(s)), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)), and/or field programmable logic device(s) (FPLD(s)). When reading any of the apparatus or system claims of this patent to cover a purely software and/or firmware implementation, at least one of the example cores partitioner 404, the example cores switcher 406, the example timer/clock 408, the example workload orchestrator 410, the example policy selector 412, the example subset selector 414, the example CPU usage monitor 416, the example digital thermal sensor 418, the example time series data logger 420, the example operations transfer orchestrator 422, and/or the example cores switchover configurer 424 is/are hereby expressly defined to include a non-transitory computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), a Blu-ray disk, etc. including the software and/or firmware.
Further still, the example cores controller 402 of FIG. 4 may include one or more elements, processes, and/or devices in addition to, or instead of, those illustrated in FIG. 4, and/or may include more than one of any or all of the illustrated elements, processes, and devices. As used herein, the phrase "in communication," including variations thereof, encompasses direct communication and/or indirect communication through one or more intermediary components, and does not require direct physical (e.g., wired) communication and/or constant communication, but rather additionally includes selective communication at periodic intervals, scheduled intervals, aperiodic intervals, and/or one-time events.

Flowcharts representative of example hardware logic, machine readable instructions, hardware implemented state machines, and/or any combination thereof for implementing the cores controller 402 are shown in FIGS. 5A, 5B, and 6. The machine readable instructions may be one or more executable programs or portion(s) of an executable program for execution by a computer processor and/or processor circuitry, such as the processor 712 shown in the example processor platform 700 discussed below in connection with FIG. 7. The program may be embodied in software stored on a non-transitory computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a DVD, a Blu-ray disk, or a memory associated with the processor 712, but the entire program and/or parts thereof could alternatively be executed by a device other than the processor 712 and/or embodied in firmware or dedicated hardware. Further, although the example program is described with reference to the flowcharts illustrated in FIGS. 5A, 5B, and 6, many other methods of implementing the example cores controller 402 may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined. Additionally or alternatively, any or all of the blocks may be implemented by one or more hardware circuits (e.g., discrete and/or integrated analog and/or digital circuitry, an FPGA, an ASIC, a comparator, an operational-amplifier (op-amp), a logic circuit, etc.) structured to perform the corresponding operation without executing software or firmware. The processor circuitry may be distributed in different network locations and/or local to one or more devices (e.g., a multi-core processor in a single machine, multiple processors distributed across a server rack, etc.).

The machine readable instructions described herein may be stored in one or more of a compressed format, an encrypted format, a fragmented format, a compiled format, an executable format, a packaged format, etc. Machine readable instructions as described herein may be stored as data or a data structure (e.g., portions of instructions, code, representations of code, etc.) that may be utilized to create, manufacture, and/or produce machine executable instructions. For example, the machine readable instructions may be fragmented and stored on one or more storage devices and/or computing devices (e.g., servers) located at the same or different locations of a network or collection of networks (e.g., in the cloud, in edge devices, etc.). The machine readable instructions may require one or more of installation, modification, adaptation, updating, combining, supplementing, configuring, decryption, decompression, unpacking, distribution, reassignment, compilation, etc. in order to make them directly readable, interpretable, and/or executable by a computing device and/or other machine.
For example, the machine readable instructions may be stored in multiple parts, which are individually compressed, encrypted, and stored on separate computing devices, wherein the parts, when decrypted, decompressed, and combined, form a set of executable instructions that implement one or more functions that may together form a program such as that described herein.

In another example, the machine readable instructions may be stored in a state in which they may be read by processor circuitry, but require addition of a library (e.g., a dynamic link library (DLL)), a software development kit (SDK), an application programming interface (API), etc. in order to execute the instructions on a particular computing device or other device. In another example, the machine readable instructions may need to be configured (e.g., settings stored, data input, network addresses recorded, etc.) before the machine readable instructions and/or the corresponding program(s) can be executed in whole or in part. Thus, machine readable media, as used herein, may include machine readable instructions and/or program(s) regardless of the particular format or state of the machine readable instructions and/or program(s) when stored or otherwise at rest or in transit.

The machine readable instructions described herein can be represented by any past, present, or future instruction language, scripting language, programming language, etc. For example, the machine readable instructions may be represented using any of the following languages: C, C++, Java, C#, Perl, Python, JavaScript, HyperText Markup Language (HTML), Structured Query Language (SQL), Swift, etc.

As mentioned above, the example processes of FIGS. 5A, 5B, and 6 may be implemented using executable instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory, and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media.

"Including" and "comprising" (and all forms and tenses thereof) are used herein to be open ended terms. Thus, whenever a claim employs any form of "include" or "comprise" (e.g., comprises, includes, comprising, including, having, etc.) as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc. may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase "at least" is used as the transition term in, for example, a preamble of a claim, it is open-ended in the same manner as the terms "comprising" and "including" are open ended. The term "and/or" when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, and (7) A with B and with C.
As used herein in the context of describing structures, components, items, objects, and/or things, the phrase "at least one of A and B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing structures, components, items, objects, and/or things, the phrase "at least one of A or B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities, and/or steps, the phrase "at least one of A and B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities, and/or steps, the phrase "at least one of A or B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B.

As used herein, singular references (e.g., "a", "an", "first", "second", etc.) do not exclude a plurality. The term "a" or "an" entity, as used herein, refers to one or more of that entity. The terms "a" (or "an"), "one or more", and "at least one" can be used interchangeably herein. Furthermore, although individually listed, a plurality of means, elements, or method actions may be implemented by, e.g., a single unit or processor. Additionally, although individual features may be included in different examples or claims, these may possibly be combined, and the inclusion in different examples or claims does not imply that a combination of features is not feasible and/or advantageous.

A program 500 of FIGS. 5A and 5B includes block 502 at which a value of N (representing a target lifespan of a product in which cores are disposed) is set for reference by the example timer/clock 408 and/or the example cores switcher 406. In some examples, the value of N can be stored in the NVRAM 426 of FIG. 4. In some examples, the value of N can be entered at an input device by a system administrator. In some examples, the value of N can be entered prior to the sale of the product. A policy is selected at the example policy selector 412 (block 504). In some examples, the policy selector 412 can select from among a variety of policies based on, for example, a user preference, a type of product, an application to which the product will be put to use, an environment in which the product will be used, etc. The policy selector 412 determines whether a first scheme (Scheme 1) was selected (block 506) and, if so, the example cores partitioner 404 partitions the cores of a printed circuit board into a first group (Group 1) and a second group (Group 2) (block 508). The example cores switcher 406 activates one of the groups of cores (e.g., Group 1) (block 510). The example timer/clock 408 monitors a duration of time starting at the time of activation (block 512).
If the duration of time that has elapsed since the activation of the cores of Group 1 is not equal to a threshold amount of time (e.g., N/2), as determined by the timer/clock 408 (or the example cores switcher 406 based on an output of the timer/clock 408) (block 514), the timer/clock 408 continues to monitor the duration of time elapsed from the activation of the cores of Group 1 (block 512).

When the duration of time is equal to N/2 (one half the lifespan of the product), as determined by the example timer/clock 408 (or the example cores switcher 406 based on an output of the timer/clock 408) (block 514), the cores switcher 406 responds by switching from the cores of the Group 1 to the cores of the Group 2 (e.g., the cores switcher 406 causes the cores of the Group 1 to be inactivated and causes the cores of the Group 2 to be activated) (block 516). In the meantime, the timer/clock 408 continues to monitor the elapsed time from the activation of the cores of Group 1. When the elapsed time is equal to N, as determined by the timer/clock 408 (or the cores switcher 406 based on an output of the timer/clock 408) (block 518), the product has reached its lifespan and the execution of the portion of the program 500 that pertains to the selection of the first scheme ends. At that time, any number of actions may be taken with respect to the product, including replacement, inactivation, upgrade, etc.

Referring still to FIG. 5A, if the policy selector 412 (FIG. 4) selects (based on programmed information and/or information from an input device) the second scheme (Scheme 2) (block 520), the example cores partitioner 404 partitions the cores of a printed circuit board into a first group of cores (Group 1) and a second group of cores (Group 2) (block 522). The example cores switcher 406 activates one of the groups of cores (e.g., Group 1) but does not activate the second one of the groups of cores (e.g., Group 2) (block 524). In some examples, the example timer/clock 408 initializes a counter denoted by the variable "i" (block 526). The timer/clock 408 also monitors a duration of time starting at the time of activation of the cores of Group 1 (block 528). In some examples, the timer/clock 408 (or the example cores switcher 406 based on an output of the timer/clock 408) determines that the duration of time that has elapsed since the activation of the cores of Group 1 is not yet equal to N (block 530). In some such examples, the timer/clock 408 and/or the cores switcher 406 determines whether the amount of time that has elapsed since the activation of the cores of Group 1 is equal to the value of the counter, i, multiplied by a threshold amount of time (block 532). In some examples, the threshold is an amount of time equal to the lifespan "N" of the product divided by a whole number and represents an amount of time to be allowed to elapse between core switches executed by the cores switcher 406. Thus, if the amount of elapsed time is equal to the threshold value, the cores switcher 406 executes a core switch (the active cores are deactivated and vice versa) (block 534). The timer/clock 408 increments the counter, i, by 1 (e.g., i = i + 1) (block 536), and the timer/clock 408 again determines whether the time elapsed since the first cores activation is equal to N (block 530).
When the elapsed time is not equal to N, the program 500 re-executes the operations described with respect to the blocks 532, 534 and 536. When the elapsed time is equal to N, the lifespan of the product in which the cores are installed is reached and the program 500 ends. Thereafter, any number of actions may be performed with respect to the product as described above.

Referring still to FIG. 5A, if the policy selector 412 (FIG. 4), based on programmed information and/or information from an input device, or any of a variety of factors, does not select the second scheme (block 520), a third scheme (Scheme 3) is deployed and the program 500 continues from the marker A of FIG. 5A to the marker A of FIG. 5B. Thereafter, the example cores partitioner 404 partitions the cores of a printed circuit board into a set of round robin (RR) groups (block 538). In some examples, the RR groups each include two cores, though any other number of cores can instead be included in the RR groups. The example timer/clock 408 initializes a counter denoted by the variable "i" to a value of 1 (block 540). The value of the counter, "i," identifies which of the RR groups formed by the cores partitioner 404 is to be deactivated.

The example cores switcher 406 activates the cores of the RR groups except for the cores of the i-th RR group, which are deactivated (block 542). As will be understood, at the inception of the first deactivation operation (block 542), none of the cores of the RR groups may be active, such that the inactivation of the cores of the i-th RR group is not performed.

The example timer/clock 408 monitors an amount of elapsed time since the deactivation of the i-th RR group (block 544). In some examples, the timer/clock 408 and/or the cores switcher 406 determines whether the timer is equal to the value of the variable N, which, as described above, denotes the lifespan of the product (block 546). If so, the lifespan of the product has been reached and the portion of the program 500 associated with the third scheme is completed such that the program ends.

In some examples, the timer/clock 408 or the cores switcher 406 determines that the timer/clock 408 has not reached the value N (block 546) and then determines whether the value of the timer is equal to the value of the counter, i, multiplied by a threshold value (block 548). In some examples, the threshold value is equal to a portion of the lifespan of the product and represents an amount of time between which the cores switcher 406 will perform cores switches. When the timer/clock 408 determines the timer is equal to (i x threshold) (block 548), the example timer/clock 408 causes the counter to be incremented by 1 (e.g., i = i + 1) (block 550). The cores switcher 406 then executes another core switch by deactivating the i-th RR group of cores and activating the other RR groups of cores (block 542). Thereafter, the operations described with respect to blocks 544, 546, 548, and 550 are repeated such that a next, i-th group of cores is deactivated, etc., until the amount of elapsed time since the activation of the first RR group of cores is equal to the lifespan of the product and the portion of the program 500 associated with the third scheme ends. As described above, after the program 500 ends, any of a variety of actions can be performed relative to the product.

FIG. 6 illustrates a program 600 that can be executed to implement any of the first, second, third and fourth cores controllers of any of FIGS. 1-4.
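Before turning to the details of the program 600, the round robin rotation of the third scheme (blocks 538-550) can also be sketched. The helper functions are again hypothetical stand-ins rather than anything disclosed by the flowcharts:

    import time

    def scheme_3(cores, lifespan_n, threshold, group_size=2, poll=1.0):
        # Blocks 538-550: all RR groups run except the i-th, and the resting
        # group advances every threshold seconds until N elapses.
        rr_groups = partition_cores(cores, len(cores) // group_size)  # block 538
        i = 1                                             # block 540

        def rotate(resting_index):                        # block 542
            for index, group in enumerate(rr_groups):
                if index == resting_index:
                    deactivate(group)
                else:
                    activate(group)

        rotate((i - 1) % len(rr_groups))
        start = time.monotonic()                          # block 544
        while time.monotonic() - start < lifespan_n:      # block 546
            if time.monotonic() - start >= i * threshold: # block 548
                i += 1                                    # block 550
                rotate((i - 1) % len(rr_groups))          # block 542 again
            time.sleep(poll)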
In some examples, the program 600 can perform a number of different quality degradation monitoring reservation policies (QDMRPs). In some examples, the program 600 begins at a block 602 at which the example policy selector 412 selects one of the life-extending policies (e.g., QDMRPs) based on a variety of factors as described above or based on an input provided at an input device. Based on the selected policy, or any of a variety of other factors, the example cores switchover configurer 424 executes an algorithm to enable CPU changeover/switchover (block 604). In some such examples, the cores switchover configurer 424 configures the cores of a product by, for example, configuring the cores to communicate (directly or indirectly) with one another, configuring the cores to receive and respond appropriately to activation signals and/or inactivation signals, configuring memories and/or other parts of the cores to have the same addresses, and/or any other of a variety of configuration operations that prepare the cores for switching between inactive and active states. In some examples, the algorithms executed by the cores switchover configurer 424 can be different based on which policy was selected by the policy selector 412.

Next, a path (e.g., PATH 1, PATH 2, PATH 3) is chosen by the policy selector 412 based on the selected policy (block 606). In some examples, PATH 1 refers to a QDMRP that monitors CPU core utilization as an indicator of an amount of stress to which the CPU cores are subjected and makes cores switches based on the monitoring. Thus, when PATH 1 is chosen, a monitoring interval of time (also referred to as a CPU utilization interval) is provided by, for example, the cores switcher 406 (block 608) based on any number of factors including the capabilities of the CPU usage monitor 416, an environment in which the product is installed, an operating speed of the cores, etc. In some examples, the CPU utilization interval can be set at a time at which the product or cores are manufactured or can be set based on user input and/or programmed into the cores controller 402 of FIG. 4. Next, the CPU usage monitor 416 begins monitoring the CPU usage in accordance with the CPU utilization interval (block 610). The CPU usage monitoring data is supplied to the example time series data logger 420 of FIG. 4, which logs the data in a time-series format and stores the logged data in the example NVRAM 426 (block 612). The cores switcher 406 of FIG. 4 causes core switches to occur based on a workload distribution algorithm that uses the logged time-series core utilization data (block 614). In some examples, the cores switching algorithm switches among groups of round robin (RR) cores formed by the example cores partitioner 404 (block 616). In some examples, a different RR group is deactivated during each cores switch and the remaining ones of the RR groups are activated (or, if already active, are not affected). In some examples, the cores switching algorithm limits the amount of time that the cores are experiencing high CPU usage. In some examples, the cores switcher 406 executes the algorithm until a target lifespan of the product (e.g., "N") has been reached, wherein the target lifespan exceeds the lifespan of the cores.
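The workload distribution algorithm of block 614 is left open by the description above; one plausible, purely illustrative reading, reusing the hypothetical helpers from the earlier sketches, is a loop that samples utilization each interval, appends it to the time-series log, and rests the busiest RR group:

    import random
    import time

    def cpu_utilization(core):
        # Stand-in for the CPU usage monitor 416; returns a 0-100% load figure.
        return random.uniform(0.0, 100.0)

    def path_1(rr_groups, lifespan_n, utilization_interval, log):
        # Blocks 610-616: sample usage each interval, log it as a time series
        # (as the time series data logger 420 would into the NVRAM 426), and
        # rest the RR group under the heaviest load.
        start = time.monotonic()
        while time.monotonic() - start < lifespan_n:
            time.sleep(utilization_interval)
            sample = [max(cpu_utilization(core) for core in group)
                      for group in rr_groups]
            log.append((time.monotonic() - start, sample))  # block 612
            busiest = sample.index(max(sample))             # block 614
            for index, group in enumerate(rr_groups):       # block 616
                if index == busiest:
                    deactivate(group)
                else:
                    activate(group)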
When the target lifespan is reached, the cores switcher 406 halts execution of the algorithm and the portion of the program 600 associated with CPU usage monitoring ends. In some examples, as described above, the example workload orchestrator 410, the example operations transfer orchestrator 422, and/or the cores switchover configurer 424 are involved, as needed, to achieve a smooth and properly balanced transfer of the workloads during a cores switch.

In some examples, PATH 2 is chosen based on the selected life extender policy (see block 602). In some examples, PATH 2 corresponds to a QDMRP that monitors core operating temperatures based on information collected by the example digital thermal sensor (DTS) monitor 418 of FIG. 4. Thus, when PATH 2 is chosen, a monitoring interval of time (also referred to as a DTS monitor interval) is provided by, for example, the cores switcher 406 (block 618) based on any number of factors including the capabilities of the digital thermal sensors, an environment in which the product is installed, an operating speed of the cores, etc. In some examples, the DTS monitor interval can be set at a time at which the product or cores are manufactured or can be set based on user input and/or programmed into the cores controller 402 of FIG. 4. Next, the DTS monitor 418 begins monitoring data provided by the digital thermal sensors in accordance with the DTS monitoring interval (block 620). The DTS monitoring data is supplied to the example time series data logger 420 of FIG. 4, which logs the data in a time-series format and stores the logged data in the example NVRAM 426 (block 622). The cores switcher 406 of FIG. 4 causes core switches to occur based on a workload distribution algorithm that uses the logged time-series DTS data (block 614). As described above, the cores switching algorithm switches among groups of round robin (RR) cores formed by the example cores partitioner 404 (block 616). In some examples, a different RR group is deactivated during each cores switch and the remaining ones of the RR groups are activated (or, if already active, are not affected). In some examples, the cores switching algorithm limits the amount of time that the cores are exposed to high temperatures associated with silicon degradation. In some examples, the cores switcher 406 executes the algorithm until a target lifespan of the product (e.g., "N") has been reached, wherein the target lifespan exceeds the lifespan of the cores. When the target lifespan is reached, the cores switcher 406 halts execution of the algorithm and the portion of the program 600 associated with DTS monitoring ends.

The distribution of the workloads and intermittent core switching based on both the DTS monitoring data and the CPU core utilization data can ensure that the lifespan of the individual CPUs is extended to at least ten years (which exceeds a lifespan that would otherwise be achieved absent the application of the third QDMRP policy). TABLE 3 below presents example time series log data collected in connection with the third QDMRP policy.

In some examples, PATH 3 is chosen based on the selected life extender policy (see block 602). In some examples, PATH 3 corresponds to a QDMRP that monitors both core operating temperatures and core usage data based on information collected by the example CPU usage monitor 416 and the example digital thermal sensor (DTS) monitor 418, respectively.
Thus, when PATH 3 is chosen, the temperature monitoring interval of time (also referred to as a DTS monitor interval) and the CPU utilization monitoring interval are provided by, for example, the cores switcher 406 (block 608 and block 618). Next, the DTS monitor 418 and the CPU usage monitor 416 begin monitoring data provided by the digital thermal sensors and the CPUs in accordance with the respective, corresponding time intervals. (Block 624.) The monitoring data is supplied to the example time series data logger 420 of FIG. 4, which logs the data in a time-series format and stores the logged data in the example NVRAM 426 (block 626). The cores switcher 406 of FIG. 4 causes core switches to occur based on a workload distribution algorithm that uses the logged time-series DTS data and CPU utilization data (block 614). As described above, the cores switching algorithm switches among groups of round robin (RR) cores formed by the example cores partitioner 404 (block 616). In some examples, a different RR group is deactivated during each cores switch and the remaining ones of the RR groups are activated (or, if already active, are not affected). In some examples, the cores switching algorithm operates to strategically switch the cores in a manner that limits the amount of time that the cores are exposed to high temperatures and the amount of time that the cores are experiencing high workload demand. In some examples, the cores switcher 406 executes the algorithm until a target lifespan of the product (e.g., "N") has been reached, wherein the target lifespan exceeds the lifespan of the cores. When the target lifespan is reached, the cores switcher 406 halts execution of the algorithm and the portion of the program 600 associated with the CPU usage and DTS monitoring ends.

As will be understood, the switching algorithm executed by the cores switcher 406 (block 614) can vary based on the PATH chosen at the policy selector 412 of FIG. 4. In some examples, the policy selector 412 notifies other components of the cores controller 402 so that the proper algorithm is executed using appropriate data.

FIG. 7 is a block diagram of an example processor platform 700 structured to execute the instructions of FIGS. 5A, 5B, and 6 to implement the cores controller 402 of FIG. 4. The processor platform 700 can be, for example, a programmable logic controller (PLC), a programmable automation controller (PAC), an embedded controller (EC), an industrial PC (IPC), a Human Machine Interface (HMI), a server, a personal computer, a workstation, a self-learning machine (e.g., a neural network), a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad™), a headset or other wearable device, or any other type of computing device.

The processor platform 700 of the illustrated example includes a processor 712. The processor 712 of the illustrated example is hardware. For example, the processor 712 can be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, or controllers from any desired family or manufacturer. The hardware processor may be a semiconductor based (e.g., silicon based) device.
In this example, the processor implements the example cores partitioner 404, the example cores switcher 406, the example timer/clock 408, the example workload orchestrator 410, the example policy selector 412, the example subset selector 414, the example CPU usage monitor 416, the example digital thermal sensor (DTS) monitor 418, the example time series data logger 420, the example operations transfer orchestrator 422, and the example cores switchover configurer 424, and/or, more generally, the example cores controller 402 of FIG. 4.

The processor 712 of the illustrated example includes a local memory 713 (e.g., a cache). The processor 712 of the illustrated example is in communication with a main memory including a volatile memory 714 and a non-volatile memory 716 via a bus 718. The volatile memory 714 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®) and/or any other type of random access memory device. The non-volatile memory 716 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 714, 716 is controlled by a memory controller. In some examples, the non-volatile memory 716 can implement the NVRAM 426 of FIG. 4.

The processor platform 700 of the illustrated example also includes an interface circuit 720. The interface circuit 720 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), a Bluetooth® interface, a near field communication (NFC) interface, and/or a PCI express interface.

In the illustrated example, one or more input devices 722 are connected to the interface circuit 720. The input device(s) 722 permit(s) a user to enter data and/or commands into the processor 712. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, an isopoint and/or a voice recognition system.

One or more output devices 724 are also connected to the interface circuit 720 of the illustrated example. The output devices 724 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube display (CRT), an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer, and/or a speaker. The interface circuit 720 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip, and/or a graphics driver processor.

The interface circuit 720 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 726. The communication can be via, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-sight wireless system, a cellular telephone system, etc.

The processor platform 700 of the illustrated example also includes one or more mass storage devices 728 for storing software and/or data.
Examples of such mass storage devices 728 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, redundant array of independent disks (RAID) systems, and digital versatile disk (DVD) drives.

The machine executable instructions 732 of FIGS. 5A, 5B, and 6 may be stored in the mass storage device 728, in the volatile memory 714, in the non-volatile memory 716, and/or on a removable non-transitory computer readable storage medium such as a CD or DVD.

FIGS. 7, 8, and 9 illustrate environments in which the apparatus, systems, methods and articles of manufacture can be implemented. For example, a block diagram illustrating an example software distribution platform 805 to distribute software such as the example computer readable instructions 732 of FIG. 7 to third parties is illustrated in FIG. 8. The example software distribution platform 805 may be implemented by any computer server, data facility, cloud service, etc., capable of storing and transmitting software to other computing devices. The third parties may be customers of the entity owning and/or operating the software distribution platform. For example, the entity that owns and/or operates the software distribution platform may be a developer, a seller, and/or a licensor of software such as the example computer readable instructions 732 of FIG. 7. The third parties may be consumers, users, retailers, OEMs, etc., who purchase and/or license the software for use and/or re-sale and/or sub-licensing. In the illustrated example, the software distribution platform 805 includes one or more servers and one or more storage devices. The storage devices store the computer readable instructions 732, which may correspond to the example computer readable instructions 500 and 600 of FIGS. 5A, 5B and 6, as described above. The one or more servers of the example software distribution platform 805 are in communication with a network 810, which may correspond to any one or more of the Internet and/or any of the example networks 726 described above. In some examples, the one or more servers are responsive to requests to transmit the software to a requesting party as part of a commercial transaction. Payment for the delivery, sale and/or license of the software may be handled by the one or more servers of the software distribution platform and/or via a third party payment entity. The servers enable purchasers and/or licensors to download the computer readable instructions 732 from the software distribution platform 805. For example, the software, which may correspond to the example computer readable instructions 500 and 600 of FIGS. 5A, 5B and 6, may be downloaded to the example processor platform 700, which is to execute the computer readable instructions 732 to implement the example cores partitioner 404, the example cores switcher 406, the example timer/clock 408, the example workload orchestrator 410, the example policy selector 412, the example subset selector 414, the example CPU usage monitor 416, the example digital thermal sensor (DTS) monitor 418, the example time series data logger 420, the example operations transfer orchestrator 422, and the example cores switchover configurer 424, and/or, more generally, the example cores controller 402 of FIG. 4. In some examples, one or more servers of the software distribution platform 805 periodically offer, transmit, and/or force updates to the software (e.g., the example computer readable instructions 732 of FIG. 7) to ensure improvements, patches, updates, etc.
are distributed and applied to the software at the end user devices.

FIG. 9 is a block diagram 900 showing an overview of a configuration for edge computing, which includes a layer of processing referred to in many of the following examples as an "edge cloud". As shown, the edge cloud 910 is co-located at an edge location, such as an access point or base station 940, a local processing hub 950, or a central office 920, and thus may include multiple entities, devices, and equipment instances. The edge cloud 910 is located much closer to the endpoint (consumer and producer) data sources 960 (e.g., autonomous vehicles 961, user equipment 962, business and industrial equipment 963, video capture devices 964, drones 965, smart cities and building devices 966, sensors and IoT devices 967, etc.) than the cloud data center 930. Compute, memory, and storage resources which are offered at the edges in the edge cloud 910 are critical to providing ultra-low latency response times for services and functions used by the endpoint data sources 960, as well as to reducing network backhaul traffic from the edge cloud 910 toward the cloud data center 930, thus improving energy consumption and overall network usage, among other benefits.

Compute, memory, and storage are scarce resources, and generally decrease moving toward the edge (e.g., fewer processing resources are available at consumer endpoint devices than at a base station, and fewer at a base station than at a central office). However, the closer that the edge location is to the endpoint (e.g., user equipment (UE)), the more that space and power are often constrained. Thus, edge computing attempts to reduce the amount of resources needed for network services through the distribution of more resources which are located closer both geographically and in network access time. In this manner, edge computing attempts to bring the compute resources to the workload data where appropriate, or bring the workload data to the compute resources.

The following describes aspects of an edge cloud architecture that covers multiple potential deployments and addresses restrictions that some network operators or service providers may have in their own infrastructures. These include variation of configurations based on the edge location (because edges at a base station level, for instance, may have more constrained performance and capabilities in a multi-tenant scenario); configurations based on the type of compute, memory, storage, fabric, acceleration, or like resources available to edge locations, tiers of locations, or groups of locations; the service, security, and management and orchestration capabilities; and related objectives to achieve usability and performance of end services. These deployments may accomplish processing in network layers that may be considered as "near edge", "close edge", "local edge", "middle edge", or "far edge" layers, depending on latency, distance, and timing characteristics.

Edge computing is a developing paradigm where computing is performed at or closer to the "edge" of a network, typically through the use of a compute platform (e.g., x86 or ARM compute hardware architecture) implemented at base stations, gateways, network routers, or other devices which are much closer to endpoint devices producing and consuming the data. For example, edge gateway servers may be equipped with pools of memory and storage resources to perform computation in real-time for low latency use-cases (e.g., autonomous driving or video surveillance) for connected client devices.
Or as an example, base stations may be augmented with compute and acceleration resources to directly process service workloads for connected user equipment, without further communicating data via backhaul networks. Or as another example, central office network management hardware may be replaced with standardized compute hardware that performs virtualized network functions and offers compute resources for the execution of services and consumer functions for connected devices. Within edge computing networks, there may be scenarios in which the compute resource will be "moved" to the data, as well as scenarios in which the data will be "moved" to the compute resource. Or as an example, base station compute, acceleration and network resources can provide services in order to scale to workload demands on an as-needed basis by activating dormant capacity (subscription, capacity on demand) in order to manage corner cases or emergencies, or to provide longevity for deployed resources over a significantly longer implemented lifecycle.

FIG. 10 illustrates operational layers among endpoints, an edge cloud, and cloud computing environments. Specifically, FIG. 10 depicts examples of computational use cases 1005, utilizing the edge cloud 910 among multiple illustrative layers of network computing. The layers begin at an endpoint (devices and things) layer 1000, which accesses the edge cloud 910 to conduct data creation, analysis, and data consumption activities. The edge cloud 910 may span multiple network layers, such as an edge devices layer 1010 having gateways, on-premise servers, or network equipment (nodes 1015) located in physically proximate edge systems; a network access layer 1020, encompassing base stations, radio processing units, network hubs, regional data centers (DC), or local network equipment (equipment 1025); and any equipment, devices, or nodes located therebetween (in layer 1012, not illustrated in detail). The network communications within the edge cloud 910 and among the various layers may occur via any number of wired or wireless mediums, including via connectivity architectures and technologies not depicted.

Examples of latency, resulting from network communication distance and processing time constraints, may range from less than a millisecond (ms) when among the endpoint layer 1000, under 5 ms at the edge devices layer 1010, to between 10 and 40 ms when communicating with nodes at the network access layer 1020. Beyond the edge cloud 910 are core network 1030 and cloud data center 1040 layers, each with increasing latency (e.g., between 50-60 ms at the core network layer 1030, to 100 or more ms at the cloud data center layer). As a result, operations at a core network data center 1035 or a cloud data center 1045, with latencies of at least 50 to 100 ms or more, will not be able to accomplish many time-critical functions of the use cases 1005. Each of these latency values is provided for purposes of illustration and contrast; it will be understood that the use of other access network mediums and technologies may further reduce the latencies. In some examples, respective portions of the network may be categorized as "close edge", "local edge", "near edge", "middle edge", or "far edge" layers, relative to a network source and destination.
For instance, from the perspective of the core network data center 1035 or a cloud data center 1045, a central office or content data network may be considered as being located within a "near edge" layer ("near" to the cloud, having high latency values when communicating with the devices and endpoints of the use cases 1005), whereas an access point, base station, on-premise server, or network gateway may be considered as located within a "far edge" layer ("far" from the cloud, having low latency values when communicating with the devices and endpoints of the use cases 1005). It will be understood that other categorizations of a particular network layer as constituting a "close", "local", "near", "middle", or "far" edge may be based on latency, distance, number of network hops, or other measurable characteristics, as measured from a source in any of the network layers 1000-1040.

The various use cases 1005 may access resources under usage pressure from incoming streams, due to multiple services utilizing the edge cloud. To achieve results with low latency, the services executed within the edge cloud 910 balance varying requirements in terms of: (a) Priority (throughput or latency) and Quality of Service (QoS) (e.g., traffic for an autonomous car may have higher priority than a temperature sensor in terms of response time requirement; or, a performance sensitivity/bottleneck may exist at a compute/accelerator, memory, storage, or network resource, depending on the application); (b) Reliability and Resiliency (e.g., some input streams need to be acted upon and the traffic routed with mission-critical reliability, whereas some other input streams may tolerate an occasional failure, depending on the application); and (c) Physical constraints (e.g., power, cooling and form-factor).

The end-to-end service view for these use cases involves the concept of a service-flow and is associated with a transaction. The transaction details the overall service requirement for the entity consuming the service, as well as the associated services for the resources, workloads, workflows, and business functional and business level requirements. The services executed under the "terms" described may be managed at each layer in a way to assure real time, and runtime contractual compliance for the transaction during the lifecycle of the service. When a component in the transaction is missing its agreed-to SLA, the system as a whole (components in the transaction) may provide the ability to (1) understand the impact of the SLA violation, (2) augment other components in the system to resume the overall transaction SLA, and (3) implement steps to remediate.

Thus, with these variations and service features in mind, edge computing within the edge cloud 910 may provide the ability to serve and respond to multiple applications of the use cases 1005 (e.g., object tracking, video surveillance, connected cars, etc.) in real-time or near real-time, and meet ultra-low latency requirements for these multiple applications. These advantages enable a whole new class of applications (Virtual Network Functions (VNFs), Function as a Service (FaaS), Edge as a Service (EaaS), standard processes, etc.), which cannot leverage conventional cloud computing due to latency or other limitations.

However, with the advantages of edge computing come the following caveats. The devices located at the edge are often resource constrained and therefore there is pressure on usage of edge resources.
Typically, this is addressed through the pooling of memory and storage resources for use by multiple users (tenants) and devices. The edge may be power and cooling constrained, and therefore the power usage needs to be accounted for by the applications that are consuming the most power. There may be inherent power-performance tradeoffs in these pooled memory resources, as many of them are likely to use emerging memory technologies, where more power requires greater memory bandwidth. Likewise, improved security of hardware and root of trust trusted functions are also required, because edge locations may be unmanned and may even need permissioned access (e.g., when housed in a third-party location). Such issues are magnified in the edge cloud 910 in a multi-tenant, multi-owner, or multi-access setting, where services and applications are requested by many users, especially as network usage dynamically fluctuates and the composition of the multiple stakeholders, use cases, and services changes.

At a more generic level, an edge computing system may be described to encompass any number of deployments at the previously discussed layers operating in the edge cloud 910 (network layers 1000-1040), which provide coordination from client and distributed computing devices. One or more edge gateway nodes, one or more edge aggregation nodes, and one or more core data centers may be distributed across layers of the network to provide an implementation of the edge computing system by or on behalf of a telecommunication service provider ("telco", or "TSP"), internet-of-things service provider, cloud service provider (CSP), enterprise entity, or any other number of entities. Various implementations and configurations of the edge computing system may be provided dynamically, such as when orchestrated to meet service objectives.

Consistent with the examples provided herein, a client compute node may be embodied as any type of endpoint component, device, appliance, or other thing capable of communicating as a producer or consumer of data. Further, the label "node" or "device" as used in the edge computing system does not necessarily mean that such node or device operates in a client or agent/minion/follower role; rather, any of the nodes or devices in the edge computing system refer to individual entities, nodes, or subsystems which include discrete or connected hardware or software configurations to facilitate or use the edge cloud 910.

As such, the edge cloud 910 is formed from network components and functional features operated by and within edge gateway nodes, edge aggregation nodes, or other edge compute nodes among network layers 1010-1030. The edge cloud 910 thus may be embodied as any type of network that provides edge computing and/or storage resources which are proximately located to radio access network (RAN) capable endpoint devices (e.g., mobile computing devices, IoT devices, smart devices, etc.), which are discussed herein. In other words, the edge cloud 910 may be envisioned as an "edge" which connects the endpoint devices and traditional network access points that serve as an ingress point into service provider core networks, including mobile carrier networks (e.g., Global System for Mobile Communications (GSM) networks, Long-Term Evolution (LTE) networks, 5G/6G networks, etc.), while also providing storage and/or compute capabilities.
Other types and forms of network access (e.g., Wi-Fi, long-range wireless, wired networks including optical networks) may also be utilized in place of or in combination with such 3GPP carrier networks.

The network components of the edge cloud 910 may be servers, multi-tenant servers, appliance computing devices, and/or any other type of computing devices. For example, the edge cloud 910 may include an appliance computing device that is a self-contained electronic device including a housing, a chassis, a case, or a shell. In some circumstances, the housing may be dimensioned for portability such that it can be carried by a human and/or shipped. Example housings may include materials that form one or more exterior surfaces that partially or fully protect contents of the appliance, in which protection may include weather protection, hazardous environment protection (e.g., EMI, vibration, extreme temperatures), and/or enable submergibility. Example housings may include power circuitry to provide power for stationary and/or portable implementations, such as AC power inputs, DC power inputs, AC/DC or DC/AC converter(s), power regulators, transformers, charging circuitry, batteries, wired inputs, and/or wireless power inputs. Example housings and/or surfaces thereof may include or connect to mounting hardware to enable attachment to structures such as buildings, telecommunication structures (e.g., poles, antenna structures, etc.), and/or racks (e.g., server racks, blade mounts, etc.). Example housings and/or surfaces thereof may support one or more sensors (e.g., temperature sensors, vibration sensors, light sensors, acoustic sensors, capacitive sensors, proximity sensors, etc.). One or more such sensors may be contained in, carried by, or otherwise embedded in the surface and/or mounted to the surface of the appliance. Example housings and/or surfaces thereof may support mechanical connectivity, such as propulsion hardware (e.g., wheels, propellers, etc.) and/or articulating hardware (e.g., robot arms, pivotable appendages, etc.). In some circumstances, the sensors may include any type of input devices such as user interface hardware (e.g., buttons, switches, dials, sliders, etc.). In some circumstances, example housings include output devices contained in, carried by, embedded therein, and/or attached thereto. Output devices may include displays, touchscreens, lights, LEDs, speakers, I/O ports (e.g., USB), etc. In some circumstances, edge devices are devices presented in the network for a specific purpose (e.g., a traffic light), but may have processing and/or other capacities that may be utilized for other purposes. Such edge devices may be independent from other networked devices and may be provided with a housing having a form factor suitable for its primary purpose, yet be available for other compute tasks that do not interfere with its primary task. Edge devices include Internet of Things devices. The appliance computing device may include hardware and software components to manage local issues such as device temperature, vibration, resource utilization, updates, power issues, physical and network security, etc. Example hardware for implementing an appliance computing device is described in conjunction with FIG. 7. The edge cloud 910 may also include one or more servers and/or one or more multi-tenant servers. Such a server may include an operating system and implement a virtual computing environment.
A virtual computing environment may include a hypervisor managing (e.g., spawning, deploying, destroying, etc.) one or more virtual machines, one or more containers, etc. Such virtual computing environments provide an execution environment in which one or more applications and/or other software, code or scripts may execute while being isolated from one or more other applications, software, code or scripts.

In FIG. 11, various client endpoints 1110 (in the form of mobile devices, computers, autonomous vehicles, business computing equipment, industrial processing equipment) exchange requests and responses that are specific to the type of endpoint network aggregation. For instance, client endpoints 1110 may obtain network access via a wired broadband network, by exchanging requests and responses 1122 through an on-premise network system 1132. Some client endpoints 1110, such as mobile computing devices, may obtain network access via a wireless broadband network, by exchanging requests and responses 1124 through an access point (e.g., cellular network tower) 1134. Some client endpoints 1110, such as autonomous vehicles, may obtain network access for requests and responses 1126 via a wireless vehicular network through a street-located network system 1136. However, regardless of the type of network access, the TSP may deploy aggregation points 1142, 1144 within the edge cloud 910 to aggregate traffic and requests. Thus, within the edge cloud 910, the TSP may deploy various compute and storage resources, such as at edge aggregation nodes 1140, to provide requested content. The edge aggregation nodes 1140 and other systems of the edge cloud 910 are connected to a cloud or data center 1160, which uses a backhaul network 1150 to fulfill higher-latency requests from a cloud/data center for websites, applications, database servers, etc. Additional or consolidated instances of the edge aggregation nodes 1140 and the aggregation points 1142, 1144, including those deployed on a single server framework, may also be present within the edge cloud 910 or other areas of the TSP infrastructure.

From the foregoing, it will be appreciated that example methods, apparatus and articles of manufacture have been disclosed that extend the lifespan of CPU cores through strategic switching of core usage. The disclosed methods, apparatus and articles of manufacture improve the efficiency of using a computing device by extending the lifespan of a product that incorporates the computing device. In addition, the disclosed methods, systems, apparatus and articles of manufacture cause cores to experience less degradation from high temperatures and heavy workloads and thereby can permit the usage of commercial grade cores in industrial applications. As a result, there is less downtime, as products using the methods, apparatus, systems and articles of manufacture disclosed herein require less frequent replacement. Moreover, because commercial grade cores are less expensive to manufacture than industrial grade cores, the methods, apparatus, systems, and articles of manufacture also provide a cost saving. The disclosed methods, apparatus and articles of manufacture are accordingly directed to one or more improvement(s) in the functioning of a computer.

Example methods, apparatus, systems, and articles of manufacture to extend the lifespan of embedded processors are disclosed herein.
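Several of the examples below (e.g., examples 14, 27, and 40) recite an algorithm that uses a combination of a first series of core usage values and a first series of core temperature values. One hypothetical decision rule consistent with that recitation, but in no way the only one, is sketched here; the limits and window size are illustrative assumptions:

    def should_switch(usage_series, temperature_series,
                      usage_limit=80.0, temp_limit_c=85.0, window=5):
        # Switch cores when the recent averages of both the logged usage
        # series and the logged temperature series exceed their limits.
        if len(usage_series) < window or len(temperature_series) < window:
            return False  # not enough logged samples yet
        recent_usage = usage_series[-window:]
        recent_temp = temperature_series[-window:]
        return (sum(recent_usage) / window > usage_limit
                and sum(recent_temp) / window > temp_limit_c)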
Further examples and combinations thereof include the following:

Example 1 includes an apparatus having a policy selector to select a policy, based on input information. The policy extends an operating lifespan of a microprocessor having a plurality of cores. The apparatus also includes a cores partitioner to divide, based on the selected policy, the plurality of cores of the microprocessor into subsets of cores, including a first subset and a second subset. A sensor monitors, based on the selected policy, at least one operational parameter of the plurality of cores. A cores switcher switches a first core of the first subset of cores from active to inactive and switches a second core of the second subset of cores from inactive to active based on the at least one operational parameter. In example 1, the core switches reduce an amount of degradation experienced by the first core and the second core.

Example 2 includes the apparatus of example 1. In example 2, the input information on which selection of the policy is based includes at least one of a user preference, a type of product in which the microprocessor is installed, an application in which the product is to operate, or an environment in which the product is to operate.

Example 3 includes the apparatus of example 1. In example 3, at least some of the plurality of cores are inactive and at least some of the plurality of cores are active during operation of the microprocessor.

Example 4 includes the apparatus of example 1. In example 4, the plurality of cores are rated to operate in a first environment at a first temperature, are operating in a second environment at a second temperature, the second temperature accelerates degradation of silicon of the plurality of cores, the second temperature is higher than the first temperature, and the cores switcher operates to limit an amount of time that any of the plurality of cores operate in the second environment.

Example 5 includes the apparatus of example 1. The apparatus of example 5 further includes an operations transfer orchestrator to orchestrate a transfer of operations by which operations performed at the first core are to be transferred to the second core. The operations transfer orchestrator orchestrates the transfer of operations in response to a notification from the cores switcher, and the operations are to be transferred before the first core is switched to inactive and after the second core is switched to active.

Example 6 includes the apparatus of example 1. The apparatus of example 6 further includes a workload orchestrator that compares a workload of the first core to a workload capacity of the second core. The comparison is used by the cores switcher to determine whether the second core has sufficient capacity for the workload of the first core before issuing a switch command. The cores switcher issues the switch command when the second core is determined to have sufficient capacity for the workload of the first core.

Example 7 includes the apparatus of example 1 and further includes a cores switchover configurer that configures the first core and the second core to switch between inactive and active states.
The configuring performed by the cores switchover configurer includes at least one of (i) configuring the first core and the second core to communicate with one another, (ii) configuring the first core and the second core to receive and respond to activation signals and inactivation signals, or (iii) configuring memories associated with the first core and the second core to have the same addresses.

Example 8 includes the apparatus of example 1. In example 8, the sensor includes a plurality of sensors, and the operational parameters include at least one of temperature, time, or core usage.

Example 9 includes the apparatus of example 8. In example 9, the plurality of sensors includes at least one of a core usage sensor, a digital thermal sensor, or a timer. In example 9, the digital thermal sensor senses a junction temperature associated with the plurality of cores, the core usage sensor measures at least one of respective workloads of respective ones of the cores or respective operating speeds of the respective ones of the cores, and the timer measures an amount of operating time of the cores.

Example 10 includes the apparatus of example 1 and further includes a subset selector to select at least two subsets of cores for switching.

Example 11 includes the apparatus of example 1 and further includes a time-series logger that generates a time-series log of data collected by one or more sensors. In example 11, the data is used to compare a first operational parameter of the first core with the first operational parameter of the second core, and the first operational parameter of the first core and the second core are sensed at a same time. In example 11, the comparison is used by the cores switcher to identify a time to switch the first core from active to inactive and to switch the second core from inactive to active.

Example 12 includes the apparatus of example 1. In example 12, the first subset of cores are active and the second subset of cores are inactive. Also, the cores switcher switches the first subset of cores to inactive and the second subset of cores to active after expiration of a timer. In example 12, the timer expires when a time equal to an expected lifespan of the plurality of cores has been reached.

Example 13 includes the apparatus of example 1. In example 13, the at least one operational parameter reflects an amount of quality degradation of the plurality of cores caused by one or more of a core temperature, a core operating voltage, a core operating frequency, and a core workload stress, and the quality degradation adversely affects the operating lifespan of the plurality of cores.

Example 14 includes the apparatus of example 1. In example 14, the cores switcher switches the first core from active to inactive and switches the second core from inactive to active by executing an algorithm that uses a combination of a first series of core usage values and a first series of core temperature values.

Example 15 includes at least one non-transitory computer readable medium having instructions that, when executed, cause at least one processor to at least select a policy, based on input information. The policy extends the operating lifespan of a microprocessor having a plurality of cores. In addition, the instructions cause the at least one processor to divide, based on the selected policy, the plurality of cores of the microprocessor into subsets of cores, including a first subset and a second subset.
The instructions also cause the at least one processor to monitor, based on the selected policy and sensed information received from one or more sensors, at least one operational parameter of the plurality of cores. The instructions further cause the at least one processor to switch a first core of the first subset of cores from active to inactive, and switch a second core of the second subset of cores from inactive to active based on the at least one operational parameter. The switch of the first core and the second core reduces an amount of degradation experienced by the first core and the second core.

Example 16 includes the at least one non-transitory computer readable medium of example 15. In example 16, the input information on which selection of the policy is based includes at least one of a user preference, a type of product in which the microprocessor is installed, an application in which the product is to operate, or an environment in which the product is to operate.

Example 17 includes the at least one non-transitory computer readable medium of example 15. In example 17, at least some of the plurality of cores are inactive and at least some of the plurality of cores are active during operation of the microprocessor.

Example 18 includes the at least one non-transitory computer readable medium of example 15. In example 18, the plurality of cores are rated to operate in a first environment at a first temperature, and are operating in a second environment at a second temperature. In example 18, the second temperature accelerates degradation of silicon of the plurality of cores and is higher than the first temperature. In example 18, switching the state of any of the plurality of cores from an inactive to an active state and vice versa limits an amount of time that any of the plurality of cores operate in the second environment.

Example 19 includes the at least one non-transitory computer readable medium of example 15. In example 19, the instructions further cause the at least one processor to orchestrate the transfer of operations in response to a notification that a switch is to occur. The operations are transferred before the first core is switched to inactive and after the second core is switched to active.

Example 20 includes the at least one non-transitory computer readable medium of example 15, wherein the instructions further cause the at least one processor to compare a workload of the first core to a workload capacity of the second core, the comparison to be used to determine whether the second core has sufficient capacity for the workload of the first core before a switch is to occur. In example 20, the switch occurs when the second core is determined to have sufficient capacity for the workload of the first core.

Example 21 includes the at least one non-transitory computer readable medium of example 15, wherein the instructions further cause the at least one processor to configure the first core and the second core to switch between inactive and active states. In example 21, configuring of the first and second cores includes at least one of (i) configuring the first core and the second core to communicate with one another, (ii) configuring the first core and the second core to receive and respond to activation signals and inactivation signals, or (iii) configuring memories associated with the first core and the second core to have the same addresses.

Example 22 includes the at least one non-transitory computer readable medium of example 15.
In example 22, the sensor includes a plurality of sensors, and the operational parameters include at least one of temperature, time, or core usage.

Example 23 includes the at least one non-transitory computer readable medium of example 22. In example 23, the plurality of sensors includes at least one of a core usage sensor, a digital thermal sensor, or a timer. In example 23, the digital thermal sensor senses a junction temperature associated with the plurality of cores, and the core usage sensor measures at least one of respective workloads of respective ones of the cores or respective operating speeds of the respective ones of the cores.

Example 24 includes the at least one non-transitory computer readable medium of example 15. In example 24, the sensor is one of a plurality of sensors and the instructions further cause the at least one processor to generate a time-series log of data collected by the plurality of sensors. The collected data is used to compare a first operational parameter of the first core with the first operational parameter of the second core. In example 24, the first operational parameter of the first core and the second core is sensed at a same time and the comparison is used to identify a time to switch the first core from active to inactive and to switch the second core from inactive to active.

Example 25 includes the at least one non-transitory computer readable medium of example 15. In example 25, the first subset of cores are active and the second subset of cores are inactive. In addition, the first subset of cores are switched to inactive and the second subset of cores are switched to active after a duration of time equal to an expected lifespan of the plurality of cores.

Example 26 includes the at least one non-transitory computer readable medium of example 15. In example 26, the at least one operational parameter reflects an amount of quality degradation of the plurality of cores caused by one or more of a core temperature, a core operating voltage, a core operating frequency, and a core workload stress. The quality degradation adversely affects the operating lifespan of the plurality of cores.

Example 27 includes the at least one non-transitory computer readable medium of example 15. In example 27, the instructions further cause the at least one processor to execute an algorithm that uses a combination of a first series of core usage values and a first series of core temperature values to determine when the first core is to be switched from active to inactive and the second core is to be switched from inactive to active.

Example 28 includes a method that includes selecting a policy, based on input information. The policy of example 28 extends the operating lifespan of a microprocessor having a plurality of cores. The method of example 28 also includes dividing, based on the selected policy, the plurality of cores of the microprocessor into subsets of cores, including a first subset and a second subset. Monitoring is performed, based on the selected policy and sensed information received from one or more sensors. At least one operational parameter of the plurality of cores is monitored. The method of example 28 further includes switching a first core of the first subset of cores from active to inactive, and switching a second core of the second subset of cores from inactive to active based on the at least one operational parameter.
In example 28, the switch of the first core and the second core reduces an amount of degradation experienced by the first core and the second core.

Example 29 includes the method of example 28. In example 29, the input information on which selection of the policy is based includes at least one of a user preference, a type of product in which the microprocessor is installed, an application in which the product is to operate, or an environment in which the product is to operate.

Example 30 includes the method of example 28. In example 30, at least some of the plurality of cores are inactive and at least some of the plurality of cores are active during operation of the microprocessor.

Example 31 includes the method of example 28. In example 31, the plurality of cores are rated to operate in a first environment at a first temperature and are operating in a second environment at a second temperature higher than the first temperature. Further, the second temperature accelerates degradation of silicon of the plurality of cores, and the switching of the plurality of cores from inactive to active and vice versa limits an amount of time that any of the plurality of cores operate in the second environment.

Example 32 includes the method of example 28 and further includes orchestrating the transfer of operations in response to a notification that a core switch is to occur. In example 32, the operations are to be transferred before the first core is switched to inactive and after the second core is switched to active.

Example 33 includes the method of example 28 and further includes comparing a workload of the first core to a workload capacity of the second core. The comparison is used to determine whether the second core has sufficient capacity for the workload of the first core before a switch is to occur. The switch occurs when the second core is determined to have sufficient capacity for the workload of the first core.

Example 34 includes the method of example 28 and further includes configuring the first core and the second core to switch between inactive and active states. The configuring of the first and second cores includes at least one of (i) configuring the first core and the second core to communicate with one another, (ii) configuring the first core and the second core to receive and respond to activation signals and inactivation signals, or (iii) configuring memories associated with the first core and the second core to have the same addresses.

Example 35 includes the method of example 28. In example 35, the sensor includes a plurality of sensors, and the operational parameters include at least one of temperature, time, or core usage.

Example 36 includes the method of example 35. In example 36, the plurality of sensors includes at least one of a core usage sensor, a digital thermal sensor, or a timer. In addition, the digital thermal sensor senses a junction temperature associated with the plurality of cores, and the core usage sensor measures at least one of respective workloads of respective ones of the cores or respective operating speeds of the respective ones of the cores.

Example 37 includes the method of example 28. In example 37, the sensor is one of a plurality of sensors, and the method further includes generating a time-series log of data collected by the plurality of sensors.
In example 37, the data is used to compare a first operational parameter of a first core with the first operational parameter of the second core. The first operational parameter of the first core and the second core is sensed at a same time, and the comparison is used to identify a time to switch the first core from active to inactive and to switch the second core from inactive to active.Example 38 includes the method of example 28. In example 38, the first subset of cores are active and the second subset of cores are inactive and a first switch of the first subset of cores to inactive and a second switch of the second subset of cores to active occur after a duration of time equal to an expected lifespan of the plurality of cores.Example 39 includes the method of example 28. In example 39, the at least one operational parameter reflects an amount of quality degradation of the plurality of cores caused by one or more of a core temperature, a core operating voltage, a core operating frequency, and a core workload stress. In example 39, the quality degradation adversely affects the operating lifespan of the plurality of cores.Example 40 includes the method of example 28 and further includes executing an algorithm that uses a combination of a first series of core usage values and a first series of core temperature values to determine when the first core is to be switched from active to inactive and the second core is to be switched from inactive to active.Example 41 includes an apparatus having a means for selecting a policy based on input information. The policy extends an operating lifespan of a microprocessor having a plurality of cores. The apparatus also includes a means for dividing (or partitioning), based on the selected policy, the plurality of cores of the microprocessor into subsets of cores, including a first subset and a second subset. In example 41, a means for sensing monitors, based on the selected policy, at least one operational parameter of the plurality of cores. In addition, a means for switching cores switches a first core of the first subset of cores from active to inactive and switches a second core of the second subset of cores from inactive to active based on the at least one operational parameter. In Example 41, the core switches reduce an amount of degradation experienced by the first core and the second core.Example 42 includes the apparatus of example 41. In the apparatus of example 42, a means to orchestrate a transfer of operations orchestrates a transfer of operations by which operations performed at the first core are to be transferred to the second core. The transfer of operations is performed in response to a notification from the cores switcher and the operations are transferred before the first core is switched to inactive and after the second core is switched to active.Example 43 includes the apparatus of example 41. The apparatus of example 43 further includes a means to orchestrate a workload that compares a workload of the first core to a workload capacity of the second core. The comparison is used by the means to switch cores to determine whether the second core has sufficient capacity for the workload of the first core before issuing a switch command.
The means to switch cores issues the switch command when the second core is determined to have sufficient capacity for the workload of the first core.Example 44 includes the apparatus of example 41 and further includes a means to configure a cores switchover that configures the first core and the second core to switch between inactive and active states. The configuring performed by the cores switchover configurer includes at least one of (i) configuring the first core and the second core to communicate with one another, (ii) configuring the first core and the second core to receive and respond to activation signals and inactivation signals, or (iii) configuring memories associated with the first core and the second core to have the same addresses.Example 45 includes the apparatus of example 41. In example 45, the monitoring is performed by a sensor and the sensor includes a plurality of sensors. In example 45, the plurality of sensors includes at least one of a core usage sensor, a digital thermal sensor, or a timer. In example 45, the digital thermal sensor senses a junction temperature associated with the plurality of cores, the core usage sensor measures at least one of respective workloads of respective ones of the cores or respective operating speeds of the respective ones of the cores, and the timer measures an amount of operating time of the cores.Example 46 includes the apparatus of example 41 and further includes a means to log time-series data that generates a time-series log of data collected by one or more sensors. In example 46, the time-series log of data is used to compare a first operational parameter of a first core with the first operational parameter of the second core and the first operational parameter of the first core and the second core is sensed at a same time. In example 46, the comparison is used to identify a time to switch the first core from active to inactive and to switch the second core from inactive to active.Example 47 includes the apparatus of example 41. In example 47, the first subset of cores are active and the second subset of cores are inactive. Also, the means for core switching switches the first subset of cores to inactive and the second subset of cores to active after expiration of a timer. In example 47, the timer expires when a time equal to an expected lifespan of the plurality of cores has been reached.Example 48 includes the apparatus of example 41.
In example 48, the at least one operational parameter reflects an amount of quality degradation of the plurality of cores caused by one or more of a core temperature, a core operating voltage, a core operating frequency, and a core workload stress, and the quality degradation adversely affects the operating lifespan of the plurality of cores.Example 49 includes the apparatus of example 1 and further includes the apparatus of any one of examples 2-8 and 10-14.Example 50 includes the at least one non-transitory computer readable medium of example 15 and further includes the at least one non-transitory computer readable medium of any one of examples 16-22 and 24-27.Example 51 includes the method of example 28 and further includes the methods of any one of examples 29-35 and 37-40.Example 52 includes the apparatus of example 41 and further includes the apparatus of any one of examples 42-48.Example 53 is a computer readable medium having computer readable instructions that, when executed, cause at least one processor to perform the method of any one of examples 28-40.Example 54 is at least one machine-readable medium including instructions that, when executed by processing circuitry, cause the processing circuitry to perform operations to implement any of Examples 1-52.Example 55 is an apparatus comprising means to implement any of Examples 1-81.Example 56 is a system to implement any of Examples 1-81.Example 57 is a method to implement any of Examples 1-81.Example 58 is a multi-tier edge computing system, comprising a plurality of edge computing nodes provided among on-premise edge, network access edge, or near edge computing settings, the plurality of edge computing nodes configured to perform any of the methods of Examples 1-52.Example 59 is an edge computing system, comprising a plurality of edge computing nodes, each of the plurality of edge computing nodes configured to perform any of the methods of Examples 1-52.Example 60 is an edge computing node, operable as a server hosting the service and a plurality of additional services in an edge computing system, configured to perform any of the methods of Examples 1-52.Example 61 is an edge computing node, operable in a layer of an edge computing network as an aggregation node, network hub node, gateway node, or core data processing node, configured to perform any of the methods of Examples 1-52.Example 62 is an edge provisioning, orchestration, or management node, operable in an edge computing system, configured to implement any of the methods of Examples 1-52.Example 63 is an edge computing network, comprising networking and processing components configured to provide or operate a communications network, to enable an edge computing system to implement any of the methods of Examples 1-52.Example 64 is an access point, comprising networking and processing components configured to provide or operate a communications network, to enable an edge computing system to implement any of the methods of Examples 1-52.Example 65 is a base station, comprising networking and processing components configured to provide or operate a communications network, configured as an edge computing system to implement any of the methods of Examples 1-52.Example 66 is a road-side unit, comprising networking components configured to provide or operate a communications network, configured as an edge computing system to implement any of the methods of Examples 1-52.Example 67 is an on-premise server, operable in a private communications network distinct from
a public edge computing network, configured as an edge computing system to implement any of the methods of Examples 1-52.Example 68 is a 3GPP 4G/LTE mobile wireless communications system, comprising networking and processing components configured as an edge computing system to implement any of the methods of Examples 1-52.Example 69 is a 5G network mobile wireless communications system, comprising networking and processing components configured as an edge computing system to implement any of the methods of Examples 1-52.Example 70 is an edge computing system configured as an edge mesh, provided with a microservice cluster, a microservice cluster with sidecars, or linked microservice clusters with sidecars, configured to implement any of the methods of Examples 1-52.Example 71 is an edge computing system, comprising circuitry configured to implement services with one or more isolation environments provided among dedicated hardware, virtual machines, containers, or virtual machines on containers, the edge computing system configured to implement any of the methods of Examples 1-52.Example 72 is an edge computing system, comprising networking and processing components to communicate with a user equipment device, client computing device, provisioning device, or management device to implement any of the methods of Examples 1-52.Example 73 is networking hardware with network functions implemented thereupon, operable within an edge computing system, the network functions configured to implement any of the methods of Examples 1-52.Example 74 is acceleration hardware with acceleration functions implemented thereupon, operable in an edge computing system, the acceleration functions configured to implement any of the methods of Examples 1-52.Example 75 is storage hardware with storage capabilities implemented thereupon, operable in an edge computing system, the storage hardware configured to implement any of the methods of Examples 1-52.Example 76 is computation hardware with compute capabilities implemented thereupon, operable in an edge computing system, the computation hardware configured to implement any of the methods of Examples 1-52.Example 77 is an edge computing system configured to implement services with any of the methods of Examples 1-52, with the services relating to one or more of: compute offload, data caching, video processing, network function virtualization, radio access network management, augmented reality, virtual reality, autonomous driving, vehicle assistance, vehicle communications, industrial automation, retail services, manufacturing operations, smart buildings, energy management, internet of things operations, object detection, speech recognition, healthcare applications, gaming applications, or accelerated content processing.Example 78 is an apparatus of an edge computing system comprising: one or more processors and one or more computer-readable media comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform any of the methods of Examples 1-52.Example 79 is one or more computer-readable storage media comprising instructions to cause an electronic device of an edge computing system, upon execution of the instructions by one or more processors of the electronic device, to perform any of the methods of Examples 1-52.Example 80 is a computer program used in an edge computing system, the computer program comprising instructions, wherein execution of the program by a processing element in the edge computing system is 
to cause the processing element to perform any of the methods of Examples 1-52.Example 81 is an edge computing appliance device operating as a self-contained processing system, comprising a housing, case, or shell, network communication circuitry, storage memory circuitry, and processor circuitry adapted to perform any of the methods of Examples 1-52.Example 82 is an apparatus of an edge computing system comprising means to perform any of the methods of Examples 1-52.Example 83 is an apparatus of an edge computing system comprising logic, modules, or circuitry to perform any of the methods of Examples 1-52.Example 84 is an edge computing system, including respective edge processing devices and nodes to invoke or perform any of the operations of Examples 1-52, or other subject matter described herein.Example 85 is a client endpoint node, operable to invoke or perform the operations of any of Examples 1-52, or other subject matter described herein.Example 86 is an aggregation node, network hub node, gateway node, or core data processing node, within or coupled to an edge computing system, operable to invoke or perform the operations of any of Examples 1-52, or other subject matter described herein.Example 87 is an access point, base station, road-side unit, street-side unit, or on-premise unit, within or coupled to an edge computing system, operable to invoke or perform the operations of any of Examples 1-52, or other subject matter described herein.Example 88 is an edge provisioning node, service orchestration node, application orchestration node, or multi-tenant management node, within or coupled to an edge computing system, operable to invoke or perform the operations of any of Examples 1-52, or other subject matter described herein.Example 89 is an edge node operating an edge provisioning service, application or service orchestration service, virtual machine deployment, container deployment, function deployment, and compute management, within or coupled to an edge computing system, operable to invoke or perform the operations of any of Examples 1-52, or other subject matter described herein.Example 90 is an edge computing system including aspects of network functions, acceleration functions, acceleration hardware, storage hardware, or computation hardware resources, operable to invoke or perform the use cases discussed herein, with use of any of Examples 1-52, or other subject matter described herein.Example 91 is an edge computing system adapted for supporting client mobility, vehicle-to-vehicle (V2V), vehicle-to-everything (V2X), or vehicle-to-infrastructure (V2I) scenarios, and optionally operating according to European Telecommunications Standards Institute (ETSI) Multi-Access Edge Computing (MEC) specifications, operable to invoke or perform the use cases discussed herein, with use of any of Examples 1-52, or other subject matter described herein.Example 92 is an edge computing system adapted for mobile wireless communications, including configurations according to 3GPP 4G/LTE or 5G network capabilities, operable to invoke or perform the use cases discussed herein, with use of any of Examples 1-52, or other subject matter described herein.Example 93 is an edge computing node, operable in a layer of an edge computing network or edge computing system as an aggregation node, network hub node, gateway node, or core data processing node, operable in a close edge, local edge, enterprise edge, on-premise edge, near edge, middle edge, or far edge network layer, or operable in a set of nodes
having common latency, timing, or distance characteristics, operable to invoke or perform the use cases discussed herein, with use of any of Examples 1-52, or other subject matter described herein.Example 94 is networking hardware, acceleration hardware, storage hardware, or computation hardware, with capabilities implemented thereupon, operable in an edge computing system to invoke or perform the use cases discussed herein, with use of any of Examples 1-52, or other subject matter described herein.Example 95 is an apparatus of an edge computing system comprising: one or more processors and one or more computer-readable media comprising instructions that, when deployed and executed by the one or more processors, cause the one or more processors to invoke or perform the use cases discussed herein, with use of any of Examples 1-52, or other subject matter described herein.Example 96 is one or more computer-readable storage media comprising instructions to cause an electronic device of an edge computing system, upon execution of the instructions by one or more processors of the electronic device, to invoke or perform the use cases discussed herein, with use of any of Examples 1-52, or other subject matter described herein.Example 97 is an apparatus of an edge computing system comprising means, logic, modules, or circuitry to invoke or perform the use cases discussed herein, with the use of any of Examples 1-52, or other subject matter described herein.Although certain example methods, apparatus and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent.The following claims are hereby incorporated into this Detailed Description by this reference, with each claim standing on its own as a separate embodiment of the present disclosure. |
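Examples 27 and 40 above recite an algorithm that combines a series of core usage values with a series of core temperature values to decide when to swap an active core with an inactive spare. A minimal sketch of such a policy follows, assuming a toy wear model; the class name, the log structure, and every threshold are invented for illustration and are not the claimed implementation.

    # Hypothetical sketch of the core-rotation policy of Examples 27/40.
    # All identifiers and thresholds are invented for illustration.
    from collections import defaultdict

    class CoreRotator:
        def __init__(self, active, spare, wear_limit=1000.0):
            self.active = active          # id of the currently active core
            self.spare = spare            # id of the inactive spare core
            self.wear_limit = wear_limit  # cumulative wear that triggers a swap
            self.log = defaultdict(list)  # per-core time-series log (cf. Example 24)

        def sample(self, core, usage, temp_c):
            """Record one (usage, temperature) sample for a core."""
            self.log[core].append((usage, temp_c))

        def wear(self, core):
            # Toy degradation model: wear accumulates faster when a busy
            # core is also hot (usage in [0, 1], temperature in deg C).
            return sum(u * max(t - 50.0, 0.0) for u, t in self.log[core])

        def maybe_switch(self, spare_capacity, active_workload):
            # Swap only if accumulated wear is excessive AND the spare can
            # absorb the active core's workload (cf. Example 33).
            if (self.wear(self.active) >= self.wear_limit
                    and spare_capacity >= active_workload):
                self.active, self.spare = self.spare, self.active
                return True
            return False

In a real controller the samples would come from the digital thermal sensor and the core usage sensor of Example 23, and a swap would be preceded by the transfer of operations orchestrated in Example 32.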
A method for fabricating a semiconductor resistor in embedded FLASH memory applications is described. In the method a gate array (9) is formed on a semiconductor substrate. Isolation regions (70) are removed and the exposed silicon is implanted, forming diffused regions (180). The SAS so formed can be configured to function as a resistor element (240). |
We claim: 1. A method for forming a resistor in a semiconductor substrate comprising:providing a region of a first conductivity type in said semiconductor substrate; providing on said region of a first conductivity type, a plurality of substantially parallel wordlines that cross a plurality of substantially parallel isolation regions, said isolation regions containing an isolation material; implanting said region of a first conductivity type in said semiconductor substrate containing said plurality of substantially parallel wordlines and said plurality of substantially parallel isolation regions with a first species; etching said isolation material from all regions of said plurality of substantially parallel isolation regions not covered by said plurality of substantially parallel wordlines; and implanting said region of a first conductivity type in said semiconductor substrate containing said plurality of substantially parallel wordlines and said plurality of substantially parallel isolation regions with said first species to form a resistor. 2. The method of claim 1 wherein said first conductivity type is p-type.3. The method of claim 2 wherein said first species is selected from the group consisting of P, As, Sb and Bi.4. The method of claim 1 wherein said first conductivity type is n-type.5. The method of claim 4 wherein said first species is selected from the group consisting of B, Ga, BF2, and In. |
CROSS-REFERENCE TO RELATED PATENT/PATENT APPLICATIONSThe following commonly assigned patent/patent applications are hereby incorporated herein by reference:

Patent No./Serial No.    Filing Date    TI Case No.
60/068,543               12/23/97       TI-23167
60/117,774               1/29/99        TI-28594
*                        *              TI-

FIELD OF THE INVENTIONThis invention relates generally to the field of electronic devices and more particularly to a method for forming a general purpose self aligned source resistor in embedded flash memory applications.BACKGROUND OF THE INVENTIONElectronic equipment such as televisions, telephones, radios, and computers are often constructed using semiconductor components, such as integrated circuits, memory chips, and the like. The semiconductor components are typically constructed from various microelectronic devices fabricated on a semiconductor substrate, such as transistors, capacitors, diodes, resistors, and the like. Each microelectronic device is typically a pattern of conductor, semiconductor, and insulator regions formed on the semiconductor substrate.The density of the microelectronic devices on the semiconductor substrate may be increased by decreasing spacing between each of the various semiconductor devices. The decrease in spacing allows a larger number of such microelectronic devices to be formed on the semiconductor substrate. As a result, the computing power and speed of the semiconductor component may be greatly improved.FLASH memory, also known as FLASH EPROM or FLASH EEPROM, is a semiconductor component that is formed from an array of memory cells with each cell having a floating gate transistor. Data can be written to each cell within the array, but the data is erased in blocks of cells. Each cell is a floating gate transistor having a source, drain, floating gate, and a control gate. The floating gate uses channel hot electrons for writing from the drain and uses Fowler-Nordheim tunneling for erasure from the source. The sources of each floating gate in each cell in a row of the array are connected to form a source line.Embedding FLASH memory circuits in CMOS logic circuits is finding increasing usage in building more complex integrated circuits such as digital signal processors for applications such as hard disk controllers. In addition to CMOS transistors and FLASH memory cells, it is necessary to have other components such as resistors as a part of the integrated circuits. These resistors are usually formed using polycrystalline silicon which is commonly used to form the gate electrode. This polycrystalline (poly) resistor can be formed during the gate poly process where it is defined at the gate level and protected from silicidation by using an extra mask to prevent the sidewall dry etch from etching the nitride from the top of the resistor. Since the use of this extra mask is not desirable, attempts are being made to eliminate this mask. In applications where FLASH memory is used, this mask can be eliminated by using the poly-1 layer in the floating gate transistor to form the resistor. The sheet resistance of the poly-1 film is typically about 1500-2500 ohm/sq. For high frequency applications however, the capacitances associated with the poly-1 resistor and the floating/control gate structure make the resistance frequency dependent and therefore unsuitable for use.
The instant invention addresses this problem and describes a method for fabricating a general purpose self aligned source resistor in embedded FLASH applications.SUMMARY OF THE INVENTIONThe instant invention provides a method of forming a resistor in an integrated circuit containing FLASH memory cells. The method comprises: providing a region of a first conductivity type in said semiconductor substrate; providing on said region of a first conductivity type, a plurality of substantially parallel wordlines that cross a plurality of substantially parallel isolation regions, said isolation regions containing an isolation material; implanting said region of a first conductivity type in said semiconductor substrate containing said plurality of substantially parallel wordlines and said plurality of substantially parallel isolation regions with a first species; etching said isolation material from all regions of said plurality of substantially parallel isolation regions not covered by said plurality of substantially parallel wordlines; and implanting said region of a first conductivity type in said semiconductor substrate containing said plurality of substantially parallel wordlines and said plurality of substantially parallel isolation regions with said first species to form a resistor.BRIEF DESCRIPTION OF THE DRAWINGSFor a more complete understanding of the present invention and the advantages thereof, reference is now made to the following description taken in conjunction with the accompanying drawings, wherein like reference numerals represent like features, in which:FIG. 1 is an electrical schematic diagram, in partial block diagram form, of an electronic device which includes a memory cell array in accordance with the prior art.FIG. 2 is a perspective view of a portion of the memory cell array of FIG. 1.FIG. 3 is an enlarged plan view of a portion of the memory cell of FIG. 1.FIG. 4 is a cross-sectional view of a FLASH memory cell poly-1 resistor in accordance with the prior art.FIG. 5 is an equivalent circuit of the distributed resistor and capacitor network of the FLASH poly-1 resistor illustrated in FIG. 4.FIG. 6 is a perspective view of a portion of the resistor according to an embodiment of the instant invention.FIGS. 7A and 7B are cross-sections of the resistor through planes of FIG. 6 according to an embodiment of the instant invention.FIGS. 8A and 8B are cross-sections of the resistor through planes of FIG. 6 according to an embodiment of the instant invention.FIG. 9 is a cross-section of the completed resistor through a plane in FIG. 6 according to an embodiment of the instant invention.FIG. 10 is a circuit element diagram of an embodiment of the instant invention.DETAILED DESCRIPTION OF THE INVENTIONFIGS. 1 through 8 illustrate various aspects of an electronic device and the method of forming a self aligned source resistor in embedded FLASH applications.FIG. 1 is an electrical schematic diagram, in partial block form, of an electronic device 8 in accordance with the prior art. The electronic device 8 includes a wordline decoder 22, a column decoder 28, a Read/Write/Erase control circuit 32 for controlling the decoders 22 and 28, and a memory cell array 9. The memory cell array 9 comprises a number of memory cells 10 arranged in rows and columns.
Each memory cell 10 includes a floating-gate transistor 11 having a source 12, a drain 14, a floating gate 16, and a control gate 18.Each of the control gates 18 in a row of cells 10 is coupled to a wordline 20, and each of the wordlines 20 is coupled to the wordline decoder 22. Each of the sources 12 in a row of cells 10 is coupled to a source line 24. Each of the drains 14 in a column of cells 10 is coupled to a drain-column line 26. Each of the source lines 24 is coupled by a column line 27 to the column decoder 28 and each of the drain-column lines 26 is coupled to the column decoder 28.In a write or program mode, the wordline decoder 22 may function, in response to wordline address signals on lines 30 and to signals from the Read/Write/Erase control circuit 32, to place a preselected first programming voltage VRW, approximately +12V, on a selected wordline 20, which is coupled to the control gate 18 of a selected cell 10. Column decoder 28 also functions to place a second programming voltage VPP, approximately +5 to +10V, on a selected drain-column line 26 and, therefore, the drain 14 of the selected cell 10. Source lines 24 are coupled to a reference potential VSS through line 27. All of the deselected drain-column lines 26 are coupled to the reference potential VSS. These programming voltages create a high current (drain 14 to source 12) condition in the channel of the selected memory cell 10, resulting in the generation near the drain-channel junction of channel-hot electrons and avalanche breakdown electrons that are injected across the gate oxide to the floating gate 16 of the selected cell 10. The programming time is selected to be sufficiently long to program the floating gate 16 with a negative program charge of approximately -2V to -6V with respect to the gate region.The floating gate 16 of the selected cell 10 is charged with channel-hot electrons during programming, and the electrons in turn render the source-drain path under the floating gate 16 of the selected cell 10 nonconductive, a state which is read as a "zero" bit. Deselected cells 10 have source-drain paths under the floating gate 16 that remain conductive, and those cells 10 are read as "one" bits.In a flash erase mode, the column decoder 28 functions to leave all drain-column lines 26 floating. The wordline decoder 22 functions to connect all of the word lines 20 to the reference potential VSS. The column decoder 28 also functions to apply a high positive voltage VEE, approximately +10V to +15V, to all of the source lines 24. These erasing voltages create sufficient field strength across the tunneling area between floating gate 16 and the semiconductor substrate to generate a Fowler-Nordheim tunnel current that transfers charge from the floating gate 16, thereby erasing the memory cell 10.In the read mode, the wordline decoder 22 functions, in response to wordline address signals on lines 30 and to signals from Read/Write/Erase control circuit 32, to apply a preselected positive voltage VCC, approximately +5V, to the selected wordline 20, and to apply a low voltage, ground or VSS, to deselected wordlines 20. The column decoder 28 functions to apply a preselected positive voltage VSEN, approximately +1.0V, to at least the selected drain-column line 26 and to apply a low voltage to the source line 24. The column decoder 28 also functions, in response to a signal on an address line 34, to connect the selected drain-column line 26 of the selected cell 10 to the DATA OUT terminal.
The conductive or non-conductive state of the cell 10 coupled to the selected drain-column line 26 and the selected wordline 20 is detected by a sense amplifier (not shown) coupled to the DATA OUT terminal. The read voltages applied to the memory array 9 are sufficient to determine channel impedance for a selected cell 10 but are insufficient to create either hot-carrier injection or Fowler-Nordheim tunneling that would disturb the charge condition of any floating gate 16.For convenience, a table of read, write and erase voltages is given in TABLE 1 below:

TABLE 1
                         Read     Write      Flash Erase
Selected Wordline        5 V      12 V       0 V (All)
Deselected Word lines    0 V      0 V        -
Selected Drain Line      1.0 V    5-10 V     Float (All)
Deselected Drain Lines   Float    0 V        -
Source lines             0 V      About 0 V  10-15 V (All)

FIGS. 2 and 3 illustrate the structure of a portion of the memory array 9 illustrated in FIG. 1. Specifically, FIG. 2 is a perspective view of a portion of the memory array 9 and FIG. 3 is an enlarged plan view of a portion of memory array 9. As discussed previously, the memory array 9 includes a number of memory cells 10 arranged in rows and columns.As best illustrated in FIG. 2, each row of memory cells 10 is formed from a continuous stack structure 50 that includes a number of memory cells 10. The floating gate transistor 11 within each memory cell 10 is formed on a semiconductor substrate 52 and separated from each adjacent memory cell 10 in the continuous stack structure 50 by a shallow trench isolation structure 70. The semiconductor substrate 52 includes a source region 60 and a drain region 62 separated by a channel region 64. The floating gate transistor 11 is generally fabricated by forming a gate stack 54 outwardly from a portion of the channel region 64 and doping a portion of the source region 60 and a portion of the drain region 62 adjacent the gate stack 54 to form a source 12 and a drain 14, respectively.The semiconductor substrate 52 may comprise a wafer formed from a single-crystalline silicon material. The semiconductor substrate 52 may include an epitaxial layer, a recrystallized semiconductor material, a polycrystalline semiconductor material, or any other suitable semiconductor material.The regions 60, 62, and 64 are substantially parallel and may extend the length of the memory array 9. The channel region 64 of the semiconductor substrate 52 is doped with impurities to form a semiconductive region. The channel region 64 of the semiconductor substrate 52 may be doped with p-type or n-type impurities to change the operating characteristics of a microelectronic device (not shown) formed on the doped semiconductor substrate 52.As best illustrated in FIG. 2, the floating gate transistors 11 in each continuous stack structure 50 in the memory array 9 are electrically isolated from one another by the shallow trench isolation (STI) structure 70. The STI structures 70 are generally formed prior to the fabrication of the gate stack 54 on the semiconductor substrate 52. The STI structures 70 are formed by etching a trench 72 into the semiconductor substrate 52. The trench 72 is generally on the order of 0.2 to 8.5 µm in depth.
The trench 72 comprises a first sidewall surface 74 and a second sidewall surface 76.The trench 72 is then filled with a trench dielectric material 78 to electrically isolate the active regions of the semiconductor substrate 52 between the STI structures 70. The trench dielectric material 78 may comprise silicon dioxide, silicon nitride, or a combination thereof. The trench dielectric material 78 is generally etched back, followed by a deglaze process to clean the surface of the semiconductor substrate 52 prior to fabrication of the gate stack 54.The continuous stack structure 50 is then fabricated outwardly from the semiconductor substrate 52 and the filled trench 72. The continuous stack structure 50 is formed from a series of gate stacks 54 fabricated outwardly from the channel region 64 of the semiconductor substrate 52. As best shown in FIG. 2, the gate stack 54 comprises a gate insulator 56, the floating gate 16, an interstitial dielectric 58, and the control gate 18. The gate insulator 56 is formed outwardly from the semiconductor substrate 52, and the floating gate 16 is formed outwardly from the gate insulator 56. The interstitial dielectric 58 is formed between the floating gate 16 and the control gate 18 and operates to electrically isolate the floating gate 16 from the control gate 18.The gate insulator 56 is generally grown on the surface of the semiconductor substrate 52. The gate insulator 56 may comprise oxide or nitride on the order of 25 to 500 Å in thickness.The floating gate 16 and the control gate 18 are conductive regions. The gates 16 and 18 generally comprise a polycrystalline silicon material (polysilicon) that is in-situ doped with impurities to render the polysilicon conductive. The thicknesses of the gates 16 and 18 are generally on the order of 100 nanometers and 300 nanometers, respectively.The interstitial dielectric 58 may comprise oxide, nitride, or a heterostructure formed by alternating layers of oxide and nitride. The interstitial dielectric 58 is on the order of 5 to 40 nanometers in thickness.As best illustrated in FIG. 3, the control gate 18 of each floating gate transistor 11 is electrically coupled to the control gates 18 of adjacent floating gate transistors 11 within adjacent continuous stack structures 50 to form a continuous conductive path. In the context of the memory array 9 discussed with reference to FIG. 1, the continuous line of control gates 18 operates as the wordline 20 of the memory array 9.In contrast, the floating gate 16 of each floating gate transistor 11 is not electrically coupled to the floating gate 16 of any other floating gate transistor 11. Thus, the floating gate 16 in each floating gate transistor 11 is electrically isolated from all other floating gates 16. The floating gates 16 in adjacent memory cells 10 are isolated by a gap 80. The gap 80 is generally etched into a layer of conductive material (not shown) that is used to form the floating gate 16.As shown in FIG. 2, the source 12 and the drain 14 of the floating gate transistor 11 are formed within a portion of the source region 60 and the drain region 62 of the semiconductor substrate 52, respectively. The source 12 and the drain 14 comprise portions of the semiconductor substrate 52 into which impurities have been introduced to form a conductive region. The drains 14 of each floating gate transistor 11 in a column are electrically coupled to each other by a number of drain contacts 82 to form the drain-column line 26 (not shown).
The drain column line 26 is generally formed outwardly from the wordline 20. As will be discussed in greater detail below, the source 12 of each floating gate transistor 11 forms a portion of the source line 24 and is formed during the fabrication of the source line 24.As best illustrated in FIG. 2, a portion of the source line 24 forms the source 12 of the floating gate transistor 11. The source line 24 connects the sources 12 to each other by a continuous conductive region formed within the semiconductor substrate 52 proximate the source region 60. As best illustrated in FIG. 2, the source line 24 crosses the STI structures 70 in the source region 60 of the semiconductor substrate 52 below the STI structures 70. In contrast, the STI structures 70 electrically isolate the adjacent floating gate transistors 11 in the channel region 64 of the semiconductor substrate.The source line 24, and correspondingly the sources 12 of each floating gate transistor 11, is generally fabricated after at least a portion of the gate stack 54 has been fabricated. The gate stack 54 is pattern masked (not shown) using conventional photolithography techniques, leaving the semiconductor substrate 52, proximate the source region 60, exposed. The exposed region of the semiconductor substrate 52 is then etched to remove the trench dielectric material 78 in the exposed region. The etching process to remove the trench dielectric material 78 may be an anisotropic etching process. Anisotropic etching may be performed using a reactive ion etch (RIE) process using carbon-fluorine based gases such as CF4 or CHF3.The semiconductor substrate 52 proximate the source region 60, including that portion of the semiconductor substrate 52 forming the trench 72, is doped with impurities to render the region conductive. The conductive region is then thermally treated to diffuse the impurities into the source region 60 of the semiconductor substrate 52. The diffused conductive region forms both the source 12 of each floating gate transistor 11 as well as the source line 24. The source region 60 of the semiconductor substrate 52 is generally doped by an implantation process in which dopant ions are impacted into the semiconductor substrate 52. After formation of the source line 24, and as a part of subsequent processing, the trench 72 is refilled with a dielectric material.Shown in FIG. 3 is the source line contact 90. In typical FLASH memory layout design there is one source contact for every sixteen drain contacts. Because of the spacing of the source line the word line 20 has to bend 95 around the source contact 90. In addition for high density designs, the width of the drain region 62 is larger than the width of the source region 60. This results in a non-uniform spacing of the wordlines 20.Shown in FIG. 4 is a floating gate (poly-1) resistor 112 fabricated in accordance with the prior art. This resistor may be part of an integrated circuit that contains embedded FLASH memory circuits among CMOS circuits. In this application the resistor is typically formed on large areas of the isolation oxide 70. This isolation oxide can be formed using a LOCOS or a STI process. To form the poly-1 resistor structure 112, the FLASH cell gate stack is formed as described above. The openings 100 are formed over the floating gate during the gate etch process of the CMOS circuits which occurs after floating gate stack formation. 
A layer of photoresist is formed on the circuit and patterned to define the CMOS gate structures and the openings 100 over the floating gate. During the polysilicon etch process used to define the CMOS gate structures, the openings 100 will be formed. Sidewall nitride structures 90 are formed to isolate the contact structures 110 and 115 from the control gate 20. The contacts 110 and 115 will provide electrical contact to the poly-1 resistor structure formed using the poly-1 (floating gate) layer 16. As shown in FIG. 4, the control gate 20 and the poly-1 (floating gate) 16 are separated by the interpoly dielectric 58. As described earlier, this interpoly dielectric can comprise layers of silicon oxide and silicon nitride. During the sequence of processes used to complete the integrated circuit, a low resistivity layer of titanium, tungsten or cobalt silicide 120 will be formed on the control gate 20. This layer typically has a sheet resistivity of about 1-8 ohm/sq.Shown in FIG. 5 is the equivalent circuit for the poly-1 resistor structure illustrated in FIG. 4. The circuit comprises a distributed resistor-capacitor network 125. The line of resistors 135 is due to the resistivity of the poly-1 (floating gate) layer 16. The line of resistors 130 is due to the presence of the control gate layer 20 and the silicide layer 120. The capacitors 140 are due to the presence of the interpoly dielectric layer 58. For high frequency applications such as wireless communications, the reactance of the capacitors 140 (which is inversely proportional to the frequency) will decrease. This will cause the low resistivity silicide layer 120 to have a larger contribution to the overall resistance of the structure, resulting in a decrease in the resistance between contacts 110 and 115. This decrease in resistance makes the structure of FIG. 4 unsuitable for high frequency applications.Shown in FIG. 6 is an embodiment for a layout of a self aligned source (SAS) resistor according to the instant invention. The isolation regions 70 comprise dielectric material as described earlier. The word lines 20 are polysilicon lines as described earlier. The SAS resistor shown in FIG. 6 can be fabricated in either an n-type region or a p-type region in the semiconductor substrate 52. The SAS resistor shown in FIG. 6 is fabricated simultaneously with the FLASH memory cell using identical processes. These processes include the floating gate 16, the gate insulator 56, the control gate 18, the word line 20, the isolation regions 70, and the source and drain regions 12 and 14 respectively. The implanted regions in FIG. 6 formed during the source and drain implants are denoted as 150. If the SAS resistor is fabricated in a p-type region then an n-type species will be implanted to form 150. In an embodiment, the n-type species can be selected from the group of P, As, Sb and Bi either singly or in combination. If the SAS resistor is fabricated in an n-type region then a p-type species will be implanted to form 150. In an embodiment, the p-type species can be selected from the group of B, Ga, BF2, and In either singly or in combination. If the SAS resistor is fabricated in an n-type region, then the source 12 and drain 14 region implants will not be used. In this embodiment the implant used to form the source and drain regions for the PMOS transistor can be used to form 150.
In the embodiment where the SAS resistor is fabricated in a p-type region of the semiconductor substrate 52, the process sequence for the formation of the FLASH memory circuit and the SAS resistor circuit is identical up to the self aligned source etch process described earlier for the FLASH memory circuit. In the FLASH memory circuit, a photoresist film is patterned and a continuous source line is formed by removing the necessary isolation regions 70. In the fabrication of the SAS resistor, the entire resistor circuit is exposed to the SAS etch process. The SAS etch process is an oxide etch and will remove the isolation oxide 70 from all the trench regions not covered or protected by the word lines. This is shown in FIGS. 7A and 7B, which show the cross-sections through 160 and 170 in FIG. 6. During the subsequent SAS implant process, which introduces n-type dopants to form the source line 24 in the FLASH memory cell, resistor regions 180 illustrated in FIGS. 8A and 8B are formed. FIGS. 8A and 8B are cross sections taken through 160 and 170 in FIG. 6. The implanted regions 150 and 180 now form a continuous doped diffusion region that will form the SAS resistor. As shown in FIG. 6, regions 150 and 180 form a serpentine structure that can be electrically contacted at various points determined by the required resistor value. Since the physical value of the resistor is proportional to its length, larger resistor values will require that the electrical contacts to the serpentine structure be placed further apart. Electrical contacts to the SAS resistor can be formed in any number of ways currently used in the art. The serpentine layout shown in FIG. 6 is an embodiment of the instant invention. The instant invention is not, however, to be limited to this structure. In other embodiments of the instant invention different shapes or arrangements of the resistor can be used.Shown in FIG. 9 is the cross-section through 230 in FIG. 6 for the completed SAS resistor structure. The dielectric film 190 is formed as a part of the sidewall formation process for the FLASH and CMOS circuits. The dielectric film 190 can be a material selected from the group consisting of silicon nitride, silicon oxide, silicon oxynitride, or any suitable dielectric material. The film 190 remaining after the sidewall etch will fill up the SAS resistor, blocking the formation of a silicide film. This allows the elimination of a masking step. The dielectric film 200 will be formed as a part of the planarization process before formation of the metal layers of the integrated circuit. In an embodiment of the instant invention, the layer 200 is a polysilicon/Metal1 dielectric (PMD). This PMD layer 200 may comprise doped silicon oxide where the dopant may be phosphorus (phosphosilicate glass) or both phosphorus and boron (borophosphosilicate glass). Both layers 190 and 200 are usually formed using a chemical vapor deposition process.Another embodiment of the instant invention is shown in FIG. 10. A FLASH memory circuit as shown in FIG. 2 uses a SAS process to form a continuous source line 60. As described earlier, this continuous source line 60 comprises the source regions 12 of the various memory cells linked by an implanted region 24 formed after removing selected portions of the isolation regions 70. This continuous source line will have some resistance associated with it and can be represented by a lumped resistor element 240 in FIG. 10. The ends of the lumped resistor element in FIG.
10, 242 and 244 represent first and second terminals on the continuous source line. These terminals are points along the continuous source line where electrical connection is made to the source line 60. By connecting the continuous source line to an external circuit element 250 through these terminals as shown in FIG. 10, the continuous source line will form a discrete semiconductor resistor element. It should be noted that the external circuit element 250 can be outside of the memory array. In embedded FLASH applications this external circuit element can be a part of the CMOS portion of the integrated circuit. The connection of the continuous source line 60 to an external circuit element 250 can be performed in any number of ways currently used in the art.While this invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications and combinations of the illustrative embodiments, as well as other embodiments of the invention will be apparent to persons skilled in the art upon reference to the description. It is therefore intended that the appended claims encompass any such modifications or embodiments. |
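Since the description above notes that the value of the serpentine SAS resistor is proportional to its length, a quick sizing check is possible: a diffused resistor's value equals its sheet resistance multiplied by the number of squares (length over width) between the contact points. The sketch below uses an assumed sheet resistance for the SAS diffusion, since the text quotes 1500-2500 ohm/sq only for the poly-1 film; all numbers are placeholders.

    # Back-of-the-envelope sizing for a serpentine diffused resistor,
    # R = Rs * (L / W). The 100 ohm/sq figure is an assumed placeholder
    # for the SAS diffusion, not a value given in the text.
    def diffused_resistance(sheet_res_ohm_per_sq, length_um, width_um):
        """Resistance of a diffused segment, R = Rs * L / W."""
        return sheet_res_ohm_per_sq * (length_um / width_um)

    r = diffused_resistance(sheet_res_ohm_per_sq=100.0, length_um=20.0, width_um=0.5)
    print(f"R = {r:.0f} ohm")  # 40 squares -> R = 4000 ohm

Doubling the spacing between the electrical contacts along the serpentine doubles the square count and hence doubles the resistance, which is why larger resistor values require contacts placed farther apart.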
Methods, systems, and devices for memory cells with asymmetrical electrode interfaces are described. A memory cell with asymmetrical electrode interfaces may mitigate shorts in adjacent word lines, which may be leveraged for accurately reading a stored value of the memory cell. The memory device may include a self-selecting memory component with a top surface area in contact with a top electrode and a bottom surface area in contact with a bottom electrode, where the top surface area in contact with the top electrode is a different size than the bottom surface area in contact with the bottom electrode. |
CLAIMSWhat is claimed is:1. A memory device, comprising:a top electrode;a bottom electrode; anda self-selecting memory component having a first area of a top surface in contact with the top electrode and a second area of a bottom surface opposite the top surface, wherein the first area in contact with the top electrode is a different size than the second area in contact with the bottom electrode.2. The memory device of claim 1, further comprising:a dielectric liner, formed in a first direction, in contact with two side surfaces of the self-selecting memory component along the first direction.3. The memory device of claim 1, wherein a dielectric liner is in contact with the top surface of the self-selecting memory component and two side surfaces of the top electrode in a first direction.4. The memory device of claim 1, wherein a dielectric liner is in contact with the top surface of the self-selecting memory component, two side surfaces of the top electrode, and two side surfaces of a digit line that extends in a second direction.5. The memory device of claim 1, wherein the top surface of the self-selecting memory component has an area equal to an area of the bottom surface of the self-selecting memory component.6. The memory device of claim 1, wherein the first area of the top surface in electrical contact with the top electrode is less than the second area of the bottom surface in electrical contact with the bottom electrode.7. The memory device of claim 1, wherein a length of the top electrode is less than a length of the self-selecting memory component in a first direction.8. The memory device of claim 1, wherein a length of the top electrode is less than a length of the bottom electrode in a first direction.9. The memory device of claim 1, wherein a length of the top electrode and a dielectric liner is equal to a length of the self-selecting memory component in a first direction.10. The memory device of claim 1, wherein a dielectric liner is in contact with two side surfaces of the self-selecting memory component and two side surfaces of the top electrode in a first direction.11. The memory device of claim 1, wherein a dielectric liner is in contact with two side surfaces of the self-selecting memory component, two side surfaces of the top electrode, and two side surfaces of a digit line that extends in a second direction.12. The memory device of claim 1, wherein the first area of the top surface in electrical contact with the top electrode is greater than the second area of the bottom surface in electrical contact with the bottom electrode.13. The memory device of claim 1, wherein an area of the bottom surface of the self-selecting memory component is greater than an area of the top surface of the bottom electrode.14. The memory device of claim 1, wherein the bottom electrode tapers from a bottom surface to a top surface opposite the bottom surface.15. The memory device of claim 1, wherein a length between inner surfaces of a dielectric liner in contact with two side surfaces of the top electrode is greater than a length of the bottom electrode in a first direction, wherein a length between inner surfaces of a dielectric liner in contact with two side surfaces of the self-selecting memory component is greater than the length of the bottom electrode in the first direction.16.
A method of forming a memory device, comprising:forming a stack comprising a bottom electrode, a top electrode, and a self-selecting memory component between the bottom electrode and the top electrode;etching the top electrode to a first length in a first direction based at least in part on forming the stack;depositing a dielectric liner in contact with two side surfaces of the top electrode based at least in part on etching the top electrode; and etching the stack to form a line comprising the bottom electrode, the top electrode, the self-selecting memory component, and the dielectric liner, the line having a second length in the first direction greater than the first length of the top electrode.17. The method of claim 16, further comprising:depositing a hard mask material on a top surface of the top electrode, wherein a portion of the hard mask material is removed when the line is formed.18. The method of claim 16, wherein the dielectric liner is deposited using an in-situ technique or an ex-situ technique.19. The method of claim 16, further comprising:etching the stack to form the line inside a first chamber, wherein depositing the dielectric liner occurs inside the first chamber.20. The method of claim 16, further comprising:etching the stack to form the line inside a first chamber; andtransferring the stack from the first chamber to a second chamber, wherein depositing the dielectric liner occurs inside the second chamber.21. The method of claim 16, further comprising:etching the stack to form a pillar comprising the bottom electrode, the top electrode, the self-selecting memory component, and the dielectric liner, the pillar having a second length in a second direction greater than the first length of the top electrode.22. A method of forming a memory device, comprising:forming a stack comprising a bottom electrode, a top electrode, and a self-selecting memory component between the bottom electrode and the top electrode;etching the top electrode based at least in part on forming the stack;etching from a top surface to a bottom surface of the self-selecting memory component based at least in part on etching the top electrode;depositing a dielectric liner in contact with two side surfaces of the top electrode and two side surfaces of the self-selecting memory component based at least in part on etching from the top surface to the bottom surface of the self-selecting memory component;etching the stack to form a pillar comprising the bottom electrode, the top electrode, the self-selecting memory component, and the dielectric liner; andforming a taper from a bottom surface to a top surface opposite the bottom surface of the bottom electrode.23. The method of claim 22, wherein the dielectric liner is deposited using an in-situ technique or an ex-situ technique.24. The method of claim 22, further comprising:etching the stack to form the pillar inside a first chamber, wherein depositing the dielectric liner occurs inside the first chamber.25. The method of claim 22, further comprising:etching the stack to form the pillar inside a first chamber; and transferring the stack from the first chamber to a second chamber, wherein depositing the dielectric liner occurs inside the second chamber. |
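Claim 1 above turns on a simple geometric asymmetry: the area of the top-electrode interface differs from the area of the bottom-electrode interface. A short sketch of that geometry follows, consistent with claim 9 (top electrode plus dielectric liner spanning the width of the memory component); every dimension is invented for illustration and none is taken from the specification.

    # Hypothetical interface-area check for the asymmetry of claim 1.
    # Dimensions are in nanometers and are invented for illustration.
    def interface_areas(cell_wl_nm, cell_dl_nm, liner_nm):
        # Bottom interface: full footprint of the self-selecting component.
        bottom = cell_wl_nm * cell_dl_nm
        # Top interface: footprint narrowed by a dielectric liner on both
        # side surfaces of the top electrode (cf. claims 3 and 9).
        top = (cell_wl_nm - 2 * liner_nm) * cell_dl_nm
        return top, bottom

    top, bottom = interface_areas(cell_wl_nm=40.0, cell_dl_nm=40.0, liner_nm=5.0)
    print(top, bottom, top / bottom)  # 1200.0 1600.0 0.75 -> top < bottom

With these placeholder numbers the top contact area is 75% of the bottom contact area, matching the case of claim 6 in which the top interface is smaller than the bottom interface.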
MEMORY CELLS WITH ASYMMETRICAL ELECTRODE INTERFACESCROSS-REFERENCE[0001] The present Application for Patent claims priority to U.S. Application No. 15/893,108 by Pirovano et al., entitled “Memory Cells with Asymmetrical Electrode Interfaces,” filed February 9, 2018, assigned to the assignee hereof, which is expressly incorporated by reference in its entirety.BACKGROUND[0002] The following relates generally to self-selecting memory cells and more specifically to memory cells with asymmetrical electrode interfaces.[0003] Memory devices are widely used to store information in various electronic devices such as computers, wireless communication devices, cameras, digital displays, and the like. Information is stored by programming different states of a memory device. For example, binary devices have two states, often denoted by a logic “1” or a logic “0.” In other systems, more than two states may be stored. To access the stored information, a component of the electronic device may read, or sense, the stored state in the memory device. To store information, a component of the electronic device may write, or program, the state in the memory device.[0004] Multiple types of memory devices exist, including magnetic hard disks, random access memory (RAM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), ferroelectric RAM (FeRAM), magnetic RAM (MRAM), resistive RAM (RRAM), read only memory (ROM), flash memory, phase change memory (PCM), and others. Memory devices may be volatile or non-volatile. Non-volatile memory, e.g., FeRAM, may maintain its stored logic state for extended periods of time even in the absence of an external power source. Volatile memory devices, e.g., DRAM, may lose their stored state over time unless they are periodically refreshed by an external power source. Improving memory devices may include increasing memory cell density, increasing read/write speeds, increasing reliability, increasing data retention, reducing power consumption, or reducing manufacturing costs, among other metrics.[0005] Some types of memory devices may use variations in resistance across a cell to program and sense different logic states. For example, in a self-selecting memory cell a logic state may be stored based on a distribution of charges and/or ions and/or elements within the memory cell. The manner in which a cell is programmed may affect the distribution of various materials that compose the cell, which may affect the ion migration of the cell, which, in turn, may affect a threshold voltage of the cell. The threshold voltage may be related to or indicative of the logic state of the cell. Small variations in threshold voltages between different logic states may therefore affect the accuracy with which cells may be read.BRIEF DESCRIPTION OF THE DRAWINGS[0006] FIG. 1 illustrates an example memory array that supports memory cells with asymmetrical electrode interfaces in accordance with examples of the present disclosure. [0007] FIG. 2 illustrates an example memory array that supports memory cells with asymmetrical electrode interfaces in accordance with examples of the present disclosure.[0008] FIG. 3 illustrates example cross-sectional views of a memory device that supports memory cells with asymmetrical electrode interfaces in accordance with examples of the present disclosure. [0009] FIG. 4 illustrates example cross-sectional views of a memory device that supports memory cells with asymmetrical electrode interfaces in accordance with examples of the present disclosure.[0010] FIG.
[0010] FIG. 5 illustrates example cross-sectional views of a memory device that supports memory cells with asymmetrical electrode interfaces in accordance with examples of the present disclosure.

[0011] FIG. 6 illustrates example cross-sectional views of a memory device that supports memory cells with asymmetrical electrode interfaces in accordance with examples of the present disclosure.

[0012] FIG. 7 illustrates an example process flow for forming a memory device that supports memory cells with asymmetrical electrode interfaces in accordance with examples of the present disclosure.

[0013] FIG. 8 illustrates an example process flow for forming a memory device that supports memory cells with asymmetrical electrode interfaces in accordance with examples of the present disclosure.

[0014] FIG. 9 illustrates an example memory array that supports memory cells with asymmetrical electrode interfaces in accordance with examples of the present disclosure.

[0015] FIG. 10 illustrates a device, including a memory array, that supports memory cells with asymmetrical electrode interfaces in accordance with examples of the present disclosure.

[0016] FIG. 11 is a flowchart that illustrates a method or methods for forming a memory device that supports memory cells with asymmetrical electrode interfaces in accordance with examples of the present disclosure.

[0017] FIG. 12 is a flowchart that illustrates a method or methods for forming a memory device that supports memory cells with asymmetrical electrode interfaces in accordance with examples of the present disclosure.

[0018] FIG. 13 illustrates example memory cells that support memory cells with asymmetrical electrode interfaces in accordance with examples of the present disclosure.

DETAILED DESCRIPTION

[0019] A self-selecting memory cell with asymmetrical electrode interfaces may affect a distribution of ions in a memory cell. As the distribution of ions in the memory cell changes, it may affect a threshold voltage of the memory cell and may be used to store different programmed states. For example, applying a particular programming pulse may cause ions to crowd at or near a particular electrode of a cell. Asymmetrical electrode interfaces may enhance the sensing window for the cell, which may result in more accurate sensing compared to cells with symmetric electrode interfaces. When a self-selecting memory cell is programmed, elements within the cell separate, causing ion migration. Ions may migrate towards a particular electrode, depending on the polarity of the programming pulse applied to the cell.

[0020] Increased sensing reliability in a self-selecting memory device may be realized using asymmetrical electrode interfaces with a memory storage element of the self-selecting memory cell. Each memory cell may be configured such that, when programmed, ions within the cell migrate towards one electrode. Due to asymmetrical electrode interfaces with the self-selecting memory component, a greater density of ions may build up at or near one electrode. This may create a region with a high density of ions and a region with a low density of ions within the cell. Depending on the polarity of the programming pulse applied to the memory cell, this concentration of ions may represent a logic “1” or logic “0” state.

[0021] A self-selecting memory device with asymmetrical electrode interfaces may be formed by varying a size of a bottom electrode and/or a top electrode in contact with the self-selecting memory component.
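By way of illustration only (the dimensions below are hypothetical and are not taken from the disclosure), the resulting interface asymmetry can be expressed as a simple product of the layer lengths in the word line and digit line directions:

```python
# Illustrative only; the dimensions are hypothetical and not taken from the
# disclosure. Each electrode interface is modeled as a rectangle whose area
# is the product of the layer lengths in the word line and digit line
# directions, so shrinking the top electrode in either direction shrinks the
# top interface relative to the bottom interface.
def interface_area(length_wl_nm: float, length_dl_nm: float) -> float:
    """Area of a rectangular electrode interface, in nm^2."""
    return length_wl_nm * length_dl_nm

top_interface = interface_area(20.0, 20.0)     # top electrode narrowed by the liner
bottom_interface = interface_area(28.0, 28.0)  # bottom electrode at full width

assert top_interface < bottom_interface  # asymmetrical electrode interfaces
```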
The area of a top surface of the self-selecting memory component contacting the top electrode may be less than the area of a bottom surface of the self-selecting memory component contacting the bottom electrode from the perspective of the word line and/or digit line directions. In some examples, a dielectric liner may be in contact with side surfaces of the top electrode in the word line and digit line directions to achieve the asymmetrical electrode interfaces.

[0022] Alternatively, the area of a top surface of the self-selecting memory component contacting the top electrode may be greater than the area of a bottom surface of the self-selecting memory component contacting the bottom electrode from the perspective of the word line and digit line directions. In some examples, a dielectric liner may be in contact with side surfaces of the top electrode and the self-selecting memory component in the word line and digit line directions to achieve the asymmetrical electrode interfaces. In some examples, a dielectric liner may be in contact with side surfaces of the top electrode and the self-selecting memory component in the word line direction to achieve the asymmetrical electrode interfaces.

[0023] A self-selecting memory device with asymmetrical electrode interfaces may be formed using examples of etching techniques. For example, the self-selecting memory device may be partially etched in the word line direction through the top electrode. A dielectric liner may then be deposited to be in contact with side surfaces of the top electrode using in-situ or ex-situ techniques. The dielectric liner may serve as a spacer for subsequent etching steps in order to allow for wider dimensions of the self-selecting memory component than dimensions of the top electrode. Therefore, the area of electrode interface between the top electrode and the self-selecting memory component may be less than the area of electrode interface between the bottom electrode and the self-selecting memory component.

[0024] Alternatively, a self-selecting memory device with asymmetrical electrode interfaces may be formed using other examples of etching techniques. For example, the self-selecting memory device may be partially etched in the word line direction through the top electrode and the self-selecting memory component. A dielectric liner may then be deposited to be in contact with side surfaces of the top electrode and the self-selecting memory component using in-situ or ex-situ techniques. The dielectric liner may serve as a spacer for subsequent etching steps in order to allow for wider dimensions of the self-selecting memory component than dimensions of the bottom electrode. Therefore, the area of electrode interface between the top electrode and the self-selecting memory component may be greater than the area of electrode interface between the bottom electrode and the self-selecting memory component.

[0025] Features of the disclosure introduced above are further described below in the context of a memory array. Self-selecting memory cells with asymmetrical electrode interfaces are illustrated and depicted in the context of a cross-point architecture. These and other features of the disclosure are further illustrated by and described with reference to apparatus diagrams, system diagrams, and flowcharts that relate to memory cells with asymmetrical electrode interfaces.
[0026] FIG. 1 illustrates an example memory array 100 that supports memory cells with asymmetrical electrode interfaces in accordance with various examples of the present disclosure. Memory array 100 may also be referred to as an electronic memory apparatus. Memory array 100 includes memory cells 105 that are programmable to store different states. Each memory cell 105 may be programmable to store two states, denoted a logic “0” and a logic “1.” In some cases, memory cell 105 is configured to store more than two logic states.

[0027] A memory cell 105 may include a chalcogenide material, which may be referred to as a self-selecting memory component, that has a variable and configurable threshold voltage or electrical resistance, or both, that is representative of the logic states. In some examples, a threshold voltage of a cell changes depending on a polarity of a pulse used to program the cell. For example, a self-selecting memory cell programmed with one polarity may have certain resistive properties and thus one threshold voltage. And that self-selecting memory cell may be programmed with a different polarity that may result in different resistive properties of the cell and thus a different threshold voltage. As discussed above, when a self-selecting memory cell is programmed, elements within the cell may separate, causing redistribution of charges and/or ions and/or elements within the memory cell 105. As used herein, the term “ions” may relate to any of these possibilities. Ions may migrate towards a particular electrode, depending on the given cell’s polarity. For example, in a self-selecting memory cell, ions may migrate towards the negative electrode. The memory cell may then be read by applying a voltage across the cell to sense which electrode ions have migrated towards. In some examples, cations may migrate towards one of the electrodes while anions may migrate towards the other of the electrodes.

[0028] In some examples, cell programming may exploit the crystalline structure or atomic configuration to achieve different logic states. For example, a material with a crystalline or an amorphous atomic configuration may have different electrical resistances. A crystalline state may have a low electrical resistance and may, in some cases, be referred to as the “set” state. An amorphous state may have a high electrical resistance and may be referred to as the “reset” state. A voltage applied to the memory cell 105 may thus result in different currents depending on whether the material is in a crystalline or an amorphous state, and the magnitude of the resulting current may be used to determine the logic state stored by memory cell 105.

[0029] In some cases, a material in the amorphous, or reset, state may have a threshold voltage associated with it — that is, current flows after the threshold voltage is exceeded. Thus, if the applied voltage is less than the threshold voltage, no current may flow if the memory element is in the reset state; if the memory element is in the set state, it may not have a threshold voltage (i.e., a threshold voltage of zero) and, thus, a current may flow in response to the applied voltage. In other cases, the memory cell 105 may have a combination of crystalline and amorphous areas that may result in intermediate resistances, which may correspond to different logic states (i.e., states other than logic 1 or logic 0) and may allow memory cells 105 to store more than two different logic states.
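As a rough sketch of this threshold-based readout (the voltages below are hypothetical; the disclosure does not specify values), a demarcation read voltage separates the states by whether current flows:

```python
# Simplified sketch of threshold-based readout; the voltages are hypothetical
# and not taken from the disclosure. A reset cell conducts only above its
# threshold, a set cell conducts at any applied voltage (threshold ~0 V), and
# intermediate states would add further thresholds between the two.
THRESHOLDS_V = {"set": 0.0, "intermediate": 1.2, "reset": 2.0}

def conducts(state: str, applied_v: float) -> bool:
    """Current flows once the applied voltage exceeds the state's threshold."""
    return applied_v > THRESHOLDS_V[state]

read_v = 1.5  # demarcation voltage chosen between the state thresholds
for state in THRESHOLDS_V:
    print(f"{state}: conducts at {read_v} V -> {conducts(state, read_v)}")
```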
As discussed below, the logic state of a memory cell 105 may be set by heating, including melting, the memory element.

[0030] Memory array 100 may be a three-dimensional (3D) memory array, where two-dimensional (2D) memory arrays are formed on top of one another. This may increase the number of memory cells that may be formed on a single die or substrate as compared with 2D arrays, which in turn may reduce production costs or increase the performance of the memory array, or both. According to the example depicted in FIG. 1, memory array 100 includes two levels of memory cells 105 and may thus be considered a three-dimensional memory array; however, the number of levels is not limited to two. Each level may be aligned or positioned so that memory cells 105 may be approximately aligned with one another across each level, forming a memory cell stack 145.

[0031] Each row of memory cells 105 is connected to an access line 110 and an access line 115. Access lines 110 and access lines 115 may also be known as word lines 110 and bit lines 115, respectively. Bit lines 115 may also be known as digit lines 115. References to word lines and bit lines, or their analogues, are interchangeable without loss of understanding or operation. Word lines 110 and bit lines 115 may be substantially perpendicular to one another to create an array. The two memory cells 105 in a memory cell stack 145 may share a common conductive line such as a digit line 115. That is, a digit line 115 may be in electronic communication with the bottom electrode of the upper memory cell 105 and the top electrode of the lower memory cell 105. Other configurations may be possible; for example, memory cell 105 may include asymmetrical electrode interfaces with the memory storage element.

[0032] In general, one memory cell 105 may be located at the intersection of two conductive lines such as a word line 110 and a digit line 115. This intersection may be referred to as a memory cell’s address. A target memory cell 105 may be a memory cell 105 located at the intersection of an energized word line 110 and digit line 115; that is, a word line 110 and digit line 115 may be energized in order to read or write a memory cell 105 at their intersection. Other memory cells 105 that are in electronic communication with (e.g., connected to) the same word line 110 or digit line 115 may be referred to as untargeted memory cells 105.

[0033] As discussed above, electrodes may be coupled to a memory cell 105 and a word line 110 or a digit line 115. The term electrode may refer to an electrical conductor, and in some cases, may be employed as an electrical contact to a memory cell 105. An electrode may include a trace, wire, conductive line, conductive layer, or the like that provides a conductive path between elements or components of memory array 100.

[0034] Operations such as reading and writing may be performed on memory cells 105 by activating or selecting a word line 110 and digit line 115, which may include applying a voltage or a current to the respective line. Word lines 110 and bit lines 115 may be made of conductive materials, such as metals (e.g., copper (Cu), aluminum (Al), gold (Au), tungsten (W), titanium (Ti), etc.), metal alloys, carbon, conductively-doped semiconductors, or other conductive materials, alloys, or compounds.
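A minimal sketch of the cross-point addressing described above follows; the function and array sizes are illustrative only and are not part of the disclosure:

```python
# Minimal sketch of cross-point addressing (names and sizes are illustrative,
# not from the disclosure): a cell is targeted by energizing one word line and
# one digit line; every other cell on either of those lines is untargeted.
def classify_cells(num_wl, num_dl, target_wl, target_dl):
    """Return the target address and the untargeted cells sharing its lines."""
    target = (target_wl, target_dl)
    untargeted = [(wl, dl) for wl in range(num_wl) for dl in range(num_dl)
                  if (wl == target_wl) != (dl == target_dl)]  # exactly one shared line
    return target, untargeted

target, untargeted = classify_cells(num_wl=4, num_dl=4, target_wl=1, target_dl=2)
print("target:", target)                      # (1, 2)
print("untargeted count:", len(untargeted))   # 6 cells share exactly one line
```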
Upon selecting a memory cell 105, a migration of, for example, ions may be leveraged to set a logic state of the cell.

[0035] To read the cell, a voltage may be applied across memory cell 105 and the resulting current or the threshold voltage at which current begins to flow may be representative of a logic “1” or a logic “0” state. The crowding of ions at one or the other ends of the self-selecting memory component may affect the resistivity and/or the threshold voltage, resulting in greater distinctions in cell response between logic states.

[0036] Accessing memory cells 105 may be controlled through a row decoder 120 and a column decoder 130. For example, a row decoder 120 may receive a row address from the memory controller 140 and activate the appropriate word line 110 based on the received row address. Similarly, a column decoder 130 receives a column address from the memory controller 140 and activates the appropriate digit line 115. Thus, by activating a word line 110 and a digit line 115, a memory cell 105 may be accessed.

[0037] Upon accessing, a memory cell 105 may be read, or sensed, by sense component 125. For example, sense component 125 may be configured to determine the stored logic state of memory cell 105 based on a signal generated by accessing memory cell 105. The signal may include a voltage or electrical current, and sense component 125 may include voltage sense amplifiers, current sense amplifiers, or both. For example, a voltage may be applied to a memory cell 105 (using the corresponding word line 110 and digit line 115) and the magnitude of the resulting current may depend on the electrical resistance of the memory cell 105. Likewise, a current may be applied to a memory cell 105 and the magnitude of the voltage to create the current may depend on the electrical resistance of the memory cell 105. Sense component 125 may include various transistors or amplifiers in order to detect and amplify a signal, which may be referred to as latching. The detected logic state of memory cell 105 may then be output as output 135. In some cases, sense component 125 may be a part of column decoder 130 or row decoder 120. Or, sense component 125 may be connected to or in electronic communication with column decoder 130 or row decoder 120.

[0038] A memory cell 105 may be programmed, or written, by similarly activating the relevant word line 110 and digit line 115 — i.e., a logic value may be stored in the memory cell 105. Column decoder 130 or row decoder 120 may accept data, for example input/output 135, to be written to the memory cells 105. In the case of phase change memory or self-selecting memory, a memory cell 105 may be written by heating the self-selecting memory component, for example, by passing a current through the self-selecting memory component. Depending on the logic state written to memory cell 105 — e.g., logic “1” or logic “0” — ions may crowd at or near a particular electrode. For example, dependent on the polarity of memory cell 105, ion crowding at or near a first electrode may result in a first threshold voltage representative of a logic “1” state and ion crowding at or near a second electrode may result in a second threshold voltage, different from the first, representative of a logic “0” state. The first threshold voltage and second threshold voltage may, for example, be determined during a read operation performed in a predetermined polarity.
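The polarity-dependent behavior described above can be sketched as follows; the threshold values and function names are hypothetical placeholders, not disclosed parameters:

```python
# Hypothetical model (illustrative values only) of the polarity-dependent
# write described above: the programming polarity sets where ions crowd, and
# the crowding location sets the threshold voltage seen by a read performed
# in one predetermined polarity.
def write(polarity: str) -> str:
    """Ions drift toward one electrode depending on the programming polarity."""
    return "near_top_electrode" if polarity == "positive" else "near_bottom_electrode"

def read_threshold_v(ion_location: str) -> float:
    # Assumed, not disclosed, values; asymmetric interfaces widen this gap.
    return 1.0 if ion_location == "near_top_electrode" else 2.2

for polarity, logic in (("positive", "1"), ("negative", "0")):
    print(f"logic {logic}: ~{read_threshold_v(write(polarity))} V threshold")
```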
The difference between the first and second threshold voltages may be more pronounced in memory cells with asymmetrical electrode interfaces, including those described with reference to FIGs. 3-8.

[0039] In some memory architectures, accessing the memory cell 105 may degrade or destroy the stored logic state and re-write or refresh operations may be performed to return the original logic state to memory cell 105. In DRAM, for example, the logic-storing capacitor may be partially or completely discharged during a sense operation, corrupting the stored logic state. So the logic state may be re-written after a sense operation. Additionally, activating a single word line 110 may result in the discharge of all memory cells in the row; thus, all memory cells 105 in the row may need to be re-written. But in non-volatile memory, such as PCM and/or self-selecting memory, accessing the memory cell 105 may not destroy the logic state and, thus, the memory cell 105 may not require re-writing after accessing.

[0040] Some memory architectures, including DRAM, may lose their stored state over time unless they are periodically refreshed by an external power source. For example, a charged capacitor may become discharged over time through leakage currents, resulting in the loss of the stored information. The refresh rate of these so-called volatile memory devices may be relatively high, e.g., tens of refresh operations per second for DRAM, which may result in significant power consumption. With increasingly larger memory arrays, increased power consumption may inhibit the deployment or operation of memory arrays (e.g., power supplies, heat generation, material limits, etc.), especially for mobile devices that rely on a finite power source, such as a battery. As discussed below, non-volatile PCM and/or self-selecting memory cells may have beneficial properties that may result in improved performance relative to other memory architectures. For example, PCM and/or self-selecting memory may offer comparable read/write speeds as DRAM but may be non-volatile and allow for increased cell density.

[0041] The memory controller 140 may control the operation (read, write, re-write, refresh, discharge, etc.) of memory cells 105 through the various components, for example, row decoder 120, column decoder 130, and sense component 125. In some cases, one or more of the row decoder 120, column decoder 130, and sense component 125 may be co-located with the memory controller 140. Memory controller 140 may generate row and column address signals in order to activate the desired word line 110 and digit line 115. Memory controller 140 may also generate and control various voltages or currents used during the operation of memory array 100. For example, it may apply discharge voltages to a word line 110 or digit line 115 after accessing one or more memory cells 105.

[0042] In general, the amplitude, shape, or duration of an applied voltage or current discussed herein may be adjusted or varied and may be different for the various operations discussed in operating memory array 100. Furthermore, one, multiple, or all memory cells 105 within memory array 100 may be accessed simultaneously; for example, multiple or all cells of memory array 100 may be accessed simultaneously during a reset operation in which all memory cells 105, or a group of memory cells 105, are set to a single logic state.
[0043] FIG. 2 illustrates an example memory array 200 that supports reading and writing non-volatile memory cells and programming enhancement in memory cells in accordance with various examples of the present disclosure. Memory array 200 may be an example of memory array 100 described with reference to FIG. 1.

[0044] Memory array 200 may include memory cell 105-a, memory cell 105-b, word line 110-a, and digit line 115-a, which may be examples of a memory cell 105, word line 110, and digit line 115, as described with reference to FIG. 1. Memory cell 105-a may include electrode 205 (e.g., top electrode), electrode 210 (e.g., a bottom electrode), and self-selecting memory component 220. The logic state of memory cell 105-a may be based on at least one characteristic of self-selecting memory component 220. Memory cell 105-b may include a top electrode, bottom electrode, and self-selecting memory component similar to memory cell 105-a. In some cases, a 3D memory array may be formed by stacking multiple memory arrays 200 on one another. The two stacked arrays may, in some examples, have common conductive lines so each level may share word line 110-a or digit line 115-a. Memory cell 105-a may depict a target memory cell — i.e., a target of a sensing operation, as described elsewhere herein.

[0045] The architecture of memory array 200 may be referred to as a cross-point architecture. It may also be referred to as a pillar structure. For example, as shown in FIG. 2, a pillar may be in contact with a first conductive line (e.g., access line such as word line 110-a) and a second conductive line (e.g., access line such as digit line 115-a). The pillar may comprise memory cell 105-a, where memory cell 105-a includes a first electrode (e.g., top electrode 205), self-selecting memory component 220, and a second electrode (e.g., bottom electrode 210). Memory cell 105-a may have asymmetrical electrode interfaces (including those described with reference to FIGs. 3-8). The asymmetrical electrode interfaces may cause ion crowding at the top electrode 205 or bottom electrode 210, depending on the polarity of memory cell 105-a. Ion crowding at top electrode 205 or bottom electrode 210 may allow for more-accurate sensing of memory cell 105-a, as described above. In addition, the asymmetrical electrode interfaces may mitigate shorts between adjacent word lines.

[0046] The cross-point or pillar architecture depicted in FIG. 2 may offer relatively high-density data storage with lower production costs compared to other memory architectures. For example, the cross-point architecture may have memory cells with a reduced area and thus an increased memory cell density compared to other architectures. For example, the architecture may have a 4F2 memory cell area, where F is the smallest feature size, compared to other architectures with a 6F2 memory cell area, such as those with a three-terminal selection component. For example, DRAM may use a transistor, which is a three-terminal device, as the selection component for each memory cell and may have a larger memory cell area compared to the pillar architecture.

[0047] In some examples, memory array 200 may be operated using a positive voltage source and the magnitude of an intermediary voltage is between the magnitude of the positive voltage source and a virtual ground. In some examples, both bit line access voltage and word line access voltage are maintained at an intermediary voltage prior to an access operation of memory cell 105-a.
And during an access operation, bit line access voltage may be increased (e.g., to a positive supply rail) while word line access voltage may be simultaneously decreased (e.g., to a virtual ground), generating a net voltage across memory cell 105-a. The threshold voltage at which current begins to flow through memory cell 105-a as a result of applying a voltage across memory cell 105-a may be a function of ion migration towards top electrode 205 or bottom electrode 210, which in turn may vary with the shape of self-selecting memory component 220 and the asymmetrical electrode interfaces between self-selecting memory component 220 and top electrode 205 and bottom electrode 210.

[0048] Self-selecting memory component 220 may, in some cases, be connected in series between a first conductive line and a second conductive line, for example, between word line 110-a and digit line 115-a. For example, as depicted in FIG. 2, self-selecting memory component 220 may be located between top electrode 205 and bottom electrode 210; thus, self-selecting memory component 220 may be located in series between digit line 115-a and word line 110-a. Other configurations are possible. As mentioned above, self-selecting memory component 220 may have a threshold voltage such that a current flows through self-selecting memory component 220 when the threshold voltage is met or exceeded. The threshold voltage may depend on the programming of memory cell 105-a and the asymmetrical electrode interfaces between self-selecting memory component 220 and top electrode 205 and bottom electrode 210.

[0049] Self-selecting memory component 220 may be arranged in a series configuration between the word line 110-a and digit line 115-a. Self-selecting memory component 220 may include a chalcogenide glass comprising selenium. In some examples, self-selecting memory component 220 comprises a composition of at least one of selenium, arsenic (As), tellurium (Te), silicon (Si), germanium (Ge), or antimony (Sb). When a voltage is applied across the self-selecting memory component 220 (or when there is a voltage difference between top electrode 205 and bottom electrode 210), ions may migrate toward one or the other electrode. Self-selecting memory component 220 may also serve as a selector device. This type of memory architecture may be referred to as self-selecting memory.

[0050] FIG. 3 illustrates example cross-sectional views 300-a and 300-b of a memory device 302 that supports memory cells with asymmetrical electrode interfaces in accordance with examples of the present disclosure. Self-selecting memory component 220-a may have asymmetric electrode interfaces with top electrode 205-a and bottom electrode 210-a in a word line direction (e.g., first direction) and/or digit line direction (e.g., second direction). For example, a length of the top electrode 205-a may be less than a length of the bottom electrode 210-a, thereby causing the top electrode interface with the self-selecting memory component 220-a to be smaller than the bottom electrode interface with the self-selecting memory component 220-a. Top electrode 205-a may be coupled to digit line 115-b and bottom electrode 210-a may be coupled to word line 110-b.

[0051] Self-selecting memory component 220-a includes top surface 310 and bottom surface 315 opposite the top surface 310. Self-selecting memory component 220-a may also include length 340 in the word line direction and length 360 in the digit line direction.
Length 340 and length 360 may determine the dimensions and area of top surface 310 and bottom surface 315. In some cases, length 340 may be equal when measured along top surface 310 and bottom surface 315 in the word line direction. That is, the cross-section of self-selecting memory component 220-a may be a rectangle in the word line direction and illustrate a straight profile. In some cases, length 360 may be equal when measured along top surface 310 and bottom surface 315 in the digit line direction. That is, the cross-section of self-selecting memory component 220-a may be a rectangle in the digit line direction and illustrate a straight profile. The area of top surface 310 and the area of bottom surface 315 may also be equal.

[0052] In some cases, length 340 may be unequal when measured along top surface 310 and bottom surface 315 in the word line direction. That is, the cross-section of self-selecting memory component 220-a may be a trapezoid or an inverted trapezoid and illustrate a curved or slanted geometric profile (e.g., a tapered profile or a stepped profile). In some cases, length 360 may be unequal when measured along top surface 310 and bottom surface 315 in the digit line direction. That is, the cross-section of self-selecting memory component 220-a may be a trapezoid or an inverted trapezoid in the digit line direction and illustrate a tapered profile. The area of top surface 310 and the area of bottom surface 315 may also be unequal.

[0053] Self-selecting memory component 220-a includes top surface 310 in contact with top electrode 205-a. In some cases, the area of contact between top electrode 205-a and top surface 310 of self-selecting memory component 220-a may be an electrode interface. In some cases, an asymmetrical electrode interface may be present between self-selecting memory component 220-a and top electrode 205-a and bottom electrode 210-a. Top electrode 205-a may include length 335 in the word line direction and length 355 in the digit line direction. Length 335 and length 355 may determine the dimensions and area of the top surface and bottom surface of top electrode 205-a. In some cases, length 335 may be equal when measured along the top surface and bottom surface of top electrode 205-a in the word line direction. That is, the cross-section of top electrode 205-a may be a rectangle in the word line direction and illustrate a straight profile. In some cases, length 355 may be equal when measured along the top surface and bottom surface of top electrode 205-a in the digit line direction. That is, the cross-section of top electrode 205-a may be a rectangle in the digit line direction and illustrate a straight profile. The area of the top surface and the area of the bottom surface of top electrode 205-a may also be equal.

[0054] In some cases, length 335 may be unequal when measured along the top surface and bottom surface of top electrode 205-a in the word line direction. That is, the cross-section of top electrode 205-a may be a trapezoid or an inverted trapezoid and illustrate a curved or slanted geometric profile (e.g., a tapered profile or a stepped profile). In some cases, length 355 may be unequal when measured along the top surface and bottom surface of top electrode 205-a in the digit line direction. That is, the cross-section of top electrode 205-a may be a trapezoid or an inverted trapezoid in the digit line direction and illustrate a tapered profile.
The area of the top surface and the area of the bottom surface of top electrode 205-a may also be unequal.

[0055] In some cases, length 335 of top electrode 205-a may be less than length 340 of self-selecting memory component 220-a in the word line direction. In other examples, length 355 of top electrode 205-a may be less than length 360 of self-selecting memory component 220-a in the digit line direction. That is, top electrode 205-a may be smaller than self-selecting memory component 220-a. Such a configuration of the top electrode 205-a affects the size of the interface between the top electrode 205-a and the self-selecting memory component 220-a. The area of the interface may be less than the area of the top surface 310 of the self-selecting memory component 220-a.

[0056] From the perspective of the word line, a dielectric liner 305 may be in contact with one or more surfaces of top electrode 205-a and self-selecting memory component 220-a. For example, dielectric liner 305 may be in contact with side surface 320 and side surface 325 of top electrode 205-a. Dielectric liner 305 may also be in contact with top surface 310 of self-selecting memory component 220-a. For example, the dielectric liner 305 may be in contact with portions of the top surface 310 that are not in contact with the top electrode 205-a. In some examples, dielectric liner 305 may be in contact with side surface 320, side surface 325, top surface 310, or a combination thereof. Dielectric liner 305 may be a dielectric material compatible with the material of the self-selecting memory component 220-a. For example, dielectric liner 305 may be an electrically neutral material.

[0057] Dielectric liner 305 may be disposed along one or more surfaces of memory device 302 to create space between the dimension of top electrode 205-a and the dimension of self-selecting memory component 220-a. For example, length 330 may include length 335 of top electrode 205-a and dielectric liner 305 in contact with side surface 320 and side surface 325. In some cases, length 330 may be greater than length 335 of top electrode 205-a. In some examples, length 330 may be equal to length 340 of self-selecting memory component 220-a from the perspective of the word line.

[0058] In some examples, length 330 may vary depending on the length of dielectric liner 305 in contact with top surface 310 of self-selecting memory component 220-a. For example, an amount of dielectric liner 305 in contact with side surface 320 of top electrode 205-a and top surface 310 of self-selecting memory component 220-a may be different than an amount of dielectric liner 305 in contact with side surface 325 of top electrode 205-a and top surface 310 of self-selecting memory component 220-a. That is, the amount of dielectric liner 305 in contact with side surface 320 of top electrode 205-a and top surface 310 of self-selecting memory component 220-a may be greater than the amount of dielectric liner 305 in contact with side surface 325 of top electrode 205-a and top surface 310 of self-selecting memory component 220-a.
Alternatively, the amount of dielectric liner 305 in contact with side surface 320 of top electrode 205-a and top surface 310 of self-selecting memory component 220-a may be less than the amount of dielectric liner 305 in contact with side surface 325 of top electrode 205-a and top surface 310 of self-selecting memory component 220-a.

[0059] From the perspective of the digit line, dielectric liner 305 may be in contact with side surface 380 and side surface 385 of top electrode 205-a. Additionally, dielectric liner 305 may be in contact with side surface 370 and side surface 375 of digit line 115-b in the digit line direction. Dielectric liner 305 may also be in contact with top surface 310 of self-selecting memory component 220-a. Dielectric liner 305 may also be in contact with side surface 370, side surface 380, side surface 375, side surface 385, top surface 310, or a combination thereof. Length 350 may include length 355 of top electrode 205-a and dielectric liner 305 in contact with side surface 380 and side surface 385. In some cases, length 350 may be greater than length 355 of top electrode 205-a. In some examples, length 350 may be equal to length 360 of self-selecting memory component 220-a from the perspective of the digit line.

[0060] Length 350 may vary depending on the length of dielectric liner 305 in contact with top surface 310 of self-selecting memory component 220-a. For example, an amount of dielectric liner 305 in contact with side surface 380 of top electrode 205-a, side surface 370 of digit line 115-b, and top surface 310 of self-selecting memory component 220-a may be different than an amount of dielectric liner 305 in contact with side surface 385 of top electrode 205-a, side surface 375 of digit line 115-b, and top surface 310 of self-selecting memory component 220-a. That is, the amount of dielectric liner 305 in contact with side surface 380 of top electrode 205-a, side surface 370 of digit line 115-b, and top surface 310 of self-selecting memory component 220-a may be greater than the amount of dielectric liner 305 in contact with side surface 385 of top electrode 205-a, side surface 375 of digit line 115-b, and top surface 310 of self-selecting memory component 220-a. Alternatively, the amount of dielectric liner 305 in contact with side surface 380 of top electrode 205-a, side surface 370 of digit line 115-b, and top surface 310 of self-selecting memory component 220-a may be less than the amount of dielectric liner 305 in contact with side surface 385 of top electrode 205-a, side surface 375 of digit line 115-b, and top surface 310 of self-selecting memory component 220-a.

[0061] Self-selecting memory component 220-a also includes bottom surface 315 in contact with bottom electrode 210-a. In some cases, the area of contact between bottom electrode 210-a and bottom surface 315 of self-selecting memory component 220-a may be an electrode interface. In some cases, an asymmetrical electrode interface may be present between self-selecting memory component 220-a and top electrode 205-a and bottom electrode 210-a. Bottom electrode 210-a may include length 345 in the word line direction and length 365 in the digit line direction. Length 345 and length 365 may determine the dimensions and area of the top surface and bottom surface of bottom electrode 210-a. In some cases, length 345 may be equal when measured along the top surface and bottom surface of bottom electrode 210-a in the word line direction.
That is, the cross-section of bottom electrode 210-a may be a rectangle in the word line direction and illustrate a straight profile. In some cases, length 365 may be equal when measured along the top surface and bottom surface of bottom electrode 210-a in the digit line direction. That is, the cross-section of bottom electrode 210-a may be a rectangle in the digit line direction and illustrate a straight profile.

[0062] In some cases, length 345 of bottom electrode 210-a may be equal to length 340 of self-selecting memory component 220-a in the word line direction. From the perspective of the digit line, length 365 of bottom electrode 210-a may be greater than length 360 of self-selecting memory component 220-a. Such a configuration of the bottom electrode 210-a affects the size of the interface between the bottom electrode 210-a and the self-selecting memory component 220-a. The area of the interface may be equal to the area of the bottom surface 315 of the self-selecting memory component 220-a.

[0063] In some cases, bottom electrode 210-a may illustrate a tapered profile in the word line direction, the digit line direction, or both. For example, bottom electrode 210-a may taper from a bottom surface in contact with word line 110-b to a top surface in contact with self-selecting memory component 220-a. The cross section of bottom electrode 210-a may be a trapezoid. Alternatively, bottom electrode 210-a may illustrate an inverted taper profile in the word line direction, the digit line direction, or both. That is, bottom electrode 210-a may taper from a top surface in contact with self-selecting memory component 220-a to a bottom surface in contact with word line 110-b. The cross section of bottom electrode 210-a may be an inverted trapezoid.

[0064] Bottom electrode 210-a may form different geometric shapes. For example, bottom electrode 210-a may be in the shape of a trapezoidal prism, and a cross-section of bottom electrode 210-a may include a trapezoid in the word line direction and a rectangle in the digit line direction. Alternatively, bottom electrode 210-a may be in the shape of an inverted trapezoidal prism, and a cross section of bottom electrode 210-a may include an inverted trapezoid in the word line direction and a rectangle in the digit line direction. In some cases, bottom electrode 210-a may be a frustum. A frustum, as used herein, includes a shape of or resembling the portion of a cone or pyramid with the upper portion removed, or a shape of or resembling the portion of a cone or pyramid between a first plane that intercepts the cone or pyramid below the top and a second plane at or above the base.

[0065] Top electrode 205-a may be in electronic communication with bottom electrode 210-a through self-selecting memory component 220-a. In some cases, length 335 of top electrode 205-a may be less than length 345 of bottom electrode 210-a in the word line direction. Alternatively, length 355 of top electrode 205-a may be less than length 365 of bottom electrode 210-a in the digit line direction. However, length 330 may be equal to length 345 of bottom electrode 210-a in the word line direction. In some cases, length 350 may be less than length 365 of bottom electrode 210-a in the digit line direction.

[0066] The area of contact (e.g., the interface) between top surface 310 of self-selecting memory component 220-a and top electrode 205-a may be determined by the dimensions of length 335 and length 355 of top electrode 205-a.
The area of contact (e.g., the interface) between bottom surface 315 of self-selecting memory component 220-a and bottom electrode 210-a may be determined by the dimensions of length 345 and length 365 of bottom electrode 210-a. In some cases, the area of contact between top surface 310 of self-selecting memory component 220-a and top electrode 205-a and the area of contact between bottom surface 315 of self-selecting memory component 220-a and bottom electrode 210-a may be different to achieve asymmetrical electrode interfaces between top electrode 205-a and bottom electrode 210-a. For example, the area of contact between top surface 310 of self-selecting memory component 220-a and top electrode 205-a may be less than the area of contact between bottom surface 315 of self-selecting memory component 220-a and bottom electrode 210-a in the word line and digit line directions.

[0067] Self-selecting memory component 220-a may mimic a tapered profile 390 due to the asymmetrical electrode interfaces. From the perspective of the word line and digit line, self-selecting memory component 220-a may mimic a tapered profile 390 such that the area of contact between top surface 310 of self-selecting memory component 220-a and top electrode 205-a is less than the area of contact between bottom surface 315 of self-selecting memory component 220-a and bottom electrode 210-a. The tapered profile 390 may be from bottom surface 315 to top surface 310 of self-selecting memory component 220-a.

[0068] Memory cells may be read by applying a voltage across self-selecting memory component 220-a. The voltage may be applied across self-selecting memory component 220-a in a predetermined polarity (e.g., a positive polarity). The voltage may be applied to top surface 310 or bottom surface 315 of the self-selecting memory component 220-a. In some cases, the positive polarity voltage may be applied to the surface of self-selecting memory component 220-a with a greater area in contact with top electrode 205-a or bottom electrode 210-a. For example, the positive polarity voltage may be applied to bottom surface 315 in contact with bottom electrode 210-a.

[0069] The threshold voltage of self-selecting memory component 220-a and/or resulting current through self-selecting memory component 220-a may depend on the location of a high resistivity region and low resistivity region within self-selecting memory component 220-a due to the distribution of ions within self-selecting memory component 220-a that may be affected by ion migration. The resistivity of the region may be based on the composition of self-selecting memory component 220-a. For example, self-selecting memory component 220-a may be a chalcogenide material.

[0070] FIG. 4 illustrates cross-sectional views 400-a and 400-b of a memory device 402 that support memory cells with asymmetrical electrode interfaces in accordance with examples of the present disclosure. Self-selecting memory component 220-b may have asymmetric electrode interfaces with top electrode 205-b and bottom electrode 210-b in a word line direction (e.g., first direction). For example, a length of the top electrode 205-b may be less than a length of the bottom electrode 210-b, thereby causing the top electrode interface with the self-selecting memory component 220-b to be smaller than the bottom electrode interface with the self-selecting memory component 220-b.
Top electrode 205-b may be coupled to digit line 115-c and bottom electrode 210-b may be coupled to word line 110-c.

[0071] Self-selecting memory component 220-b includes top surface 310-a and bottom surface 315-a opposite the top surface 310-a. Self-selecting memory component 220-b may also include length 415 in the word line direction and length 440 in the digit line direction. Length 415 and length 440 may determine the dimensions and area of top surface 310-a and bottom surface 315-a. In some cases, length 415 may be equal when measured along top surface 310-a and bottom surface 315-a in the word line direction. That is, the cross-section of self-selecting memory component 220-b may be a rectangle in the word line direction and illustrate a straight profile. In some cases, length 440 may be equal when measured along top surface 310-a and bottom surface 315-a in the digit line direction. That is, the cross-section of self-selecting memory component 220-b may be a rectangle in the digit line direction and illustrate a straight profile. The area of top surface 310-a and the area of bottom surface 315-a may also be equal.

[0072] In some cases, length 415 may be unequal when measured along top surface 310-a and bottom surface 315-a in the word line direction. That is, the cross-section of self-selecting memory component 220-b may be a trapezoid or an inverted trapezoid in the word line direction and illustrate a tapered profile. In some cases, length 440 may be unequal when measured along top surface 310-a and bottom surface 315-a in the digit line direction. That is, the cross-section of self-selecting memory component 220-b may be a trapezoid or an inverted trapezoid in the digit line direction and illustrate a tapered profile. The area of top surface 310-a and the area of bottom surface 315-a may also be unequal.

[0073] Self-selecting memory component 220-b includes top surface 310-a in contact with top electrode 205-b. In some cases, the area of contact between top electrode 205-b and top surface 310-a of self-selecting memory component 220-b may be an electrode interface. In some cases, an asymmetrical electrode interface may be present between self-selecting memory component 220-b and top electrode 205-b and bottom electrode 210-b. Top electrode 205-b may include length 420 in the word line direction and length 435 in the digit line direction. Length 420 and length 435 may determine the dimensions and area of the top surface and bottom surface of top electrode 205-b. In some cases, length 420 may be equal when measured along the top surface and bottom surface of top electrode 205-b in the word line direction. That is, the cross-section of top electrode 205-b may be a rectangle in the word line direction and illustrate a straight profile. In some cases, length 435 may be equal when measured along the top surface and bottom surface of top electrode 205-b in the digit line direction. That is, the cross-section of top electrode 205-b may be a rectangle in the digit line direction and illustrate a straight profile. The area of the top surface and the area of the bottom surface of top electrode 205-b may also be equal.

[0074] In some cases, length 420 may be unequal when measured along the top surface and bottom surface of top electrode 205-b in the word line direction. That is, the cross-section of top electrode 205-b may be a trapezoid or an inverted trapezoid and illustrate a curved or slanted geometric profile (e.g., a tapered profile or a stepped profile).
In some cases, length 435 may be unequal when measured along the top surface and bottom surface of top electrode 205-b in the digit line direction. That is, the cross-section of top electrode 205-b may be a trapezoid or an inverted trapezoid in the digit line direction and illustrate a tapered profile. The area of the top surface and the area of the bottom surface of top electrode 205-b may also be unequal.

[0075] In some cases, length 420 of top electrode 205-b may be less than length 415 of self-selecting memory component 220-b in the word line direction. In other examples, length 435 of top electrode 205-b may be equal to length 440 of self-selecting memory component 220-b in the digit line direction. Such a configuration of the top electrode 205-b affects the size of the interface between the top electrode 205-b and the self-selecting memory component 220-b. The area of the interface may be less than the area of the top surface 310-a of the self-selecting memory component 220-b.

[0076] From the perspective of the word line, a dielectric liner 305-a may be in contact with one or more surfaces of top electrode 205-b and self-selecting memory component 220-b. For example, dielectric liner 305-a may be in contact with side surface 405 and side surface 410 of top electrode 205-b. Dielectric liner 305-a may also be in contact with top surface 310-a of self-selecting memory component 220-b. In some examples, dielectric liner 305-a may be in contact with side surface 405, side surface 410, top surface 310-a, or a combination thereof. Dielectric liner 305-a may be a dielectric material compatible with the material of the self-selecting memory component 220-b. For example, dielectric liner 305-a may be an electrically neutral material.

[0077] Dielectric liner 305-a may be disposed along one or more surfaces of memory device 402 to create space between the dimension of top electrode 205-b and the dimension of self-selecting memory component 220-b. For example, length 430 may include length 420 of top electrode 205-b and dielectric liner 305-a in contact with side surface 405 and side surface 410. In some cases, length 430 may be greater than length 420 of top electrode 205-b. In some examples, length 430 may be equal to length 415 of self-selecting memory component 220-b from the perspective of the word line.

[0078] In some examples, length 430 may vary depending on the length of dielectric liner 305-a in contact with top surface 310-a of self-selecting memory component 220-b. For example, an amount of dielectric liner 305-a in contact with side surface 405 of top electrode 205-b and top surface 310-a of self-selecting memory component 220-b may be different than an amount of dielectric liner 305-a in contact with side surface 410 of top electrode 205-b and top surface 310-a of self-selecting memory component 220-b. That is, the amount of dielectric liner 305-a in contact with side surface 405 of top electrode 205-b and top surface 310-a of self-selecting memory component 220-b may be greater than the amount of dielectric liner 305-a in contact with side surface 410 of top electrode 205-b and top surface 310-a of self-selecting memory component 220-b.
Alternatively, the amount of dielectric liner 305-a in contact with side surface 405 of top electrode 205-b and top surface 310-a of self-selecting memory component 220-b may be less than the amount of dielectric liner 305-a in contact with side surface 410 of top electrode 205-b and top surface 310-a of self-selecting memory component 220-b. From the perspective of the digit line, dielectric liner 305-a may be absent from memory device 402.

[0079] Self-selecting memory component 220-b also includes bottom surface 315-a in contact with bottom electrode 210-b. In some cases, the area of contact between bottom electrode 210-b and bottom surface 315-a of self-selecting memory component 220-b may be an electrode interface. In some cases, an asymmetrical electrode interface may be present between self-selecting memory component 220-b and top electrode 205-b and bottom electrode 210-b. Bottom electrode 210-b may include length 425 in the word line direction and length 445 in the digit line direction. Length 425 and length 445 may determine the dimensions and area of the top surface and bottom surface of bottom electrode 210-b. In some cases, length 425 may be equal when measured along the top surface and bottom surface of bottom electrode 210-b in the word line direction. That is, the cross-section of bottom electrode 210-b may be a rectangle in the word line direction and illustrate a straight profile. In some cases, length 445 may be equal when measured along the top surface and bottom surface of bottom electrode 210-b in the digit line direction. That is, the cross-section of bottom electrode 210-b may be a rectangle in the digit line direction and illustrate a straight profile.

[0080] In some cases, length 425 of bottom electrode 210-b may be equal to length 415 of self-selecting memory component 220-b in the word line direction. From the perspective of the digit line, length 445 of bottom electrode 210-b may be greater than length 440 of self-selecting memory component 220-b. Such a configuration of the bottom electrode 210-b affects the size of the interface between the bottom electrode 210-b and the self-selecting memory component 220-b. The area of the interface may be equal to the area of the bottom surface 315-a of the self-selecting memory component 220-b.

[0081] In some cases, bottom electrode 210-b may illustrate a tapered profile in the word line direction, the digit line direction, or both. For example, bottom electrode 210-b may taper from a bottom surface in contact with word line 110-c to a top surface in contact with self-selecting memory component 220-b. The cross section of bottom electrode 210-b may be a trapezoid. Alternatively, bottom electrode 210-b may illustrate an inverted taper profile in the word line direction, the digit line direction, or both. That is, bottom electrode 210-b may taper from a top surface in contact with self-selecting memory component 220-b to a bottom surface in contact with word line 110-c. The cross section of bottom electrode 210-b may be an inverted trapezoid.

[0082] Bottom electrode 210-b may form different geometric shapes. For example, bottom electrode 210-b may be in the shape of a trapezoidal prism, and a cross-section of bottom electrode 210-b may include a trapezoid in the word line direction and a rectangle in the digit line direction.
Alternatively, bottom electrode 210-b may be in the shape of an inverted trapezoidal prism, and a cross section of bottom electrode 210-b may include an inverted trapezoid in the word line direction and a rectangle in the digit line direction. In some cases, bottom electrode 210-b may be a frustum. A frustum, as used herein, includes a shape of or resembling the portion of a cone or pyramid with the upper portion removed, or a shape of or resembling the portion of a cone or pyramid between a first plane that intercepts the cone or pyramid below the top and a second plane at or above the base.

[0083] Top electrode 205-b may be in electronic communication with bottom electrode 210-b through self-selecting memory component 220-b. In some cases, length 420 of top electrode 205-b may be less than length 425 of bottom electrode 210-b in the word line direction. Alternatively, length 435 of top electrode 205-b may be less than length 445 of bottom electrode 210-b in the digit line direction. However, length 430 may be equal to length 425 of bottom electrode 210-b in the word line direction.

[0084] The area of contact (e.g., the interface) between top surface 310-a of self-selecting memory component 220-b and top electrode 205-b may be determined by the dimensions of length 420 and length 435 of top electrode 205-b. The area of contact (e.g., the interface) between bottom surface 315-a of self-selecting memory component 220-b and bottom electrode 210-b may be determined by the dimensions of length 425 and length 445 of bottom electrode 210-b. In some cases, the area of contact between top surface 310-a of self-selecting memory component 220-b and top electrode 205-b and the area of contact between bottom surface 315-a of self-selecting memory component 220-b and bottom electrode 210-b may be different to achieve asymmetrical electrode interfaces between top electrode 205-b and bottom electrode 210-b. For example, the area of contact between top surface 310-a of self-selecting memory component 220-b and top electrode 205-b may be less than the area of contact between bottom surface 315-a of self-selecting memory component 220-b and bottom electrode 210-b in the word line direction.

[0085] Self-selecting memory component 220-b may mimic a tapered profile 450 due to the asymmetrical electrode interfaces. From the perspective of the word line, self-selecting memory component 220-b may mimic a tapered profile 450 such that the area of contact between top surface 310-a of self-selecting memory component 220-b and top electrode 205-b is less than the area of contact between bottom surface 315-a of self-selecting memory component 220-b and bottom electrode 210-b. The tapered profile 450 may be from bottom surface 315-a to top surface 310-a of self-selecting memory component 220-b.

[0086] Memory cells may be read by applying a voltage across self-selecting memory component 220-b. The voltage may be applied across self-selecting memory component 220-b in a predetermined polarity (e.g., a positive polarity). The voltage may be applied to top surface 310-a or bottom surface 315-a of the self-selecting memory component 220-b. In some cases, the positive polarity voltage may be applied to the surface of self-selecting memory component 220-b with a greater area in contact with top electrode 205-b or bottom electrode 210-b. For example, the positive polarity voltage may be applied to bottom surface 315-a in contact with bottom electrode 210-b.
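To make the single-direction asymmetry of this configuration concrete, the following back-of-envelope sketch (all dimensions hypothetical, not disclosed) compares the two interface areas when only the word line length of the top electrode is reduced by the liner:

```python
# Back-of-envelope sketch with hypothetical dimensions: in this configuration
# the liner narrows the top electrode only in the word line direction, so the
# interface asymmetry comes from that single direction while the digit line
# lengths stay equal.
def interface_area(length_wl_nm: float, length_dl_nm: float) -> float:
    return length_wl_nm * length_dl_nm

liner_nm = 4.0
bottom = interface_area(28.0, 28.0)              # bottom interface: full footprint
top = interface_area(28.0 - 2 * liner_nm, 28.0)  # word line length reduced on both sides

print(f"top/bottom interface ratio: {top / bottom:.2f}")  # < 1.0, i.e., asymmetric
```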
[0087] The threshold voltage of self-selecting memory component 220-b and/or resulting current through self-selecting memory component 220-b may depend on the location of a high resistivity region and low resistivity region within self-selecting memory component 220-b due to the distribution of ions within self-selecting memory component 220-b that may be affected by ion migration. The resistivity of the region may be based on the composition of self-selecting memory component 220-b. For example, self-selecting memory component 220-b may be a chalcogenide material.
[0088] FIG. 5 illustrates cross-sectional views 500-a and 500-b of a memory device 501 that support memory cells with asymmetrical electrode interfaces in accordance with examples of the present disclosure. Self-selecting memory component 220-c may have asymmetric electrode interfaces with the top electrode 205-c and bottom electrode 210-c in a word line direction (e.g., first direction) and digit line direction (e.g., second direction). For example, a length of the bottom electrode 210-c may be less than a length of the top electrode 205-c, thereby causing the bottom electrode interface with the self-selecting memory component 220-c to be smaller than the top electrode interface with the self-selecting memory component 220-c. Top electrode 205-c may be coupled to digit line 115-d and bottom electrode 210-c may be coupled to word line 110-d.
[0089] Self-selecting memory component 220-c includes top surface 310-b and bottom surface 315-b opposite the top surface 310-b. Self-selecting memory component 220-c may also include length 530 in the word line direction and length 585 in the digit line direction. Length 530 and length 585 may determine the dimensions and area of top surface 310-b and bottom surface 315-b. In some cases, length 530 may be equal when measured along top surface 310-b and bottom surface 315-b in the word line direction. That is, the cross-section of self-selecting memory component 220-c may be a rectangle in the word line direction and illustrate a straight profile. In some cases, length 585 may be equal when measured along top surface 310-b and bottom surface 315-b in the digit line direction. That is, the cross-section of self-selecting memory component 220-c may be a rectangle in the digit line direction and illustrate a straight profile. The area of top surface 310-b and the area of bottom surface 315-b may also be equal.
[0090] In some cases, length 530 may be unequal when measured along top surface 310-b and bottom surface 315-b in the word line direction. That is, the cross-section of self-selecting memory component 220-c may be a trapezoid or an inverted trapezoid and illustrate a curved or slanted geometric profile (e.g., a tapered profile or a stepped profile). In some cases, length 585 may be unequal when measured along top surface 310-b and bottom surface 315-b in the digit line direction. That is, the cross-section of self-selecting memory component 220-c may be a trapezoid or an inverted trapezoid in the digit line direction and illustrate a tapered profile. The area of top surface 310-b and the area of bottom surface 315-b may also be unequal.
[0091] Self-selecting memory component 220-c includes top surface 310-b in contact with top electrode 205-c.
In some cases, the area of contact between top electrode 205-c and top surface 310-b of self-selecting memory component 220-c may be an electrode interface. In some cases, an asymmetrical electrode interface may be present between self-selecting memory component 220-c and top electrode 205-c and bottom electrode 210-c. Top electrode 205-c may include length 525 in the word line direction and length 580 in the digit line direction. Length 525 and length 580 may determine the dimensions and area of the top surface and bottom surface of top electrode 205-c. In some cases, length 525 may be equal when measured along the top surface and bottom surface of top electrode 205-c in the word line direction. That is, the cross-section of top electrode 205-c may be a rectangle in the word line direction and illustrate a straight profile. In some cases, length 580 may be equal when measured along the top surface and bottom surface of top electrode 205-c in the digit line direction. That is, the cross-section of top electrode 205-c may be a rectangle in the digit line direction and illustrate a straight profile. The area of the top surface and the area of the bottom surface of top electrode 205-c may also be equal.
[0092] In some cases, length 525 may be unequal when measured along the top surface and bottom surface of top electrode 205-c in the word line direction. That is, the cross-section of top electrode 205-c may be a trapezoid or an inverted trapezoid and illustrate a curved or slanted geometric profile (e.g., a tapered profile or a stepped profile). In some cases, length 580 may be unequal when measured along the top surface and bottom surface of top electrode 205-c in the digit line direction. That is, the cross-section of top electrode 205-c may be a trapezoid or an inverted trapezoid in the digit line direction and illustrate a tapered profile. The area of the top surface and the area of the bottom surface of top electrode 205-c may also be unequal.
[0093] In some cases, length 525 of top electrode 205-c may be equal to length 530 of self-selecting memory component 220-c in the word line direction. In other examples, length 580 of top electrode 205-c may be equal to length 585 of self-selecting memory component 220-c in the digit line direction. That is, top electrode 205-c may be the same size as self-selecting memory component 220-c. Such a configuration of the top electrode 205-c affects the size of the interface between the top electrode 205-c and the self-selecting memory component 220-c. The area of the interface may be equal to the area of the top surface 310-b of the self-selecting memory component 220-c.
[0094] From the perspective of the word line, a dielectric liner 305-b may be in contact with one or more surfaces of top electrode 205-c and self-selecting memory component 220-c. For example, dielectric liner 305-b may be in contact with side surface 505 and side surface 510 of top electrode 205-c. Dielectric liner 305-b may also be in contact with side surface 515 and side surface 520 of self-selecting memory component 220-c. In some examples, dielectric liner 305-b may be in contact with side surface 505, side surface 510, side surface 515, side surface 520, or a combination thereof. Dielectric liner 305-b may be a dielectric material compatible with the material of the self-selecting memory component 220-c.
For example, dielectric liner 305-b may be an electrically neutral material.
[0095] Dielectric liner 305-b may be disposed along one or more surfaces of memory device 501 to create space between the dimension of bottom electrode 210-c and the dimension of self-selecting memory component 220-c. For example, length 535 may include length 525 of top electrode 205-c and dielectric liner 305-b in contact with side surface 505 and side surface 510. In some cases, length 535 may be greater than length 525 of top electrode 205-c. In some examples, length 535 may be greater than length 530 of self-selecting memory component 220-c from the perspective of the word line.
[0096] Further, length 508 may be measured between inner surfaces 504 and 506 of dielectric liner 305-b in the word line direction. Inner surfaces 504 and 506 of dielectric liner 305-b may be in contact with side surfaces 505 and 510 of top electrode 205-c. In addition, inner surfaces 504 and 506 of dielectric liner 305-b may also be in contact with side surfaces 515 and 520 of self-selecting memory component 220-c. In some cases, length 508 may be greater than lengths 540 and 545 of bottom electrode 210-c.
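The length bookkeeping of paragraphs [0095] and [0096] reduces to a short set of relations. As a hedged illustration, let t_{505} and t_{510} denote the liner thicknesses on side surfaces 505 and 510; the thickness symbols are introduced here for clarity and are not part of the disclosure's numbering.

```latex
% Liner thicknesses t_{505}, t_{510} are illustrative symbols, not figure numerals.
\ell_{535} = t_{505} + \ell_{525} + t_{510}   % the liner widens the stack beyond top electrode 205-c
\ell_{508} = \ell_{525}                       % inner surfaces 504, 506 abut side surfaces 505, 510
\ell_{508} > \ell_{540}, \qquad \ell_{508} > \ell_{545}   % per [0096], wider than bottom electrode 210-c
```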
[0097] In some examples, length 535 may vary depending on the length of dielectric liner 305-b in contact with side surface 505 and side surface 510 of top electrode 205-c and side surface 515 and side surface 520 of self-selecting memory component 220-c. For example, an amount of dielectric liner 305-b in contact with side surface 505 of top electrode 205-c and side surface 515 of self-selecting memory component 220-c may be different than an amount of dielectric liner 305-b in contact with side surface 510 of top electrode 205-c and side surface 520 of self-selecting memory component 220-c. That is, the amount of dielectric liner 305-b in contact with side surface 505 of top electrode 205-c and side surface 515 of self-selecting memory component 220-c may be greater than the amount of dielectric liner 305-b in contact with side surface 510 of top electrode 205-c and side surface 520 of self-selecting memory component 220-c. Alternatively, the amount of dielectric liner 305-b in contact with side surface 505 of top electrode 205-c and side surface 515 of self-selecting memory component 220-c may be less than the amount of dielectric liner 305-b in contact with side surface 510 of top electrode 205-c and side surface 520 of self-selecting memory component 220-c.
[0098] From the perspective of the digit line, dielectric liner 305-b may be in contact with side surface 560 and side surface 565 of top electrode 205-c. Additionally, dielectric liner 305-b may be in contact with side surface 550 and side surface 555 of digit line 115-d in the digit line direction. Dielectric liner 305-b may also be in contact with side surface 570 and side surface 575 of self-selecting memory component 220-c. Dielectric liner 305-b may be in contact with side surfaces 550, 555, 560, 565, 570, and 575, or a combination thereof. Length 595 may include length 580 of top electrode 205-c and dielectric liner 305-b in contact with side surfaces 550, 555, 560, 565, 570, and 575. In some cases, length 595 may be greater than length 580 of top electrode 205-c. In some examples, length 595 may be greater than length 585 of self-selecting memory component 220-c from the perspective of the digit line.
[0099] Length 595 may vary depending on the length of dielectric liner 305-b in contact with side surfaces 560 and 565 of top electrode 205-c, side surfaces 570 and 575 of self-selecting memory component 220-c, and side surfaces 550 and 555 of digit line 115-d. For example, an amount of dielectric liner 305-b in contact with side surface 560 of top electrode 205-c, side surface 550 of digit line 115-d, and side surface 570 of self-selecting memory component 220-c may be different than an amount of dielectric liner 305-b in contact with side surface 565 of top electrode 205-c, side surface 555 of digit line 115-d, and side surface 575 of self-selecting memory component 220-c. That is, the amount of dielectric liner 305-b in contact with side surface 560 of top electrode 205-c, side surface 550 of digit line 115-d, and side surface 570 of self-selecting memory component 220-c may be greater than the amount of dielectric liner 305-b in contact with side surface 565 of top electrode 205-c, side surface 555 of digit line 115-d, and side surface 575 of self-selecting memory component 220-c.
[0100] Alternatively, the amount of dielectric liner 305-b in contact with side surface 560 of top electrode 205-c, side surface 550 of digit line 115-d, and side surface 570 of self-selecting memory component 220-c may be less than the amount of dielectric liner 305-b in contact with side surface 565 of top electrode 205-c, side surface 555 of digit line 115-d, and side surface 575 of self-selecting memory component 220-c.
[0101] Self-selecting memory component 220-c also includes bottom surface 315-b in contact with bottom electrode 210-c. In some cases, the area of contact between bottom electrode 210-c and bottom surface 315-b of self-selecting memory component 220-c may be an electrode interface. In some cases, an asymmetrical electrode interface may be present between self-selecting memory component 220-c and top electrode 205-c and bottom electrode 210-c. Bottom electrode 210-c may include bottom length 545 and top length 540 in the word line direction and length 590 in the digit line direction. In some cases, bottom length 545 may be greater than top length 540. That is, the cross-section of bottom electrode 210-c may be a trapezoid in the word line direction and illustrate a tapered profile. In some cases, length 590 may be equal when measured along the top surface and bottom surface of bottom electrode 210-c in the digit line direction. That is, the cross-section of bottom electrode 210-c may be a rectangle in the digit line direction and illustrate a straight profile.
[0102] In some cases, top length 540 and bottom length 545 of bottom electrode 210-c may be less than length 530 of self-selecting memory component 220-c in the word line direction. From the perspective of the digit line, length 590 of bottom electrode 210-c may be less than length 585 of self-selecting memory component 220-c. Such a configuration of the bottom electrode 210-c affects the size of the interface between the bottom electrode 210-c and the self-selecting memory component 220-c. The area of the interface may be less than the area of the bottom surface 315-b of the self-selecting memory component 220-c.
[0103] In some cases, bottom electrode 210-c may illustrate a tapered profile in the word line direction, the digit line direction, or both.
For example, bottom electrode 210-c may taper from a bottom surface in contact with word line 110-d to a top surface in contact with self-selecting memory component 220-c. The cross section of bottom electrode 210-c may be a trapezoid. Alternatively, bottom electrode 210-c may illustrate an inverted taper profile in the word line direction, the digit line direction, or both. That is, bottom electrode 210-c may taper from a top surface in contact with self-selecting memory component 220-c to a bottom surface in contact with word line 110-d. The cross section of bottom electrode 210-c may be an inverted trapezoid.
[0104] Bottom electrode 210-c may form different geometric shapes. For example, bottom electrode 210-c may be in the shape of a trapezoidal prism, and a cross-section of bottom electrode 210-c may include a trapezoid in the word line direction and a rectangle in the digit line direction. Alternatively, bottom electrode 210-c may be in the shape of an inverted trapezoidal prism, and a cross section of bottom electrode 210-c may include an inverted trapezoid in the word line direction and a rectangle in the digit line direction. In some cases, bottom electrode 210-c may be a frustum.
[0105] Top electrode 205-c may be in electronic communication with bottom electrode 210-c through self-selecting memory component 220-c. In some cases, length 525 of top electrode 205-c may be greater than top length 540 and bottom length 545 of bottom electrode 210-c in the word line direction. Alternatively, length 580 of top electrode 205-c may be greater than length 590 of bottom electrode 210-c in the digit line direction. Length 535 may be greater than top length 540 and bottom length 545 of bottom electrode 210-c in the word line direction. In some cases, length 595 may be greater than length 590 of bottom electrode 210-c in the digit line direction.
[0106] The area of contact (e.g., the interface) between top surface 310-b of self-selecting memory component 220-c and top electrode 205-c may be determined by the dimensions of length 525 and length 580 of top electrode 205-c. The area of contact (e.g., the interface) between bottom surface 315-b of self-selecting memory component 220-c and bottom electrode 210-c may be determined by the dimensions of top length 540 and length 590 of bottom electrode 210-c. In some cases, the area of contact between top surface 310-b of self-selecting memory component 220-c and top electrode 205-c and the area of contact between bottom surface 315-b of self-selecting memory component 220-c and bottom electrode 210-c may be different to achieve asymmetrical electrode interfaces between top electrode 205-c and bottom electrode 210-c. For example, the area of contact between top surface 310-b of self-selecting memory component 220-c and top electrode 205-c may be greater than the area of contact between bottom surface 315-b of self-selecting memory component 220-c and bottom electrode 210-c in the word line and digit line directions.
[0107] Self-selecting memory component 220-c may mimic a tapered profile 502 due to the asymmetrical electrode interfaces. From the perspective of the word line and digit line, self-selecting memory component 220-c may mimic a tapered profile 502 such that the area of contact between top surface 310-b of self-selecting memory component 220-c and top electrode 205-c is greater than the area of contact between bottom surface 315-b of self-selecting memory component 220-c and bottom electrode 210-c.
The tapered profile 502 may be from top surface 310-b to bottom surface 315-b of self-selecting memory component 220-c.
[0108] Memory cells may be read by applying a voltage across self-selecting memory component 220-c. The voltage may be applied across self-selecting memory component 220-c in a predetermined polarity (e.g., a positive polarity). The voltage may be applied to top surface 310-b or bottom surface 315-b of the self-selecting memory component 220-c. In some cases, the positive polarity voltage may be applied to the surface of self-selecting memory component 220-c with a greater area in contact with top electrode 205-c or bottom electrode 210-c. For example, the positive polarity voltage may be applied to top surface 310-b in contact with top electrode 205-c.
[0109] The threshold voltage of self-selecting memory component 220-c and/or resulting current through self-selecting memory component 220-c may depend on the location of a high resistivity region and low resistivity region within self-selecting memory component 220-c due to the distribution of ions within self-selecting memory component 220-c that may be affected by ion migration. The resistivity of the region may be based on the composition of self-selecting memory component 220-c. For example, self-selecting memory component 220-c may be a chalcogenide material.
[0110] FIG. 6 illustrates cross-sectional views 600-a and 600-b of a memory device 602 that support memory cells with asymmetrical electrode interfaces in accordance with examples of the present disclosure. Self-selecting memory component 220-d may have asymmetric electrode interfaces with top electrode 205-d and bottom electrode 210-d in a word line direction (e.g., first direction). For example, a length of the bottom electrode 210-d may be less than a length of the top electrode 205-d, thereby causing the bottom electrode interface with the self-selecting memory component 220-d to be smaller than the top electrode interface with the self-selecting memory component 220-d. Top electrode 205-d may be coupled to digit line 115-e and bottom electrode 210-d may be coupled to word line 110-e.
[0111] Self-selecting memory component 220-d includes top surface 310-c and bottom surface 315-c opposite the top surface 310-c. Self-selecting memory component 220-d may also include length 630 in the word line direction and length 655 in the digit line direction. Length 630 and length 655 may determine the dimensions and area of top surface 310-c and bottom surface 315-c. In some cases, length 630 may be equal when measured along top surface 310-c and bottom surface 315-c in the word line direction. That is, the cross-section of self-selecting memory component 220-d may be a rectangle in the word line direction and illustrate a straight profile. In some cases, length 655 may be equal when measured along top surface 310-c and bottom surface 315-c in the digit line direction. That is, the cross-section of self-selecting memory component 220-d may be a rectangle in the digit line direction and illustrate a straight profile. The area of top surface 310-c and the area of bottom surface 315-c may also be equal.
[0112] In some cases, length 630 may be unequal when measured along top surface 310-c and bottom surface 315-c in the word line direction. That is, the cross-section of self-selecting memory component 220-d may be a trapezoid or an inverted trapezoid and illustrate a curved or slanted geometric profile (e.g., a tapered profile or a stepped profile).
In some cases, length 655 may be unequal when measured along top surface 310-c and bottom surface 315-c in the digit line direction. That is, the cross-section of self-selecting memory component 220-d may be a trapezoid or an inverted trapezoid in the digit line direction and illustrate a tapered profile. The area of top surface 310-c and the area of bottom surface 315-c may also be unequal.
[0113] Self-selecting memory component 220-d includes top surface 310-c in contact with top electrode 205-d. In some cases, the area of contact between top electrode 205-d and top surface 310-c of self-selecting memory component 220-d may be an electrode interface. In some cases, an asymmetrical electrode interface may be present between self-selecting memory component 220-d and top electrode 205-d and bottom electrode 210-d. Top electrode 205-d may include length 625 in the word line direction and length 650 in the digit line direction. Length 625 and length 650 may determine the dimensions and area of the top surface and bottom surface of top electrode 205-d. In some cases, length 625 may be equal when measured along the top surface and bottom surface of top electrode 205-d in the word line direction. That is, the cross-section of top electrode 205-d may be a rectangle in the word line direction and illustrate a straight profile. In some cases, length 650 may be equal when measured along the top surface and bottom surface of top electrode 205-d in the digit line direction. That is, the cross-section of top electrode 205-d may be a rectangle in the digit line direction and illustrate a straight profile. The area of the top surface and the area of the bottom surface of top electrode 205-d may also be equal.
[0114] In some cases, length 625 may be unequal when measured along the top surface and bottom surface of top electrode 205-d in the word line direction. That is, the cross-section of top electrode 205-d may be a trapezoid or an inverted trapezoid and illustrate a curved or slanted geometric profile (e.g., a tapered profile or a stepped profile). In some cases, length 650 may be unequal when measured along the top surface and bottom surface of top electrode 205-d in the digit line direction. That is, the cross-section of top electrode 205-d may be a trapezoid or an inverted trapezoid in the digit line direction and illustrate a tapered profile. The area of the top surface and the area of the bottom surface of top electrode 205-d may also be unequal.
[0115] In some cases, length 625 of top electrode 205-d may be equal to length 630 of self-selecting memory component 220-d in the word line direction. In other examples, length 650 of top electrode 205-d may be equal to length 655 of self-selecting memory component 220-d in the digit line direction. That is, top electrode 205-d may be the same size as self-selecting memory component 220-d. Such a configuration of the top electrode 205-d affects the size of the interface between the top electrode 205-d and the self-selecting memory component 220-d. The area of the interface may be equal to the area of the top surface 310-c of the self-selecting memory component 220-d.
[0116] From the perspective of the word line, a dielectric liner 305-c may be in contact with one or more surfaces of top electrode 205-d and self-selecting memory component 220-d. For example, dielectric liner 305-c may be in contact with side surface 605 and side surface 610 of top electrode 205-d.
Dielectric liner 305-c may also be in contact with side surface 615 and side surface 620 of self-selecting memory component 220-d. In some examples, dielectric liner 305-c may be in contact with side surface 605, side surface 610, side surface 615, side surface 620, or a combination thereof. Dielectric liner 305-c may be a dielectric material compatible with the material of the self-selecting memory component 220-d. For example, dielectric liner 305-c may be an electrically neutral material.
[0117] Dielectric liner 305-c may be disposed along one or more surfaces of memory device 602 to create space between the dimension of bottom electrode 210-d and the dimension of self-selecting memory component 220-d. For example, length 635 may include length 625 of top electrode 205-d and dielectric liner 305-c in contact with side surface 605 and side surface 610. In some cases, length 635 may be greater than length 625 of top electrode 205-d. In some examples, length 635 may be greater than length 630 of self-selecting memory component 220-d from the perspective of the word line.
[0118] Further, length 670 may be measured between inner surfaces 675 and 680 of dielectric liner 305-c in the word line direction. Inner surfaces 675 and 680 of dielectric liner 305-c may be in contact with side surfaces 605 and 610 of top electrode 205-d. In addition, inner surfaces 675 and 680 of dielectric liner 305-c may also be in contact with side surfaces 615 and 620 of self-selecting memory component 220-d. In some cases, length 670 may be greater than lengths 640 and 645 of bottom electrode 210-d.
[0119] In some examples, length 635 may vary depending on the length of dielectric liner 305-c in contact with side surfaces 605 and 610 of top electrode 205-d and side surfaces 615 and 620 of self-selecting memory component 220-d. For example, an amount of dielectric liner 305-c in contact with side surface 605 of top electrode 205-d and side surface 615 of self-selecting memory component 220-d may be different than an amount of dielectric liner 305-c in contact with side surface 610 of top electrode 205-d and side surface 620 of self-selecting memory component 220-d. That is, the amount of dielectric liner 305-c in contact with side surface 605 of top electrode 205-d and side surface 615 of self-selecting memory component 220-d may be greater than the amount of dielectric liner 305-c in contact with side surface 610 of top electrode 205-d and side surface 620 of self-selecting memory component 220-d. Alternatively, the amount of dielectric liner 305-c in contact with side surface 605 of top electrode 205-d and side surface 615 of self-selecting memory component 220-d may be less than the amount of dielectric liner 305-c in contact with side surface 610 of top electrode 205-d and side surface 620 of self-selecting memory component 220-d. From the perspective of the digit line, dielectric liner 305-c may be absent from memory device 602.
[0120] Self-selecting memory component 220-d also includes bottom surface 315-c in contact with bottom electrode 210-d. In some cases, the area of contact between bottom electrode 210-d and bottom surface 315-c of self-selecting memory component 220-d may be an electrode interface. In some cases, an asymmetrical electrode interface may be present between self-selecting memory component 220-d and top electrode 205-d and bottom electrode 210-d. Bottom electrode 210-d may include bottom length 645 and top length 640 in the word line direction and length 660 in the digit line direction.
In some cases, bottom length 645 may be greater than top length 640. That is, the cross-section of bottom electrode 210-d may be a trapezoid in the word line direction and illustrate a tapered profile. In some cases, length 660 may be equal when measured along the top surface and bottom surface of bottom electrode 210-d in the digit line direction. That is, the cross-section of bottom electrode 210-d may be a rectangle in the digit line direction and illustrate a straight profile.
[0121] In some cases, top length 640 and bottom length 645 of bottom electrode 210-d may be less than length 630 of self-selecting memory component 220-d in the word line direction. From the perspective of the digit line, length 660 of bottom electrode 210-d may be greater than length 655 of self-selecting memory component 220-d. Such a configuration of the bottom electrode 210-d affects the size of the interface between the bottom electrode 210-d and the self-selecting memory component 220-d. The area of the interface may be less than the area of the bottom surface 315-c of the self-selecting memory component 220-d.
[0122] In some cases, bottom electrode 210-d may illustrate a tapered profile in the word line direction, the digit line direction, or both. For example, bottom electrode 210-d may taper from a bottom surface in contact with word line 110-e to a top surface in contact with self-selecting memory component 220-d. The cross section of bottom electrode 210-d may be a trapezoid. Alternatively, bottom electrode 210-d may illustrate an inverted taper profile in the word line direction, the digit line direction, or both. That is, bottom electrode 210-d may taper from a top surface in contact with self-selecting memory component 220-d to a bottom surface in contact with word line 110-e. The cross section of bottom electrode 210-d may be an inverted trapezoid.
[0123] Bottom electrode 210-d may form different geometric shapes. For example, bottom electrode 210-d may be in the shape of a trapezoidal prism, and a cross-section of bottom electrode 210-d may include a trapezoid in the word line direction and a rectangle in the digit line direction. Alternatively, bottom electrode 210-d may be in the shape of an inverted trapezoidal prism, and a cross section of bottom electrode 210-d may include an inverted trapezoid in the word line direction and a rectangle in the digit line direction. In some cases, bottom electrode 210-d may be a frustum.
[0124] Top electrode 205-d may be in electronic communication with bottom electrode 210-d through self-selecting memory component 220-d. In some cases, length 625 of top electrode 205-d may be greater than top length 640 and bottom length 645 of bottom electrode 210-d in the word line direction. Alternatively, length 650 of top electrode 205-d may be less than length 660 of bottom electrode 210-d in the digit line direction. Length 635 may be greater than top length 640 and bottom length 645 of bottom electrode 210-d in the word line direction.
[0125] The area of contact (e.g., the interface) between top surface 310-c of self-selecting memory component 220-d and top electrode 205-d may be determined by the dimensions of length 625 and length 650 of top electrode 205-d. The area of contact (e.g., the interface) between bottom surface 315-c of self-selecting memory component 220-d and bottom electrode 210-d may be determined by the dimensions of top length 640 and length 660 of bottom electrode 210-d.
In some cases, the area of contact between top surface 310-c of self-selecting memory component 220-d and top electrode 205-d and the area of contact between bottom surface 315-c of self-selecting memory component 220-d and bottom electrode 210-d may be different to achieve asymmetrical electrode interfaces between top electrode 205-d and bottom electrode 210-d. For example, the area of contact between top surface 310-c of self-selecting memory component 220-d and top electrode 205-d may be greater than the area of contact between bottom surface 315-c of self-selecting memory component 220-d and bottom electrode 210-d in the word line direction.
[0126] Self-selecting memory component 220-d may mimic a tapered profile 665 due to the asymmetrical electrode interfaces. From the perspective of the word line, self-selecting memory component 220-d may mimic a tapered profile 665 such that the area of contact between top surface 310-c of self-selecting memory component 220-d and top electrode 205-d is greater than the area of contact between bottom surface 315-c of self-selecting memory component 220-d and bottom electrode 210-d. The tapered profile 665 may be from top surface 310-c to bottom surface 315-c of self-selecting memory component 220-d.
[0127] Memory cells may be read by applying a voltage across self-selecting memory component 220-d. The voltage may be applied across self-selecting memory component 220-d in a predetermined polarity (e.g., a positive polarity). The voltage may be applied to top surface 310-c or bottom surface 315-c of the self-selecting memory component 220-d. In some cases, the positive polarity voltage may be applied to the surface of self-selecting memory component 220-d with a greater area in contact with top electrode 205-d or bottom electrode 210-d. For example, the positive polarity voltage may be applied to top surface 310-c in contact with top electrode 205-d.
[0128] The threshold voltage of self-selecting memory component 220-d and/or resulting current through self-selecting memory component 220-d may depend on the location of a high resistivity region and low resistivity region within self-selecting memory component 220-d due to the distribution of ions within self-selecting memory component 220-d that may be affected by ion migration. The resistivity of the region may be based on the composition of self-selecting memory component 220-d. For example, self-selecting memory component 220-d may be a chalcogenide material.
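The read-polarity convention repeated in paragraphs [0086], [0108], and [0127] (apply the positive-polarity voltage to whichever surface has the larger electrode interface) can be expressed as a small decision rule. The sketch below is illustrative only; the function name and the example dimensions are hypothetical and do not come from the disclosure.

```python
def positive_polarity_surface(top_lengths, bottom_lengths):
    """Pick the read surface per the rule in [0086], [0108], and [0127]:
    the positive-polarity voltage goes to the surface whose electrode
    interface has the greater area. Lengths are (word line, digit line) pairs."""
    top_area = top_lengths[0] * top_lengths[1]           # interface with the top electrode
    bottom_area = bottom_lengths[0] * bottom_lengths[1]  # interface with the bottom electrode
    return "top" if top_area > bottom_area else "bottom"

# Hypothetical dimensions (arbitrary units): a FIG. 4-like cell, whose bottom
# interface is larger, reads with positive polarity at the bottom surface;
# a FIG. 6-like cell, whose top interface is larger, reads at the top surface.
print(positive_polarity_surface(top_lengths=(18, 20), bottom_lengths=(24, 26)))  # -> bottom
print(positive_polarity_surface(top_lengths=(24, 26), bottom_lengths=(14, 26)))  # -> top
```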
[0129] FIG. 7 illustrates an example process flow for forming a self-selecting memory device that supports memory cells with asymmetrical electrode interfaces, which may include steps 700-a, 700-b, and 700-c, in accordance with examples of the present disclosure. The resulting memory device may be an example of the memory cells and architecture that include memory devices described with reference to FIGs. 1-6. In some cases, processing steps 700-a, 700-b, and 700-c may occur in the word line direction, the digit line direction, or both.
[0130] Processing step 700-a includes formation of a stack including top electrode 205-e, bottom electrode 210-e, and self-selecting memory component 220-e. Various techniques may be used to form materials or components shown in processing step 700-a. These may include, for example, chemical vapor deposition (CVD), metal-organic chemical vapor deposition (MOCVD), physical vapor deposition (PVD), sputter deposition, atomic layer deposition (ALD), or molecular beam epitaxy (MBE), among other thin film growth techniques.
[0131] At processing step 700-a, self-selecting memory component 220-e may be deposited above bottom electrode 210-e. Top electrode 205-e may then be deposited above self-selecting memory component 220-e such that self-selecting memory component 220-e is located between bottom electrode 210-e and top electrode 205-e. Hard mask material 705 may then be deposited on top surface 710 of top electrode 205-e. Self-selecting memory component 220-e may include a chalcogenide material.
[0132] In some examples, additional interface materials may be deposited between top electrode 205-e and self-selecting memory component 220-e, and between self-selecting memory component 220-e and bottom electrode 210-e. At processing step 700-a, top electrode 205-e may be etched to length 715 (e.g., first length) in the word line direction (e.g., first direction). By etching the top electrode 205-e, a size of an interface between the top electrode 205-e and the self-selecting memory component 220-e may be determined. In some cases, top electrode 205-e may be partially etched in the word line direction through top electrode 205-e. That is, the etching may stop before a top surface of self-selecting memory component 220-e.
[0133] At processing step 700-b, a deposition of dielectric liner 305-d may occur after the top electrode 205-e is deposited and etched. Dielectric liner 305-d may serve as a spacer for subsequent etch steps. In some examples, dielectric liner 305-d may be in contact with side surface 730 and side surface 720 of top electrode 205-e. In some cases, the dielectric liner may also be in contact with one or more side surfaces of hard mask material 705 and a top surface of self-selecting memory component 220-e. Length 725 (e.g., second length in the first direction) may include dielectric liner 305-d in contact with side surfaces 730 and 720 and length 715 of top electrode 205-e. In some cases, length 725 may be greater than length 715 (e.g., first length) of top electrode 205-e.
[0134] Dielectric liner 305-d may be deposited using in-situ or ex-situ techniques. For example, processing steps 700-a, 700-b, and 700-c may occur in one processing chamber (e.g., first chamber). Alternatively, processing steps 700-a, 700-b, and 700-c may occur in two or more processing chambers (e.g., first, second chambers, etc.). Dielectric liner 305-d may be deposited using in-situ techniques. For example, top electrode 205-e may first be etched to length 715 (e.g., processing step 700-a) in a processing chamber. The etching process of top electrode 205-e may stop, and then dielectric liner 305-d may be deposited (e.g., processing step 700-b) in the same processing chamber. For example, dielectric liner 305-d may be deposited inside a first chamber. After dielectric liner 305-d is deposited, the etching process may resume in the same processing chamber.
[0135] Alternatively, dielectric liner 305-d may be deposited using ex-situ techniques. For example, top electrode 205-e may first be etched to length 715 in the word line direction (e.g., processing step 700-a) in a first processing chamber. For example, the stack including top electrode 205-e, bottom electrode 210-e, and self-selecting memory component 220-e may be etched to form a line inside the first processing chamber.
The etching process of top electrode 205-e may stop, and the stack (including the etched top electrode 205-e) may be transferred to a second processing chamber. The second processing chamber may be different than the first processing chamber. Dielectric liner 305-d may then be deposited (e.g., processing step 700-b) in the second processing chamber. After dielectric liner 305-d is deposited, the stack, including dielectric liner 305-d deposited on top electrode 205-e, may be transported back to the first processing chamber to complete the etching process.
[0136] At processing step 700-c, the stack including top electrode 205-e, bottom electrode 210-e, and self-selecting memory component 220-e may be etched through dielectric liner 305-d, self-selecting memory component 220-e, bottom electrode 210-e, and word line 110-f to form a line. The line may include top electrode 205-e, bottom electrode 210-e, and self-selecting memory component 220-e. Processing step 700-c may also include the removal of dielectric liner 305-d from a top surface of hard mask material 705.
[0137] The etch through dielectric liner 305-d, self-selecting memory component 220-e, bottom electrode 210-e, and word line 110-f to form a line may result in a memory device with asymmetrical electrode interfaces (e.g., memory devices 302 and 402 described with reference to FIGs. 3 and 4). For example, the area of contact (e.g., interface) between top electrode 205-e and self-selecting memory component 220-e may be less than the area of contact (e.g., interface) between bottom electrode 210-e and self-selecting memory component 220-e. That is, the interface between top electrode 205-e and self-selecting memory component 220-e may be narrower than the interface between bottom electrode 210-e and self-selecting memory component 220-e.
[0138] In some examples, etching through dielectric liner 305-d, self-selecting memory component 220-e, bottom electrode 210-e, and word line 110-f may form a line or a pillar comprising dielectric liner 305-d, self-selecting memory component 220-e, bottom electrode 210-e, and top electrode 205-e. The line or pillar may have a length in the digit line direction (not shown) that is greater than length 715 (e.g., first length) of top electrode 205-e.
[0139] The material removed at processing step 700-c may be removed using a number of techniques, which may include, for example, chemical etching (also referred to as "wet etching"), plasma etching (also referred to as "dry etching"), or chemical-mechanical planarization. One or more etching steps may be employed. Those skilled in the art will recognize that, in some examples, steps of a process described with a single exposure and/or etching step may be performed with separate etching steps and vice versa.
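The in-situ and ex-situ options of paragraphs [0134] and [0135] differ only in where the liner deposition runs relative to the two etch steps. The sketch below encodes the two sequences to make that difference explicit; the step descriptions and chamber labels are illustrative placeholders, not actual tool recipes from the disclosure.

```python
# Each step is (operation, chamber); chamber labels are hypothetical.
IN_SITU = [
    ("etch top electrode 205-e to length 715", "chamber_1"),
    ("deposit dielectric liner 305-d",         "chamber_1"),  # same chamber: in-situ
    ("etch stack through liner to form line",  "chamber_1"),
]
EX_SITU = [
    ("etch top electrode 205-e to length 715", "chamber_1"),
    ("deposit dielectric liner 305-d",         "chamber_2"),  # different chamber: ex-situ
    ("etch stack through liner to form line",  "chamber_1"),  # transferred back to finish
]

def transfers(flow):
    """Count the chamber-to-chamber transfers a flow implies."""
    return sum(1 for a, b in zip(flow, flow[1:]) if a[1] != b[1])

print(transfers(IN_SITU))  # 0: all three steps stay in one chamber
print(transfers(EX_SITU))  # 2: out for the liner deposition, back for the final etch
```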
[0140] FIG. 8 illustrates an example process flow for forming a self-selecting memory device that supports memory cells with asymmetrical electrode interfaces, which may include steps 800-a, 800-b, and 800-c, in accordance with examples of the present disclosure. The resulting memory device may be an example of the memory cells and architecture that include memory devices described with reference to FIGs. 1-6. In some cases, processing steps 800-a, 800-b, and 800-c may occur in the word line direction, the digit line direction, or both.
[0141] Processing step 800-a includes formation of a stack including top electrode 205-f, bottom electrode 210-f, and self-selecting memory component 220-f. Various techniques may be used to form materials or components shown in processing step 800-a. These may include, for example, chemical vapor deposition (CVD), metal-organic chemical vapor deposition (MOCVD), physical vapor deposition (PVD), sputter deposition, atomic layer deposition (ALD), or molecular beam epitaxy (MBE), among other thin film growth techniques.
[0142] At processing step 800-a, self-selecting memory component 220-f may be deposited on bottom electrode 210-f. Top electrode 205-f may then be deposited on self-selecting memory component 220-f such that self-selecting memory component 220-f is located between bottom electrode 210-f and top electrode 205-f. Hard mask material 705-a may then be deposited on top surface 810 of top electrode 205-f. Self-selecting memory component 220-f may include a chalcogenide material.
[0143] At processing step 800-a, top electrode 205-f may be etched to length 805 (e.g., first length) in the word line direction. In some cases, the self-selecting memory component 220-f may be etched along with the top electrode 205-f to length 805 in the word line direction. In some cases, top electrode 205-f and self-selecting memory component 220-f may be partially etched in the word line direction through top electrode 205-f and self-selecting memory component 220-f. That is, self-selecting memory component 220-f may be etched from top surface 825 to bottom surface 820.
[0144] At processing step 800-b, a deposition of dielectric liner 305-e may occur after the top electrode 205-f is deposited and etched. Dielectric liner 305-e may serve as a spacer for subsequent etch steps. In some examples, dielectric liner 305-e may be in contact with one or more side surfaces of top electrode 205-f. In some cases, the dielectric liner may also be in contact with one or more side surfaces of hard mask material 705-a, side surfaces of self-selecting memory component 220-f, and a top surface of bottom electrode 210-f.
[0145] Dielectric liner 305-e may be deposited using in-situ or ex-situ techniques. For example, processing steps 800-a, 800-b, and 800-c may occur in one processing chamber (e.g., first chamber). Alternatively, processing steps 800-a, 800-b, and 800-c may occur in separate processing chambers (e.g., first, second chambers, etc.). Dielectric liner 305-e may be deposited using in-situ techniques. For example, top electrode 205-f and self-selecting memory component 220-f may first be etched to length 805 (e.g., processing step 800-a) in a processing chamber. The etching process of top electrode 205-f and self-selecting memory component 220-f may stop, and then dielectric liner 305-e may be deposited (e.g., processing step 800-b) in the same processing chamber. For example, dielectric liner 305-e may be deposited inside a first chamber. After dielectric liner 305-e is deposited, the etching process may resume in the same processing chamber.
[0146] Alternatively, dielectric liner 305-e may be deposited using ex-situ techniques. For example, top electrode 205-f and self-selecting memory component 220-f may first be etched to length 805 (e.g., processing step 800-a) in a first processing chamber. For example, the stack including top electrode 205-f, bottom electrode 210-f, and self-selecting memory component 220-f may be etched to form a pillar inside the first processing chamber. The etching process may stop, and the pillar may be transferred to a second processing chamber.
The second processing chamber may be different than the first processing chamber. Dielectric liner 305-e may then be deposited (e.g., processing step 800-b) in the second processing chamber. After dielectric liner 305-e is deposited, the stack, including the dielectric liner 305-e, may be transported back to the first processing chamber to complete the etching process.
[0147] At processing step 800-c, the stack including top electrode 205-f, bottom electrode 210-f, and self-selecting memory component 220-f may be etched through dielectric liner 305-e, bottom electrode 210-f, and word line 110-g to form a line or pillar. The line or pillar may include top electrode 205-f, bottom electrode 210-f, and self-selecting memory component 220-f. Processing step 800-c may also include the removal of dielectric liner 305-e from a top surface of hard mask material 705-a.
[0148] The etch through dielectric liner 305-e, bottom electrode 210-f, and word line 110-g to form a line or pillar may result in a memory device with asymmetrical electrode interfaces (e.g., memory devices 501 and 602 described with reference to FIGs. 5 and 6, respectively). For example, the area of contact between top electrode 205-f and self-selecting memory component 220-f may be greater than the area of contact between bottom electrode 210-f and self-selecting memory component 220-f. That is, the interface between bottom electrode 210-f and self-selecting memory component 220-f may be narrower than the interface between top electrode 205-f and self-selecting memory component 220-f.
[0149] As illustrated in processing step 800-c, the line or pillar may include the dielectric liner in contact with side surfaces 830 and 835 of top electrode 205-f and side surfaces 840 and 845 of self-selecting memory component 220-f. Length 850 may include dielectric liner 305-e in contact with side surfaces 830 and 835 and length 805 of top electrode 205-f. In some cases, length 850 may be greater than length 805 of top electrode 205-f.
[0150] At processing step 800-c, a taper may be formed from bottom surface 860 to top surface 855 of bottom electrode 210-f. For example, top length 865 may be less than bottom length 870 of bottom electrode 210-f. The cross section of bottom electrode 210-f may be a trapezoid. Alternatively, bottom electrode 210-f may illustrate an inverted taper profile in the word line direction, the digit line direction, or both. That is, bottom electrode 210-f may taper from top surface 855 to bottom surface 860. The cross section of bottom electrode 210-f may be an inverted trapezoid. In some cases, bottom electrode 210-f may be formed by applying isotropic etch steps.
[0151] The material removed at processing step 800-c may be removed using a number of techniques, which may include, for example, chemical etching (also referred to as "wet etching"), plasma etching (also referred to as "dry etching"), or chemical-mechanical planarization. One or more etching steps may be employed. Those skilled in the art will recognize that, in some examples, steps of a process described with a single exposure and/or etching step may be performed with separate etching steps and vice versa.
[0152] FIG. 9 shows an example block diagram 900 of a memory array 100-a that supports memory cells with asymmetrical electrode interfaces in accordance with examples of the present disclosure. Memory array 100-a may be referred to as an electronic memory apparatus, and may be an example of a component of a memory controller 140 as described with reference to FIG. 1.
[0153] Memory array 100-a may include one or more memory cells 105-b, memory controller 140-a, a word line signal 920 communicated using the word line (not shown), sense component 125-a, digit line signal 925 communicated using a digit line (not shown), and latch 915. These components may be in electronic communication with each other and may perform one or more of the functions described herein. In some cases, memory controller 140-a may include biasing component 905 and timing component 910. Memory controller 140-a may be in electronic communication with a word line, a digit line, and sense component 125-a, which may be examples of word line 110, digit line 115, and sense component 125, described with reference to FIGs. 1 and 2. In some cases, sense component 125-a and latch 915 may be components of memory controller 140-a.
[0154] Memory cell 105-b may include a memory cell with asymmetrical electrode interfaces. For example, the self-selecting memory component may be an example of a self-selecting memory component 220 described with reference to FIGs. 2-8.
[0155] In some examples, the digit line is in electronic communication with sense component 125-a and memory cell 105-b. A logic state may be written to memory cell 105-b. The word line may be in electronic communication with memory controller 140-a and memory cell 105-b. Sense component 125-a may be in electronic communication with memory controller 140-a, a digit line, and latch 915. These components may also be in electronic communication with other components, both inside and outside of memory array 100-a, in addition to components not listed above, via other components, connections, or busses.
[0156] Memory controller 140-a may be configured to send a word line signal 920 or digit line signal 925 by applying voltages to those various nodes. For example, biasing component 905 may be configured to apply a voltage to operate memory cell 105-b to read or write memory cell 105-b as described above. In some cases, memory controller 140-a may include a row decoder, column decoder, or both, as described with reference to FIG. 1. This may enable memory controller 140-a to access one or more memory cells 105-b. Biasing component 905 may provide a voltage for the operation of sense component 125-a.
[0157] In some cases, memory controller 140-a may perform its operations using timing component 910. For example, timing component 910 may control the timing of the various word line selections or plate biasing, including timing for switching and voltage application to perform the memory functions, such as reading and writing, discussed herein. In some cases, timing component 910 may control the operations of biasing component 905.
[0158] Upon determining a logic state of memory cell 105-b, sense component 125-a may store the output in latch 915, where it may be used in accordance with the operations of an electronic device of which memory array 100-a is a part. Sense component 125-a may include a sense amplifier in electronic communication with the latch and memory cell 105-b.
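The read path of paragraphs [0156] through [0158] sequences as follows: the biasing component applies voltages under the timing component's control, the sense component resolves the cell's state, and the result lands in latch 915. The toy model below mirrors only that sequencing; every class and method name is invented for illustration, and the threshold comparison is a stand-in for whatever sensing scheme the sense amplifier actually uses.

```python
class MemoryControllerModel:
    """Illustrative sequencing model of the read flow in [0156]-[0158]."""

    def __init__(self, sense_reference):
        self.sense_reference = sense_reference  # stand-in for the sense amp reference
        self.latch = None                       # models latch 915

    def read(self, cell_current):
        # The biasing component applies the word/digit line voltages and the
        # timing component orders the steps (both abstracted away here). The
        # sense component then compares the resulting cell current against a
        # reference, and the output is stored in the latch.
        self.latch = 1 if cell_current > self.sense_reference else 0
        return self.latch

controller = MemoryControllerModel(sense_reference=1.0)
print(controller.read(cell_current=1.7))  # -> 1
print(controller.read(cell_current=0.3))  # -> 0
```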
[0159] Memory controller 140-a, or at least some of its various sub-components, may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions of the memory controller 140-a and/or at least some of its various sub-components may be executed by a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described in the present disclosure.
[0160] The memory controller 140-a and/or at least some of its various sub-components may be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations by one or more physical devices. In some examples, memory controller 140-a and/or at least some of its various sub-components may be a separate and distinct component in accordance with various examples of the present disclosure. In other examples, memory controller 140-a and/or at least some of its various sub-components may be combined with one or more other hardware components, including but not limited to a receiver, a transmitter, a transceiver, one or more other components described in the present disclosure, or a combination thereof in accordance with various examples of the present disclosure.
[0161] FIG. 10 shows an example diagram of a system 1000 including a device 1005 that supports memory cells with asymmetrical electrode interfaces in accordance with various examples of the present disclosure. Device 1005 may be an example of or include the components of memory controller 140 as described above with reference to FIG. 1. Device 1005 may include components for bi-directional voice and data communications, including components for transmitting and receiving communications, including memory array 100-b that includes memory controller 140-b and memory cells 105-c, basic input/output system (BIOS) component 1015, processor 1010, I/O controller 1025, and peripheral components 1020. These components may be in electronic communication via one or more busses (e.g., bus 1030).
[0162] Memory cells 105-c may store information (i.e., in the form of a logical state) as described herein. Memory cells 105-c may be self-selecting memory cells with a self-selecting memory component as described with reference to FIGs. 2-8, for example.
[0163] BIOS component 1015 may be a software component that includes BIOS operated as firmware, which may initialize and run various hardware components. BIOS component 1015 may also manage data flow between a processor and various other components, for example, peripheral components, input/output control component, etc. BIOS component 1015 may include a program or software stored in read-only memory (ROM), flash memory, or any other non-volatile memory.
[0164] Processor 1010 may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a central processing unit (CPU), a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). In some cases, processor 1010 may be configured to operate a memory array using a memory controller. In other cases, a memory controller may be integrated into processor 1010.
Processor 1010 may be configured to execute computer-readable instructions stored in a memory to perform various functions (e.g., functions or tasks supporting programming enhancement in self-selecting memory).
[0165] I/O controller 1025 may manage input and output signals for device 1005. I/O controller 1025 may also manage peripherals not integrated into device 1005. In some cases, I/O controller 1025 may represent a physical connection or port to an external peripheral. In some cases, I/O controller 1025 may utilize an operating system such as iOS®, ANDROID®, MS-DOS®, MS-WINDOWS®, OS/2®, UNIX®, LINUX®, or another known operating system.
[0166] Peripheral components 1020 may include any input or output device, or an interface for such devices. Examples may include disk controllers, sound controller, graphics controller, Ethernet controller, modem, universal serial bus (USB) controller, a serial or parallel port, or peripheral card slots, such as peripheral component interconnect (PCI) or accelerated graphics port (AGP) slots.
[0167] Input 1035 may represent a device or signal external to device 1005 that provides input to device 1005 or its components. This may include a user interface or an interface with or between other devices. In some cases, input 1035 may be managed by I/O controller 1025, and may interact with device 1005 via a peripheral component 1020.
[0168] Output 1040 may also represent a device or signal external to device 1005 configured to receive output from device 1005 or any of its components. Examples of output 1040 may include a display, audio speakers, a printing device, another processor or printed circuit board, etc. In some cases, output 1040 may be a peripheral element that interfaces with device 1005 via peripheral component(s) 1020. In some cases, output 1040 may be managed by I/O controller 1025.
[0169] The components of device 1005 may include circuitry designed to carry out their functions. This may include various circuit elements, for example, conductive lines, transistors, capacitors, inductors, resistors, amplifiers, or other active or inactive elements, configured to carry out the functions described herein. Device 1005 may be a computer, a server, a laptop computer, a notebook computer, a tablet computer, a mobile phone, a wearable electronic device, a personal electronic device, or the like. Or device 1005 may be a portion or component of such a device.
[0170] FIG. 11 shows a flowchart illustrating a method 1100 to form a memory device that supports memory cells with asymmetrical electrode interfaces in accordance with examples of the present disclosure.
[0171] At block 1105 the method may include forming a stack comprising a bottom electrode, a top electrode, and a self-selecting memory component between the bottom electrode and the top electrode.
[0172] At block 1110 the method may include etching the top electrode to a first length in a first direction based at least in part on forming the stack.
[0173] At block 1115 the method may include depositing a dielectric liner in contact with two side surfaces of the top electrode based at least in part on etching the top electrode. In some examples, the dielectric liner may be deposited using an in-situ technique or an ex-situ technique.
[0174] At block 1120 the method may include etching the stack to form a pillar comprising the bottom electrode, the top electrode, the self-selecting memory component, and the dielectric liner, the pillar having a second length in the first direction greater than the first length of the top electrode.
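Because method 1100 is a fixed sequence of fabrication operations, it can be captured as an ordered list for quick reference. The sketch below simply restates blocks 1105 through 1120; the block numbers come from the flowchart, while the data structure and wording are illustrative.

```python
# Method 1100 (blocks 1105-1120), restated as an ordered sequence.
METHOD_1100 = [
    (1105, "form a stack: bottom electrode, self-selecting memory component, top electrode"),
    (1110, "etch the top electrode to a first length in a first direction"),
    (1115, "deposit a dielectric liner on two side surfaces of the top electrode"),
    (1120, "etch the stack into a pillar whose second length in the first direction "
           "exceeds the first length of the top electrode"),
]

for block, step in METHOD_1100:
    print(f"block {block}: {step}")
```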
In some examples, the dielectric liner may be deposited using an in-situ technique or an ex-situ technique.[0174] At block 1120 the method may include etching the stack to form a pillar comprising the bottom electrode, the top electrode, the self-selecting memory component, and the dielectric liner, the pillar having a second length in the first direction greater than the first length of the top electrode. [0175] In some cases, an apparatus is described for performing a method, such as method 1100. For example, the apparatus may include means for forming a stack comprising a bottom electrode, a top electrode, and a self-selecting memory component between the bottom electrode and the top electrode. The apparatus may also include means for etching the top electrode to a first length in a first direction based at least in part on forming the stack. The apparatus may also include means for depositing a dielectric liner in contact with two side surfaces of the top electrode based at least in part on etching the top electrode. The apparatus may also include means for etching the stack to form a line comprising the bottom electrode, the top electrode, the self-selecting memory component, and the dielectric liner, the line having a second length in the first direction greater than the first length of the top electrode.[0176] In some examples, the apparatus may include means for depositing a hard mask material on a top surface of the top electrode where a portion of the hard mask material may be removed when the line is formed. In some examples, the dielectric liner may be deposited using an in-situ technique or an ex-situ technique. In some examples, the apparatus may include means for etching the stack to form the line inside a first chamber, where depositing the dielectric liner occurs inside the first chamber.[0177] In some examples, the apparatus may include means for etching the stack to form the line inside a first chamber. The apparatus may also include means for transferring the stack from the first chamber to a second chamber, wherein depositing the dielectric liner occurs inside the second chamber. In some examples, the apparatus may also include means for etching the stack to form a pillar comprising the bottom electrode, the top electrode, the self-selecting memory component, and the dielectric liner, the pillar having a second length in a second direction greater than the first length of the top electrode.[0178] FIG. 12 shows a flowchart illustrating a method 1200 to form a memory device that supports memory cells with asymmetrical electrode interfaces in accordance with examples of the present disclosure.[0179] At block 1205 the method may include forming a stack comprising a bottom electrode, a top electrode, and a self-selecting memory component between the bottom electrode and the top electrode. [0180] At block 1210 the method may include etching the top electrode based at least in part on forming the stack.[0181] At block 1215 the method may include etching from a top surface to a bottom surface of the self-selecting memory component based at least in part on etching the top electrode.[0182] At block 1220 the method may include depositing a dielectric liner in contact with two side surfaces of the top electrode and two side surfaces of the self-selecting memory component based at least in part on etching from the top surface to the bottom surface of the self-selecting memory component. 
In some examples, the dielectric liner may be deposited using an in-situ technique or an ex-situ technique.[0183] At block 1225 the method may include etching the stack to form a pillar comprising the bottom electrode, the top electrode, the self-selecting memory component, and the dielectric liner.[0184] At block 1230 the method may include forming a taper from a bottom surface to a top surface opposite the bottom surface of the bottom electrode.[0185] While the examples described earlier focus on tapered profiles that may monotonically increase or decrease in a given direction, this is not required. For example, the desired profile/shape of a self-selecting memory component may be an hourglass shape, a barrel shape, or any other shape.[0186] In some cases, the self-selecting memory component may have a barrel-like tapered profile. For example, when a memory cell is programmed using a given polarity, anions may drift towards one surface (e.g., a top or bottom surface) of a self-selecting memory component and cations may drift towards the opposite surface (e.g., a bottom or top surface) of the self-selecting memory component. As compared with symmetrically shaped memory cells, a self-selecting memory component that includes or mimics a barrel-like tapered profile, or another asymmetric profile in which the widths of the top and bottom surfaces of the self-selecting memory component are narrower than the width of a middle portion of the self-selecting memory component, may increase the concentrations of the cations and/or anions at the respective surfaces, for example by providing narrow contact areas at each electrode and a larger, bulk ion reservoir at the middle of the self-selecting memory component. [0187] In some cases, an apparatus is described for performing a method, such as method 1200. For example, the apparatus may include means for forming a stack comprising a bottom electrode, a top electrode, and a self-selecting memory component between the bottom electrode and the top electrode. The apparatus may also include means for etching the top electrode based at least in part on forming the stack. The apparatus may also include means for etching from a top surface to a bottom surface of the self-selecting memory component based at least in part on etching the top electrode. The apparatus may also include means for depositing a dielectric liner in contact with two side surfaces of the top electrode and two side surfaces of the self-selecting memory component based at least in part on etching from the top surface to the bottom surface of the self-selecting memory component. The apparatus may also include means for etching the stack to form a pillar comprising the bottom electrode, the top electrode, the self-selecting memory component, and the dielectric liner. The apparatus may also include means for forming a taper from a bottom surface to a top surface opposite the bottom surface of the bottom electrode.[0188] In some examples, the dielectric liner may be deposited using an in-situ technique or an ex-situ technique. In some examples, the apparatus may also include means for etching the stack to form the pillar inside a first chamber, wherein depositing the dielectric liner occurs inside the first chamber.[0189] In some examples, the apparatus may also include means for etching the stack to form the pillar inside a first chamber. The apparatus may also include means for transferring the stack from the first chamber to a second chamber, wherein depositing the dielectric liner occurs inside the second chamber.
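For readers who prefer to see the ordering constraints of method 1200 in executable form, the following C sketch encodes the blocks of FIG. 12 as an ordered recipe table, shown in its ex-situ variant where the liner is deposited in a second chamber after a transfer step. The step names, chamber identifiers, and the idea of carrying block numbers as data are an illustrative encoding introduced here, not part of the disclosure.

/* Illustrative only: the FIG. 12 flow (method 1200) encoded as an ordered
 * recipe table. Step names and chamber assignments are hypothetical. */
#include <stdio.h>

enum chamber { CHAMBER_1 = 1, CHAMBER_2 = 2 };

struct process_step {
    int          block;  /* block number from FIG. 12 (0 = transfer step) */
    const char  *op;     /* operation performed at this step              */
    enum chamber where;  /* in the in-situ variant, all steps share CHAMBER_1 */
};

int main(void)
{
    /* Ex-situ variant of paragraph [0189]: liner deposition happens in a
     * second chamber after a transfer step. */
    static const struct process_step recipe[] = {
        { 1205, "form stack: bottom electrode / SSM component / top electrode", CHAMBER_1 },
        { 1210, "etch top electrode",                                           CHAMBER_1 },
        { 1215, "etch SSM component from top surface to bottom surface",        CHAMBER_1 },
        {    0, "transfer stack from first chamber to second chamber",          CHAMBER_2 },
        { 1220, "deposit dielectric liner on exposed side surfaces",            CHAMBER_2 },
        { 1225, "etch stack to form pillar",                                    CHAMBER_2 },
        { 1230, "form taper on bottom electrode",                               CHAMBER_2 },
    };

    for (size_t i = 0; i < sizeof recipe / sizeof recipe[0]; i++)
        printf("step %zu (block %4d, chamber %d): %s\n",
               i + 1, recipe[i].block, (int)recipe[i].where, recipe[i].op);
    return 0;
}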
[0190] FIG. 13 illustrates example memory cells 105-d, 105-e that support memory cells with asymmetrical electrode interfaces in accordance with examples of the present disclosure. Memory cells 105-d, 105-e provide examples of asymmetric geometries in which the widths of the top and bottom surfaces of the self-selecting memory component are narrower than the width of a middle portion of the self-selecting memory component. Memory cells 105-d and 105-e have self-selecting memory component profiles that may result in anion crowding at one surface of the self-selecting memory component and cation crowding at the opposite surface, or vice versa, depending on the polarity of the operation.[0191] The self-selecting memory component 220-g of memory cell 105-d may have a barrel-like tapered profile, with a wider width 1305 near the middle of the self-selecting memory component 220-g, and narrower widths 1310, 1315 near the surfaces of the self-selecting memory component 220-g that are coupled with electrodes 205-g, 205-h. In some cases, the width 1310 is similar to the width 1315. In some cases, the width 1310 is different than the width 1315. Self-selecting memory component 220-g may be coupled to access lines via electrodes 205-g, 205-h, for example.[0192] The self-selecting memory component 220-h of memory cell 105-e may have a stepped profile having a first (middle) portion 1320 with a wider width 1325 relative to second and third portions 1330, 1335 that have narrower widths 1340, 1345 near the top and bottom surfaces of self-selecting memory component 220-h. In this example, the second and third portions 1330, 1335 have different widths 1340, 1345. In other examples, the second and third portions 1330, 1335 may have the same widths 1340, 1345. Self-selecting memory component 220-h may be coupled to access lines via electrodes 205-i, 205-j, for example.[0193] As used herein, the term "virtual ground" refers to a node of an electrical circuit that is held at a voltage of approximately zero volts (0V) but that is not directly connected with ground. Accordingly, the voltage of a virtual ground may temporarily fluctuate and return to approximately 0V at steady state. A virtual ground may be implemented using various electronic circuit elements, such as a voltage divider consisting of operational amplifiers and resistors. Other implementations are also possible. "Virtual grounding" or "virtually grounded" means connected to approximately 0V.[0194] The terms "electronic communication" and "coupled" refer to a relationship between components that supports electron flow between the components. This may include a direct connection between components or may include intermediate components. Components in electronic communication or coupled to one another may be actively exchanging electrons or signals (e.g., in an energized circuit) or may not be actively exchanging electrons or signals (e.g., in a de-energized circuit) but may be configured and operable to exchange electrons or signals upon a circuit being energized. 
By way of example, two components physically connected via a switch (e.g., a transistor) are in electronic communication or may be coupled regardless of the state of the switch (i.e., open or closed).[0195] The term "isolated" refers to a relationship between components in which electrons are not presently capable of flowing between them; components are isolated from each other if there is an open circuit between them. For example, two components physically connected by a switch may be isolated from each other when the switch is open.[0196] As used herein, the term "shorting" refers to a relationship between components in which a conductive path is established between the components via the activation of a single intermediary component between the two components in question. For example, a first component shorted to a second component may exchange electrons with the second component when a switch between the two components is closed. Thus, shorting may be a dynamic operation that enables the flow of charge between components (or lines) that are in electronic communication.[0197] The devices discussed herein, including memory array 100, may be formed on a semiconductor substrate, such as silicon, germanium, silicon-germanium alloy, gallium arsenide, gallium nitride, etc. In some cases, the substrate is a semiconductor wafer. In other cases, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOP), or epitaxial layers of semiconductor materials on another substrate. The conductivity of the substrate, or sub-regions of the substrate, may be controlled through doping using various chemical species including, but not limited to, phosphorous, boron, or arsenic. Doping may be performed during the initial formation or growth of the substrate, by ion-implantation, or by any other doping means.[0198] Chalcogenide materials may be materials or alloys that include at least one of the elements sulfur (S), selenium (Se), and tellurium (Te). Phase change materials discussed herein may be chalcogenide materials. Chalcogenide materials and alloys may include, but are not limited to, Ge-Te, In-Se, Sb-Te, Ga-Sb, In-Sb, As-Te, Al-Te, Ge-Sb-Te, Te-Ge-As, In-Sb-Te, Te-Sn-Se, Ge-Se-Ga, Bi-Se-Sb, Ga-Se-Te, Sn-Sb-Te, In-Sb-Ge, Te-Ge-Sb-S, Te-Ge-Sn-O, Te-Ge-Sn-Au, Pd-Te-Ge-Sn, In-Se-Ti-Co, Ge-Sb-Te-Pd, Ge-Sb-Te-Co, Sb-Te-Bi-Se, Ag-In-Sb-Te, Ge-Sb-Se-Te, Ge-Sn-Sb-Te, Ge-Te-Sn-Ni, Ge-Te-Sn-Pd, or Ge-Te-Sn-Pt. The hyphenated chemical composition notation, as used herein, indicates the elements included in a particular compound or alloy and is intended to represent all stoichiometries involving the indicated elements. For example, Ge-Te may include GexTey, where x and y may be any positive integer. Other examples of variable resistance materials may include binary metal oxide materials or mixed valence oxide including two or more metals, e.g., transition metals, alkaline earth metals, and/or rare earth metals. Embodiments are not limited to a particular variable resistance material or materials associated with the memory elements of the memory cells. For example, other examples of variable resistance materials can be used to form memory elements and may include chalcogenide materials, colossal magnetoresistive materials, or polymer-based materials, among others.[0199] A transistor or transistors discussed herein may represent a field-effect transistor (FET) and comprise a three-terminal device including a source, drain, and gate. 
The terminals may be connected to other electronic elements through conductive materials, e.g., metals. The source and drain may be conductive and may comprise a heavily-doped, e.g., degenerate, semiconductor region. The source and drain may be separated by a lightly-doped semiconductor region or channel. If the channel is n-type (i.e., majority carriers are electrons), then the FET may be referred to as an n-type FET. If the channel is p-type (i.e., majority carriers are holes), then the FET may be referred to as a p-type FET. The channel may be capped by an insulating gate oxide. The channel conductivity may be controlled by applying a voltage to the gate. For example, applying a positive voltage or negative voltage to an n-type FET or a p-type FET, respectively, may result in the channel becoming conductive. A transistor may be "on" or "activated" when a voltage greater than or equal to the transistor's threshold voltage is applied to the transistor gate. The transistor may be "off" or "deactivated" when a voltage less than the transistor's threshold voltage is applied to the transistor gate.[0200] The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term "exemplary" used herein means "serving as an example, instance, or illustration," and not "preferred" or "advantageous over other examples." The detailed description includes specific details for the purpose of providing an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described examples.[0201] In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.[0202] Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.[0203] The various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. 
A processor may also be implemented as a combination of computing devices (e.g., a combination of a digital signal processor (DSP) and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).[0204] The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Also, as used herein, including in the claims, "or" as used in a list of items (for example, a list of items prefaced by a phrase such as "at least one of" or "one or more of") indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase "based on" shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as "based on condition A" may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase "based on" shall be construed in the same manner as the phrase "based at least in part on."[0205] Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, non-transitory computer-readable media can comprise RAM, ROM, electrically erasable programmable read only memory (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. 
Combinations of the above are also included within the scope of computer-readable media.[0206] The description herein is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein, but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein. |
In one embodiment, a host controller is to couple to an interconnect to which a plurality of devices may be coupled. The host controller may include: a first driver to drive first information onto the interconnect; and a first receiver to receive second information comprising parameter information of at least one of the plurality of devices from the interconnect. The host controller may further include an integrity control circuit to receive the parameter information of the at least one of the plurality of devices and dynamically update at least one capability of the host controller based at least in part on the parameter information. Other embodiments are described and claimed. |
What is claimed is:1. An apparatus comprising:a host controller to couple to an interconnect to which a plurality of devices may be coupled, the host controller including:a first driver to drive first information onto the interconnect; a first receiver to receive second information comprising parameter information of at least one of the plurality of devices from the interconnect; andan integrity control circuit to receive the parameter information of the at least one of the plurality of devices and dynamically update at least one capability of the host controller based at least in part on the parameter information. 2. The apparatus of claim 1, wherein the integrity control circuit is to dynamically update a configuration of a first current source to couple to the first driver based at least in part on the parameter information. 3. The apparatus of claim 1, wherein the at least one capability of the host controller comprises one or more of a delay configuration, a buffer impedance, and a slew rate. 4. The apparatus of claim 1, wherein the integrity control circuit is to dynamically update the at least one capability of the host controller when at least one device of the plurality of devices is coupled to the interconnect or de-coupled from the interconnect. 5. The apparatus of claim 1, wherein the integrity control circuit is to access a table based at least in part on the parameter information and obtain control information to update the at least one capability of the host controller. 6. The apparatus of claim 1, wherein the integrity control circuit is to dynamically calculate the at least one capability of the host controller based at least in part on the parameter information. 7. The apparatus of claim 1, wherein the integrity control circuit is to receive an indication of a new device to couple to the interconnect and, in response to parameter information of the new device, to prevent the new device from being coupled to the interconnect. 8. The apparatus of claim 1, wherein a first device of the plurality of devices is to be always connected to the interconnect and powered on during operation of a system. 9. The apparatus of claim 8, wherein a second device of the plurality of devices is to be always connected to the interconnect and dynamically power controlled during operation of the system. 10. The apparatus of claim 1, wherein the parameter information comprises parasitic information of the at least one device. 11. A method comprising:obtaining, via a host controller, device information from one or more devices coupled to an interconnect;calculating one or more configuration values for the host controller based on the device information; anddynamically updating one or more configuration parameters of the host controller based on the one or more configuration values. 12. The method of claim 11, further comprising:identifying a new device to be coupled to the interconnect; andobtaining device information of the new device. 13. The method of claim 12, further comprising:determining whether the new device is allowed to be coupled to the interconnect, based at least in part on the device information of the new device; andresponsive to determining that the new device is allowed to be coupled to the interconnect, sending a message to the new device to enable the new device to be coupled to the interconnect.14. 
The method of claim 12, further comprising:determining to prevent the new device from being coupled to the interconnect, based at least in part on the device information of the new device; andresponsive to determining that the new device is prevented from being coupled to the interconnect, sending a message to the new device to prevent the new device from being coupled to the interconnect. 15. The method of claim 11, wherein dynamically updating one or more configuration parameters of the host controller comprises sending control signals to one or more switches of the host controller, to cause a first current source coupled between a supply voltage and an output driver of the host controller to be dynamically configured. 16. The method of claim 11, further comprising sending a first message from the host controller to a first device to request the device information from the first device, the first device storing the device information in at least one register of the first device. 17. The method of claim 11, further comprising accessing a lookup table, via the host controller, based at least in part on the device information to obtain the one or more configuration parameters. 18. A computer-readable storage medium including computer-readable instructions, when executed, to implement a method as claimed in any one of claims 11 to 17. 19. An apparatus comprising means to perform a method as claimed in any one of claims 11 to 17. 20. A system comprising:a first device coupled to a host controller via a bus, wherein the first device includes at least one first storage to store first device information regarding one or more parasitic loading parameters of the first device;a second device coupled to the host controller via the bus, wherein the second device includes a power controller to couple the second device to the bus when the second device is active and otherwise to de-couple the second device from the bus, the second device further including at least one second storage to store second device information regarding one or more parasitic loading parameters of the second device; andthe host controller having a control circuit to receive the first device information and the second device information and dynamically update at least one configuration parameter of the host controller based thereon. 21. The system of claim 20, wherein the host controller comprises:a first driver to drive first information onto the bus, the first driver to couple to a first current source; anda first receiver to receive the first device information and the second device information, the first receiver to couple to a second current source, wherein the host controller is to dynamically control a configuration of at least one of the first current source and the second current source based at least in part on the first device information and the second device information. 22. The system of claim 20, further comprising a third device to dynamically couple to the bus, wherein the host controller is to determine whether to allow the third device to be coupled to the bus, based at least in part on third device information regarding one or more parasitic loading parameters of the third device. 23. 
An apparatus comprising:host control means for coupling to an interconnect to which a plurality of devices may be coupled, the host control means including:a first driver means for driving first information onto the interconnect;a first receiver means for receiving second information comprising parameter information of at least one of the plurality of devices from the interconnect; andan integrity control means for receiving the parameter information of the at least one of the plurality of devices and dynamically updating at least one capability of the host control means based at least in part on the parameter information.24. The apparatus of claim 23, wherein the integrity control means is to dynamically update a configuration of a first current source to couple to the first driver means based at least in part on the parameter information. 25. The apparatus of claim 23, wherein the at least one capability of the host control means comprises one or more of a delay configuration, a buffer impedance, and a slew rate, and the integrity control means is to dynamically update the at least one capability of the host control means when at least one device of the plurality of devices is coupled to the interconnect or de-coupled from the interconnect. |
METHOD, APPARATUS AND SYSTEM FOR DYNAMIC OPTIMIZATION OF SIGNAL INTEGRITY ON A BUSTechnical Field[0001] Embodiments relate to optimization of bus structures. Background[0002] Many different types of known buses and other interfaces are used to connect different components using a wide variety of interconnection topologies. For example, on-chip buses are used to couple different on-chip components of a given integrated circuit (IC) such as a processor, system on a chip or so forth. External buses can be used to couple different components of a given computing system either by way of interconnect traces on a circuit board such as a motherboard, wires and so forth.[0003] One recent interface technology is an I3C bus according to an I3C Specification, expected to become available from the Mobile Industry Processor Interface (MIPI) Alliance™ (www.mipi.org). This interface is expected to be used to serially connect devices, such as internal or external sensors or so forth, to a host processor, applications processor or standalone device via a host controller or input/output controller. Typically, characteristics of the controller and the bus itself are designed for a worst case scenario, which assumes that each device coupled to the bus exhibits maximum characteristics for various parameters including parasitic capacitance loading, leakage current and so forth. As a result, many systems are over-designed by a system designer, unnecessarily consuming more power, extra circuitry, and board real estate, and potentially reducing performance.Brief Description of the Drawings[0004] FIG. 1 is a block diagram of a system in accordance with an embodiment of the present invention.[0005] FIG. 2 is a flow diagram of a method in accordance with an embodiment of the present invention.[0006] FIG. 3 is a flow diagram of a method in accordance with another embodiment of the present invention. [0007] FIG. 4 is an embodiment of a fabric composed of point-to-point links that interconnect a set of components.[0008] FIG. 5 is an embodiment of a system-on-chip design in accordance with an embodiment.[0009] FIG. 6 is a block diagram of a system in accordance with an embodiment of the present invention.Detailed Description[0010] In various embodiments, techniques are provided to optimize bus speed, bus data throughput, and/or power for a bus structure such as a multi-drop bus. Although the scope of the present invention is not limited in this regard, example buses may include a multi-drop bus such as a bus in accordance with the forthcoming I3C specification.[0011] To enable optimization techniques herein, a master such as a host controller, bus master, and/or main master, may be provided with knowledge of certain device information, e.g., including parasitic information (such as leakage and load capacitance) of each device coupled to the bus. Additionally, embodiments provide techniques to re-optimize characteristics such as speed, output swing, rise/fall times, duty cycle, and/or device power as topology changes due to dynamically added/removed devices. Embodiments are applicable to internal buses and external buses such as an external connector.[0012] To obtain device information, in an embodiment a host controller (e.g., bus master) on a multi-drop bus may be configured to read device information, e.g., from registers included within slave devices coupled to the bus. In an embodiment, such devices may include one or more configuration registers or other registers to store information about parasitic loading such as leakage current and pin capacitance. The host controller may receive such information from all connected devices and, based at least in part thereon, determine one or more parameters of the host controller itself and/or bus to optimize its capabilities. Such capabilities may include, but are not limited to, power, bus rate, slew-rate, delay, etc.
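By way of further illustration only, the following C sketch models one possible shape for the per-device parasitic-parameter registers just described and a host-side read of them. The register offsets, field widths, units, and the read_reg() transport stub are assumptions introduced for this example; neither the I2C/I3C specifications nor this disclosure defines this exact layout.

/* Minimal sketch of per-device parasitic-parameter registers and a host
 * read path. Offsets, units and the transport stub are hypothetical. */
#include <stdint.h>
#include <stdio.h>

struct device_params {
    uint16_t leakage_na;   /* pin leakage current, in nanoamps    */
    uint16_t pin_cap_ff;   /* pin capacitance, in femtofarads     */
    uint8_t  drive_class;  /* vendor-coded drive capability class */
};

/* Stand-in for a real bus transaction (e.g., a private register read);
 * here it returns canned values for one simulated slave. */
static uint8_t read_reg(uint8_t addr, uint8_t reg)
{
    static const uint8_t regs[6] = { 0x00, 0x64,   /* leakage = 100 nA */
                                     0x0b, 0xb8,   /* cap = 3000 fF    */
                                     0x02, 0x00 }; /* drive class 2    */
    (void)addr;
    return (reg >= 0x10 && reg < 0x16) ? regs[reg - 0x10] : 0;
}

static struct device_params read_device_params(uint8_t addr)
{
    struct device_params p;
    p.leakage_na  = (uint16_t)((read_reg(addr, 0x10) << 8) | read_reg(addr, 0x11));
    p.pin_cap_ff  = (uint16_t)((read_reg(addr, 0x12) << 8) | read_reg(addr, 0x13));
    p.drive_class = read_reg(addr, 0x14);
    return p;
}

int main(void)
{
    struct device_params p = read_device_params(0x50);
    printf("leakage=%u nA, cap=%u fF, drive class=%u\n",
           (unsigned)p.leakage_na, (unsigned)p.pin_cap_ff, (unsigned)p.drive_class);
    return 0;
}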
[0013] Referring now to FIG. 1, shown is a block diagram of a system in accordance with an embodiment of the present invention. As shown in FIG. 1, a portion of a system 100 includes a host controller 110 coupled to a plurality of devices 140A-140C via a multi-drop bus 130. Devices 140 (also referred to herein as "slaves") may have different parasitic and other characteristics and also may have different capabilities of being added/removed from bus 130. Different combinations of connected/active devices 140 may change the bus total capacitance and leakage.[0014] Host controller 110 may be configured to control data and clock signal integrity, as well as use, e.g., internal current sources to hold the bus when all devices are off. In some cases, host controller 110 may be a relatively simple host controller for a low complexity bus or other multi-drop bus, such as in accordance with an I2C or I3C Specification. Other multidrop interfaces such as Serial Peripheral Interface and/or Microwire also may be present in particular embodiments.[0015] As will be described herein, host controller 110 may be configured as a bus master, in at least certain operational phases. As bus master, host controller 110 may receive parameter information from one or more of devices 140 during dynamic operation, e.g., as a bus is reset, and/or as one or more devices are added onto the bus dynamically. Based at least in part on such parameter information, host controller 110 may optimize various aspects of its own configuration, e.g., to improve signal integrity on bus 130, reduce power consumption within host controller 110 or otherwise optimize operation, as will be described in detail herein. Understand that while described with a limited number of characteristics/parameters, techniques can be scaled beyond signal integrity and bus behavior to other device/system parameters and bus behavior, such as output swing, rise/fall time, duty cycle, bus speed and device power.[0016] Before discussing such optimization activities, note that bus 130 is implemented as a two-wire bus in which a single serial line forms a data interconnect and another single serial line forms a clock interconnect. As such, data and clock communications can occur, e.g., in bidirectional manner.[0017] At the high level illustrated in FIG. 1, assume that different types of devices 140 are present. Specifically, device 140A may be always powered on and present as being coupled to bus 130. As an example, device 140A may be a given type of sensor, such as an accelerometer or other sensor which may be incorporated in a given system (such as a smartphone or other mobile platform). As illustrated, device 140A may be powered by connection to a supply voltage (Vdd1). For purposes of discussion herein, assume that device 140A operates as a slave to host controller 110.[0018] Referring now to second device 140B, such device may be power controlled via a power controller 145B. 
In different cases, power controller 145B may be incorporated within device 140B or may be managed as a separate power controller (such as a platform power controller, e.g., a power management integrated circuit (PMIC)), which may control device 140B to be powered (via connection to Vdd2) when device 140B is to be active. As an example, assume that device 140B is another type of sensor, such as a camera device. In such example, device 140B may be powered on only when a camera functionality of the system is active.[0019] In turn, device 140C may be power controlled via a power controller 145C, which may be incorporated within device 140C or a separate power controller, to control device 140C to be powered (via connection to Vdd3), e.g., when device 140C is coupled to bus 130. That is, device 140C may be a slave device that can be physically added/removed via a hot plug or hot unplug operation. As examples, device 140C may be a cable, card, or external peripheral device.[0020] As one example configuration parameter, host controller 110 may dynamically control/configure one or more current source devices to hold bus 130 at a logic high level against a total leakage current of all devices coupled to bus 130. By dynamically controlling such current source(s) (which may be external or integrated within host controller 110), power consumption can be optimized based at least in part on the actual total leakage current of the sum of the devices. Additional configuration parameters for host controller 110 may further include dynamic control of pull-up and/or pull-down resistors. In addition, input parameters of receive functionality within host controller 110, such as hysteresis, voltage input levels, clamping voltage levels, and so forth, also may be dynamically controlled based on the device information. Such input parameters of host controller 110 may be updated based on slave device configuration and/or register settings. Still further, host controller 110 may dynamically optimize the current source(s) to compensate for addition/removal of devices. It could also disallow a potential hot-plug device from attaching to bus 130 (e.g., by not initializing it with an address) if the leakage of the to-be-attached device is too large for the existing system. [0021] As another example configuration parameter, host controller 110 may dynamically control/configure an output buffer impedance and/or slew rate based at least in part on total system capacitance loading. To this end, host controller 110 may be configured to optimize one or more buffer parameters, output swing and impedance, output rise/fall time, data duty cycle, bus speed and device power for an initial power setting, and further to dynamically update such parameters after addition/removal of devices in the bus topology.[0022] As a still further example configuration parameter, host controller 110 may dynamically control/configure one or more delay paths, such as by way of addition/removal of delay elements in read/write paths, for optimization based at least in part on different parasitic loading presented by devices coupled to bus 130. Host controller 110 may be configured to optimize the delays for an initial power setting, and further to dynamically update such parameters after addition/removal of devices in the bus topology.
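To make the pull-up sizing concrete, the sketch below sums the reported leakage of all connected devices and enables just enough legs of a hypothetical parallel-resistor current source to hold the bus high with margin. The leg currents, the 50% margin factor, and the example totals are invented for illustration and would in practice come from the actual electrical design.

/* Hypothetical sizing of current source I1: enable the fewest parallel
 * legs whose combined current exceeds total bus leakage plus margin.
 * Leg currents and the 50% margin are illustrative values only. */
#include <stdio.h>

#define NUM_LEGS 4

/* Current contributed by each switchable leg, in microamps. */
static const unsigned leg_ua[NUM_LEGS] = { 10, 20, 40, 80 };

/* Returns a bitmask of legs to enable, or -1 if even all legs cannot
 * hold the bus against the reported leakage. */
static int size_current_source(unsigned total_leakage_ua)
{
    unsigned target = total_leakage_ua + total_leakage_ua / 2; /* +50% margin */
    unsigned best_mask = 0, best_sum = 0;

    for (unsigned mask = 0; mask < (1u << NUM_LEGS); mask++) {
        unsigned sum = 0;
        for (int i = 0; i < NUM_LEGS; i++)
            if (mask & (1u << i))
                sum += leg_ua[i];
        /* smallest adequate current wins: less static power burned */
        if (sum >= target && sum > 0 && (best_sum == 0 || sum < best_sum)) {
            best_sum = sum;
            best_mask = mask;
        }
    }
    return best_sum ? (int)best_mask : -1;
}

int main(void)
{
    /* e.g., three attached slaves reporting 12 uA, 30 uA and 5 uA */
    unsigned total = 12 + 30 + 5;
    int mask = size_current_source(total);
    if (mask < 0)
        printf("leakage %u uA exceeds hold capability\n", total);
    else
        printf("leakage %u uA -> enable leg mask 0x%x\n", total, mask);
    return 0;
}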
[0023] As illustrated in FIG. 1, host controller 110 includes a processing circuit 112. Understand that many different types of host controllers can be provided. As examples, host controller 110 may be an interface circuit of a multicore processor or other system on chip (SoC), application processor or so forth. In other cases, host controller 110 may be a standalone host controller for bus 130. And of course other implementations are possible. In different implementations, processing circuit 112 may represent one or more cores or other hardware processing logic of a particular device or it may simply be part of an interface circuit to act as transmitter and receiver for host controller 110. In turn, processing circuit 112 couples to a driver 113 that drives data onto bus 130 and a receiver 114 that receives incoming data via a data line of bus 130.[0024] To this end, to enable data to be driven and received, a first current source I1 couples to bus 130 at a trace of host controller 110. Current source I1 may couple to a given supply voltage as an open drain connection. In an embodiment, current source I1 may be implemented as a controllable resistance (such as a parallel set of resistors) controllably selectable, e.g., via switches such as metal oxide semiconductor field effect transistors (MOSFETs). By way of control as described herein, a given programmable resistance may thus couple between a voltage rail and, e.g., driver 113. In one embodiment, driver 113 may be implemented to include a MOSFET having a gate driven by internal logic within host controller 110 to control the output voltage, a drain coupled to bus 130 and a source coupled to ground (details of this connection are not shown for ease of illustration in FIG. 1).[0025] To provide a clock signal (and/or to receive a clock signal, in implementations for certain buses), a clock control circuit 115 couples to a clock line of bus 130 via corresponding driver 116 and receiver 117. In turn, another current source I2 may be similarly configured to enable programmable control of parameters on the clock line of bus 130.[0026] Host controller 110 further includes an integrity control circuit 120. In various embodiments, integrity control circuit 120 may be configured to perform the dynamic optimization of host controller configuration. Still further, as described herein, integrity control circuit 120 may be configured as a bus topology arbiter to determine whether a newly added device on bus 130 is allowed to be maintained on the bus, to ensure correct operation as to a current or available configuration. Integrity control circuit 120 may receive parameter information from devices on bus 130 and, based at least in part on this information, determine control parameters for host controller 110. In an embodiment, based on analysis of the received parameter information, integrity control circuit 120 may access one or more lookup tables to identify appropriate settings, for example as sketched below. In other cases, integrity control circuit 120 may execute one or more algorithms to dynamically calculate optimized settings. Understand that while shown at this high level in the embodiment of FIG. 1, many variations and alternatives are possible.
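One way integrity control circuit 120 might map aggregate loading to settings is a simple lookup table, as the next sketch shows. The capacitance bins and the particular pull-up, slew, and delay codes are placeholders introduced for illustration, since the disclosure leaves the table contents to the implementation.

/* Hypothetical lookup table for integrity control circuit 120: total bus
 * capacitance (in pF) selects a row of controller settings. All values
 * are placeholders; a real table comes from electrical characterization. */
#include <stdio.h>

struct si_settings {
    unsigned max_cap_pf;   /* row applies while total capacitance <= this */
    unsigned pullup_code;  /* current source / pull-up strength code      */
    unsigned slew_code;    /* output buffer slew-rate code                */
    unsigned delay_taps;   /* delay elements inserted in read/write paths */
};

static const struct si_settings lut[] = {
    {  5, 1, 3, 0 },   /* light loading: weak pull-up, fast edges   */
    { 15, 2, 2, 1 },
    { 30, 3, 1, 2 },
    { 50, 4, 0, 3 },   /* heavy loading: strong pull-up, slow edges */
};

static const struct si_settings *lookup(unsigned total_cap_pf)
{
    for (size_t i = 0; i < sizeof lut / sizeof lut[0]; i++)
        if (total_cap_pf <= lut[i].max_cap_pf)
            return &lut[i];
    return NULL; /* loading beyond any supported configuration */
}

int main(void)
{
    const struct si_settings *s = lookup(5); /* the 5 pF example of block 230 */
    if (s)
        printf("pullup=%u slew=%u delay=%u\n",
               s->pullup_code, s->slew_code, s->delay_taps);
    return 0;
}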
[0027] Referring now to FIG. 2, shown is a flow diagram of a method in accordance with an embodiment of the present invention. More specifically, method 200 shown in FIG. 2 is a method for performing dynamic signal integrity control of a multi-drop bus in accordance with an embodiment. In various embodiments, method 200 may be performed by hardware, software and/or firmware (or combinations thereof) such as an integrity control circuit of a host controller. As illustrated, method 200 begins by reading (or otherwise obtaining), by the host controller, device information (e.g., one or more parameters) from a new device on the bus (block 210). Although the scope of the present invention is not limited in this regard, such device information may relate to, e.g., pin leakage information, pin capacitance information, drive capabilities or so forth. As an example, a host controller as bus master may issue a request to a newly joined device to obtain such information. More specifically, each such device may include one or more configuration storages such as registers to store this information, as fused into the device (such as a given sensor, camera or other IP block) by a manufacturer.[0028] Still with reference to FIG. 2, it can be determined at diamond 220 whether additional devices are on the bus that the host controller has not read. If so, control passes back to block 210. Otherwise, at block 230 the host controller may determine optimal bus signal integrity. Different manners of dynamically calculating one or more configuration values to realize optimal bus signal integrity can occur in different embodiments. For example, with regard to pin capacitance, assume that based on such information from all identified devices a total pin capacitance on the bus is at a first level (e.g., five picofarads (pF)). As such, this value may be used to access configuration parameters stored in a lookup table. Such configuration parameters may include, e.g., current source settings to enable an appropriate current source to be provided. In other embodiments, instead of a lookup table operation, dynamic calculations may be performed to determine appropriate settings by way of using obtained parameter information in one or more equations or other heuristic models.[0029] Next with reference to FIG. 2, one or more configuration parameters within the host controller may be updated to optimize platform signal integrity (block 240). As examples, one or more current sources can be configured for open-drain pull up resistance control, and/or output buffer impedance/slew rate can be adjusted. Still further, in some cases delays on write/read paths can be updated, among others.[0030] Thus at this point an optimization of signal integrity on a multi-drop bus has been realized, along with potential optimization of host controller operation, including reductions in power consumption. Note that during system operation, dynamic changes may occur on the bus, including the addition and/or removal of one or more devices. As such, during operation it can be determined at diamond 250 whether any such changes have occurred on the bus, namely whether one or more devices have been added or removed with regard to the bus topology. If so, control passes back to block 210 discussed above. Otherwise no further action occurs. Understand that while shown at this high level in the embodiment of FIG. 2, many variations and alternatives are possible.
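Putting the pieces of FIG. 2 together, the following sketch is one possible shape for the overall control loop: enumerate devices (block 210 and diamond 220), aggregate their parameters (block 230), apply the result (block 240), and re-run on topology change (diamond 250). The device list, event source, and apply step are stubs standing in for real controller hardware, not an implementation defined by this disclosure.

/* A schematic rendering of method 200; all bus/hardware interactions are
 * stubbed out, and the structures echo the earlier hypothetical sketches. */
#include <stdbool.h>
#include <stdio.h>

struct dev_info { unsigned leakage_ua, cap_pf; };

/* Stub: three devices currently enumerated on the bus (block 210/220). */
static unsigned enumerate_devices(struct dev_info out[], unsigned max)
{
    static const struct dev_info devs[] = { {12, 3}, {30, 8}, {5, 2} };
    unsigned n = sizeof devs / sizeof devs[0];
    if (n > max) n = max;
    for (unsigned i = 0; i < n; i++) out[i] = devs[i];
    return n;
}

/* Stub: push settings into the controller's registers (block 240). */
static void apply_settings(unsigned leak_ua, unsigned cap_pf)
{
    printf("reconfigure: total leakage %u uA, total capacitance %u pF\n",
           leak_ua, cap_pf);
}

/* Stub: would block on a hot-plug/unplug interrupt (diamond 250). */
static bool wait_topology_change(void) { return false; /* run once here */ }

int main(void)
{
    do {
        struct dev_info devs[16];
        unsigned n = enumerate_devices(devs, 16);
        unsigned leak = 0, cap = 0;
        for (unsigned i = 0; i < n; i++) {  /* block 230: aggregate */
            leak += devs[i].leakage_ua;
            cap  += devs[i].cap_pf;
        }
        apply_settings(leak, cap);
    } while (wait_topology_change());       /* diamond 250 */
    return 0;
}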
[0031] As an example, an integrity control circuit as described herein may further, in bus mastering mode, determine, when a device is dynamically added to the bus, whether such device is allowed to be actively connected to the bus, e.g., based on various metrics. Referring now to FIG. 3, shown is a flow diagram of a method in accordance with another embodiment of the present invention. In various embodiments, method 300 may be performed by hardware, software and/or firmware (or combinations thereof) such as an integrity control circuit of a host controller. As illustrated, it may be determined at diamond 310 whether a new device is added to the bus. Note that this determination may thus correspond to diamond 250 of FIG. 2. If a new device is added, device information may be obtained (block 320), as described above. Thereafter at block 330 the integrity control circuit (and more specifically a bus topology arbiter circuit) may determine whether the new device is allowed to actively connect to the bus. Note that this determination may be based at least in part on the device information provided by the new device, in consideration of the one or more devices already present on the bus. For example, with the additional pin capacitance, pin leakage, drive capabilities or other parameters of the new device, signal integrity may be harmed.[0032] If a determination is made that the new device is not to be allowed (as determined at diamond 340), control passes to block 350 where the device may be caused to be disconnected from the bus. As an example, the integrity control circuit may send a disconnect command to cause this new slave device to become disconnected from the bus, at least for a given time period (before the slave device may again request to join the bus). Or in other cases, the integrity control circuit may autonomously send a join invitation to this added device, e.g., when another device exits from the bus, or due to other changes to dynamic bus operation.[0033] Otherwise if it is determined that the new device is allowed to join the bus, configuration operations to determine an optimal signal integrity for the bus and update one or more parameters of the host controller (e.g., as discussed above at blocks 230 and 240) may be performed (block 360). Then at block 370, the new device is enabled to couple to the bus, e.g., by providing the device with an available address. Understand that while shown at this high level in the embodiment of FIG. 3, of course many variations and alternatives are possible.
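The admission decision of diamond 340 can likewise be sketched as a budget check: project the bus totals with the candidate device included, and either assign it an address or command a disconnect. The budget numbers and the two action stubs are, again, assumptions made purely for illustration.

/* Hypothetical bus-topology arbiter for FIG. 3: admit the new device only
 * if the projected totals stay within illustrative electrical budgets. */
#include <stdbool.h>
#include <stdio.h>

#define MAX_BUS_LEAKAGE_UA 150u  /* placeholder hold-current budget */
#define MAX_BUS_CAP_PF      50u  /* placeholder capacitance budget  */

struct dev_info { unsigned leakage_ua, cap_pf; };

static void assign_address(void)  { printf("block 370: address assigned\n"); }
static void send_disconnect(void) { printf("block 350: disconnect sent\n"); }

/* Diamond 340: would adding 'cand' push the bus past its budget? */
static bool admit(unsigned cur_leak_ua, unsigned cur_cap_pf,
                  const struct dev_info *cand)
{
    return cur_leak_ua + cand->leakage_ua <= MAX_BUS_LEAKAGE_UA &&
           cur_cap_pf  + cand->cap_pf     <= MAX_BUS_CAP_PF;
}

int main(void)
{
    struct dev_info newdev = { 40, 12 };   /* block 320: reported params */
    unsigned bus_leak = 120, bus_cap = 30; /* totals for devices present */

    if (admit(bus_leak, bus_cap, &newdev))
        assign_address();   /* then re-run blocks 230/240 (block 360) */
    else
        send_disconnect();  /* device may retry to join later */
    return 0;
}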
[0034] Embodiments thus provide techniques to dynamically optimize a host controller and/or bus based on actual bus topology. As such, embodiments provide a system designer flexibility for multi-drop buses to optimize a signal integrity design for dynamically changing parasitic loads. In this way, over-designing for potential hot-plugged devices is eliminated. Embodiments may be particularly applicable to low power computing systems such as smartphones or other small mobile devices, to reduce power consumption by optimizing host controller parameters (such as buffer power) when one or more slave devices are powered down (or completely removed) from the multi-drop bus.[0035] Embodiments may be implemented in a wide variety of interconnect structures. Referring to FIG. 4, an embodiment of a fabric composed of point-to-point links that interconnect a set of components is illustrated. System 400 includes processor 405 and system memory 410 coupled to controller hub 415. Processor 405 includes any processing element, such as a microprocessor, a host processor, an embedded processor, a co-processor, or other processor. Processor 405 is coupled to controller hub 415 through front-side bus (FSB) 406. In one embodiment, FSB 406 is a serial point-to-point interconnect. In another embodiment, link 406 includes a serial, differential interconnect architecture that is compliant with different interconnect standards, and which may couple with one or more host controllers to perform dynamic signal integrity control as described herein.[0036] System memory 410 includes any memory device, such as random access memory (RAM), non-volatile (NV) memory, or other memory accessible by devices in system 400. System memory 410 is coupled to controller hub 415 through memory interface 416. Examples of a memory interface include a double-data rate (DDR) memory interface, a dual-channel DDR memory interface, and a dynamic RAM (DRAM) memory interface.[0037] In one embodiment, controller hub 415 is a root hub, root complex, or root controller in a PCIe interconnection hierarchy. Examples of controller hub 415 include a chipset, a memory controller hub (MCH), a northbridge, an interconnect controller hub (ICH), a southbridge, and a root controller/hub. Often the term chipset refers to two physically separate controller hubs, i.e., a memory controller hub (MCH) coupled to an interconnect controller hub (ICH). Note that current systems often include the MCH integrated with processor 405, while controller 415 is to communicate with I/O devices, in a similar manner as described below. In some embodiments, peer-to-peer routing is optionally supported through root complex 415.[0038] Here, controller hub 415 is coupled to switch/bridge 420 through serial link 419. Input/output modules 417 and 421, which may also be referred to as interfaces/ports 417 and 421, include/implement a layered protocol stack to provide communication between controller hub 415 and switch 420. In one embodiment, multiple devices are capable of being coupled to switch 420.[0039] Switch/bridge 420 routes packets/messages from device 425 upstream, i.e., up a hierarchy towards a root complex, to controller hub 415, and downstream, i.e., down a hierarchy away from a root controller, from processor 405 or system memory 410 to device 425. Switch 420, in one embodiment, is referred to as a logical assembly of multiple virtual PCI-to-PCI bridge devices. Device 425 includes any internal or external device or component to be coupled to an electronic system, such as an I/O device, a Network Interface Controller (NIC), an add-in card, an audio processor, a network processor, a hard-drive, a storage device, a CD/DVD ROM, a monitor, a printer, a mouse, a keyboard, a router, a portable storage device, a Firewire device, a Universal Serial Bus (USB) device, a scanner, and other input/output devices, and which may be coupled via an I3C bus, as an example. Often in the PCIe vernacular, such a device is referred to as an endpoint. Although not specifically shown, device 425 may include a PCIe to PCI/PCI-X bridge to support legacy or other version PCI devices. Endpoint devices in PCIe are often classified as legacy, PCIe, or root complex integrated endpoints.[0040] Graphics accelerator 430 is also coupled to controller hub 415 through serial link 432. In one embodiment, graphics accelerator 430 is coupled to an MCH, which is coupled to an ICH. Switch 420, and accordingly I/O device 425, is then coupled to the ICH. I/O modules 431 and 418 are also to implement a layered protocol stack to communicate between graphics accelerator 430 and controller hub 415. A graphics controller or the graphics accelerator 430 itself may be integrated in processor 405.[0041] Turning next to FIG. 
5, an embodiment of a SoC design in accordance with an embodiment is depicted. As a specific illustrative example, SoC 500 may be configured for insertion in any type of computing device, ranging from portable device to server system. Here, SoC 500 includes two cores, 506 and 507. Cores 506 and 507 may conform to an Instruction Set Architecture, such as an Intel® Architecture Core™-based processor, an Advanced Micro Devices, Inc. (AMD) processor, a MIPS-based processor, an ARM-based processor design, or a customer thereof, as well as their licensees or adopters. Cores 506 and 507 are coupled to cache control 508 that is associated with bus interface unit 509 and L2 cache 510 to communicate with other parts of system 500 via an interconnect 512.[0042] Interconnect 512 provides communication channels to the other components, such as a Subscriber Identity Module (SIM) 530 to interface with a SIM card, a boot ROM 535 to hold boot code for execution by cores 506 and 507 to initialize and boot SoC 500, an SDRAM controller 540 to interface with external memory (e.g., DRAM 560), a flash controller 545 to interface with non-volatile memory (e.g., flash 565), a peripheral controller 550 (e.g., an eSPI interface) to interface with peripherals, video codecs 520 and video interface 525 to display and receive input (e.g., touch enabled input), GPU 515 to perform graphics related computations, etc. Any of these interconnects/interfaces may incorporate aspects described herein, including dynamic control of configurations and capabilities to improve signal integrity as described herein. In addition, the system illustrates peripherals for communication, such as a Bluetooth module 570, 3G modem 575, GPS 580, and WiFi 585. Also included in the system is a power controller 555.[0043] Referring now to FIG. 6, shown is a block diagram of a system in accordance with an embodiment of the present invention. As shown in FIG. 6, multiprocessor system 600 includes a first processor 670 and a second processor 680 coupled via a point-to-point interconnect 650. As shown in FIG. 6, each of processors 670 and 680 may be multicore processors including representative first and second processor cores (i.e., processor cores 674a and 674b and processor cores 684a and 684b).[0044] Still referring to FIG. 6, first processor 670 further includes a memory controller hub (MCH) 672 and point-to-point (P-P) interfaces 676 and 678. Similarly, second processor 680 includes an MCH 682 and P-P interfaces 686 and 688. As shown in FIG. 6, MCHs 672 and 682 couple the processors to respective memories, namely a memory 632 and a memory 634, which may be portions of system memory (e.g., DRAM) locally attached to the respective processors. First processor 670 and second processor 680 may be coupled to a chipset 690 via P-P interconnects 662 and 664, respectively. As shown in FIG. 6, chipset 690 includes P-P interfaces 694 and 698.[0045] Furthermore, chipset 690 includes an interface 692 to couple chipset 690 with a high performance graphics engine 638, by a P-P interconnect 639. As shown in FIG. 6, various input/output (I/O) devices 614 may be coupled to first bus 616, along with a bus bridge 618 which couples first bus 616 to a second bus 620. Various devices may be coupled to second bus 620 including, for example, a keyboard/mouse 622, communication devices 626 and a data storage unit 628 such as a disk drive or other mass storage device which may include code 630, in one embodiment. Further, an audio I/O 624 may be coupled to second bus 620. 
Any of the devices shown in FIG. 6 may be configured to perform dynamic signal integrity control for one or more of the interconnect structures, as described herein.[0046] The following Examples pertain to further embodiments.[0047] In one example, an apparatus includes a host controller to couple to an interconnect to which a plurality of devices may be coupled. In this example, the host controller includes: a first driver to drive first information onto the interconnect; a first receiver to receive second information comprising parameter information of at least one of the plurality of devices from the interconnect; and an integrity control circuit to receive the parameter information of the at least one of the plurality of devices and dynamically update at least one capability of the host controller based at least in part on the parameter information.[0048] In an example, the integrity control circuit is to dynamically update a configuration of a first current source to couple to the first driver based at least in part on the parameter information.[0049] In an example, the at least one capability of the host controller comprises one or more of a delay configuration, a buffer impedance, and a slew rate.[0050] In an example, the integrity control circuit is to dynamically update the at least one capability of the host controller when at least one device of the plurality of devices is coupled to the interconnect or de-coupled from the interconnect. [0051] In an example, the integrity control circuit is to access a table based at least in part on the parameter information and obtain control information to update the at least one capability of the host controller.[0052] In an example, the integrity control circuit is to dynamically calculate the at least one capability of the host controller based at least in part on the parameter information.[0053] In an example, the integrity control circuit is to receive an indication of a new device to couple to the interconnect and, in response to parameter information of the new device, to prevent the new device from being coupled to the interconnect.[0054] In an example, a first device of the plurality of devices is to be always connected to the interconnect and powered on during operation of a system.[0055] In an example, a second device of the plurality of devices is to be always connected to the interconnect and dynamically power controlled during operation of the system.[0056] In an example, the parameter information comprises parasitic information of the at least one device.[0057] In another example, a method includes: obtaining, via a host controller, device information from one or more devices coupled to an interconnect; calculating one or more configuration values for the host controller based on the device information; and dynamically updating one or more configuration parameters of the host controller based on the one or more configuration values.[0058] In an example, the method further comprises: identifying a new device to be coupled to the interconnect; and obtaining device information of the new device.[0059] In an example, the method further comprises: determining whether the new device is allowed to be coupled to the interconnect, based at least in part on the device information of the new device; and responsive to determining that the new device is allowed to be coupled to the interconnect, sending a message to the new device to enable the new device to be coupled to the interconnect.[0060] In an example, the method further comprises: 
determining to prevent the new device from being coupled to the interconnect, based at least in part on the device information of the new device; and responsive to determining that the new device is prevented from being coupled to the interconnect, sending a message to the new device to prevent the new device from being coupled to the interconnect.[0061] In an example, dynamically updating one or more configuration parameters of the host controller comprises sending control signals to one or more switches of the host controller, to cause a first current source coupled between a supply voltage and an output driver of the host controller to be dynamically configured.[0062] In an example, the method further comprises sending a first message from the host controller to a first device to request the device information from the first device, the first device storing the device information in at least one register of the first device.[0063] In an example, the method further comprises accessing a lookup table, via the host controller, based at least in part on the device information to obtain the one or more configuration parameters.[0064] In another example, a computer readable medium including instructions is to perform the method of any of the above examples.[0065] In another example, a computer readable medium including data is to be used by at least one machine to fabricate at least one integrated circuit to perform the method of any one of the above examples.[0066] In another example, an apparatus comprises means for performing the method of any one of the above examples.[0067] In another embodiment, a system includes: a first device coupled to a host controller via a bus, where the first device includes at least one first storage to store first device information regarding one or more parasitic loading parameters of the first device; and a second device coupled to the host controller via the bus, where the second device includes a power controller to couple the second device to the bus when the second device is active and otherwise to de-couple the second device from the bus. The second device may further include at least one second storage to store second device information regarding one or more parasitic loading parameters of the second device. In turn, the host controller may have a control circuit to receive the first device information and the second device information and dynamically update at least one configuration parameter of the host controller based thereon.[0068] In an example, the host controller includes: a first driver to drive first information onto the bus, the first driver to couple to a first current source; and a first receiver to receive the first device information and the second device information, the first receiver to couple to a second current source, where the host controller is to dynamically control a configuration of at least one of the first current source and the second current source based at least in part on the first device information and the second device information.[0069] In an example, the system further includes a third device to dynamically couple to the bus, where the host controller is to determine whether to allow the third device to be coupled to the bus, based at least in part on third device information regarding one or more parasitic loading parameters of the third device.[0070] Understand that various combinations of the above examples are possible.[0071] Embodiments may be used in many different types of systems.
For example, in one embodiment a communication device can be arranged to perform the various methods and techniques described herein. Of course, the scope of the present invention is not limited to a communication device, and instead other embodiments can be directed to other types of apparatus for processing instructions, or one or more machine readable media including instructions that, in response to being executed on a computing device, cause the device to carry out one or more of the methods and techniques described herein.[0072] Embodiments may be implemented in code and may be stored on a non-transitory storage medium having stored thereon instructions which can be used to program a system to perform the instructions. Embodiments also may be implemented in data and may be stored on a non-transitory storage medium, which if used by at least one machine, causes the at least one machine to fabricate at least one integrated circuit to perform one or more operations. Still further embodiments may be implemented in a computer readable storage medium including information that, when manufactured into a SoC or other processor, is to configure the SoC or other processor to perform one or more operations. The storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, solid state drives (SSDs), compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.[0073] While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of the present invention. |
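To make the lookup-table flow of the Examples above concrete, the following sketch models a host controller that aggregates reported parasitic loading and rebinds its driver configuration whenever a device is coupled or de-coupled. This is a minimal, hypothetical illustration: the class names, table buckets, and threshold values are invented here and are not taken from the embodiments.

```python
# Hypothetical sketch of dynamic signal-integrity control: gather each
# device's reported parasitic loading, classify the aggregate load, and
# look up new driver settings. All names and values are illustrative.
from dataclasses import dataclass

@dataclass
class DeviceInfo:
    name: str
    pin_capacitance_pf: float   # parasitic loading reported by the device
    trace_length_mm: float

# Illustrative lookup table mapping an aggregate-capacitance bucket to
# driver configuration values (current-source strength, slew rate, delay).
CONFIG_TABLE = {
    "light":  {"drive_strength_ma": 2, "slew_ns": 1.0, "delay_ps": 0},
    "medium": {"drive_strength_ma": 4, "slew_ns": 0.7, "delay_ps": 50},
    "heavy":  {"drive_strength_ma": 8, "slew_ns": 0.4, "delay_ps": 120},
}

def classify(total_pf: float) -> str:
    if total_pf < 10:
        return "light"
    if total_pf < 25:
        return "medium"
    return "heavy"

class HostController:
    def __init__(self) -> None:
        self.devices: list[DeviceInfo] = []
        self.config = CONFIG_TABLE["light"]

    def attach(self, dev: DeviceInfo) -> None:
        """Called when a device couples to the interconnect."""
        self.devices.append(dev)
        self._update()

    def detach(self, dev: DeviceInfo) -> None:
        """Called when a device de-couples from the interconnect."""
        self.devices.remove(dev)
        self._update()

    def _update(self) -> None:
        # Dynamically update capabilities based on the parameter info.
        total = sum(d.pin_capacitance_pf for d in self.devices)
        self.config = CONFIG_TABLE[classify(total)]

host = HostController()
host.attach(DeviceInfo("sensor", 6.0, 20.0))
host.attach(DeviceInfo("touch", 12.0, 35.0))
print(host.config)   # 18 pF total -> "medium" bucket settings
```

A production controller would of course derive its table (or a closed-form calculation) from electrical characterization of the bus, but the control structure, i.e., re-evaluating configuration on every attach and detach, matches the method described in the Examples.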
Embodiments of a system and method for generating an image configured to program a parallel machine from source code are disclosed. One such parallel machine includes a plurality of state machine elements (SMEs) grouped into pairs, such that the SMEs in a pair have a common output. One such method includes converting source code into an automaton comprising a plurality of interconnected states, and converting the automaton into a netlist comprising instances corresponding to states in the automaton, wherein the converting includes pairing states into pairs of SMEs based on the SMEs in a pair having a common output. The netlist can be converted into the image and published. |
1. A computer-implemented method for generating, from source code, an image configured to program a parallel machine, the method comprising: converting the source code into an automaton comprising a plurality of interconnected states; converting the automaton into a netlist comprising instances corresponding to the states of the automaton, wherein the instances correspond to hardware elements of the parallel machine, and wherein converting the automaton into the netlist includes grouping states together based on a physical design of the parallel machine; and converting the netlist into the image.
2. The method of claim 1, wherein the instances include state machine element (SME) instances corresponding to SME hardware elements and SME group instances corresponding to hardware elements comprising a group of SMEs, and wherein grouping includes grouping states into the SME group instances.
3. The method of claim 2, wherein the physical design includes a physical design of the hardware elements comprising a group of SMEs.
4. The method of claim 3, wherein the physical design includes input or output restrictions on the SMEs in the hardware elements comprising a group of SMEs.
5. The method of claim 4, wherein the physical design includes a restriction that the SMEs in the hardware elements comprising a group of SMEs share an output.
6. The method of claim 2, wherein the SME group instances include group of two (GOT) instances each containing two SME instances, and wherein the physical design includes the SMEs in each GOT being coupled to a common output.
7. The method of claim 6, wherein converting the automaton into the netlist comprises: determining which of the states can be grouped together in a GOT instance; and pairing the states based on the determination.
8. The method of claim 7, wherein a first state and a second state can be paired together in a GOT instance when neither the first state nor the second state is a final state of the automaton, and one of the first state and the second state does not drive any state other than the first state or the second state.
9. The method of claim 7, wherein a first state and a second state can be paired together in a GOT instance when neither the first state nor the second state is a final state of the automaton, and the first state and the second state both drive the same external states.
10. The method of claim 7, wherein a first state and a second state can be paired together in a GOT instance when one of the first state and the second state is a final state of the automaton, and the other of the first state and the second state does not drive any external state.
11. The method of claim 7, wherein a first state and a second state can be paired together in a GOT instance when both the first state and the second state are final states of the automaton, and the first state and the second state both drive the same external states.
12. The method of claim 7, wherein determining which of the states can be grouped together in a GOT instance includes using graph theory to determine which of the states can be grouped together in a GOT instance.
13. The method of claim 12, wherein using graph theory to determine which of the states can be grouped together in a GOT instance comprises using graph theory to identify a maximum matching to determine which of the states can be grouped together in a GOT instance.
14. The method of claim 1, further comprising: publishing the image.
15. The method of
claim 1, wherein the instances include general purpose instances and dedicated instances, wherein the general purpose instances correspond to general purpose states of the automaton and the dedicated instances correspond to dedicated states of the automaton.
16. The method of claim 15, wherein the hardware elements corresponding to the general purpose instances include state machine elements (SMEs) and groups of two (GOTs), and wherein the hardware elements corresponding to the dedicated instances include counters and logic elements.
17. A computer-readable medium comprising instructions that, when executed by a computer, cause the computer to perform operations comprising: converting source code into an automaton comprising a plurality of interconnected states; converting the automaton into a netlist comprising instances corresponding to the states of the automaton, wherein the instances correspond to hardware elements of a parallel machine, and wherein converting the automaton into the netlist includes grouping states together based on a physical design of the parallel machine; and converting the netlist into an image.
18. The computer-readable medium of claim 17, wherein the automaton is a homogeneous automaton.
19. The computer-readable medium of claim 17, wherein converting the automaton into the netlist includes mapping each of the states of the automaton to an instance corresponding to one of the hardware elements and determining the connectivity between the instances.
20. The computer-readable medium of claim 17, wherein the netlist further includes a plurality of connections between the instances, the connections representing conductors between the hardware elements.
21. The computer-readable medium of claim 17, wherein converting the automaton into the netlist comprises converting the automaton into a netlist containing instances corresponding to the states of the automaton other than a start state of the automaton.
22. The computer-readable medium of claim 17, wherein the instructions cause the computer to perform operations comprising: determining positions, in the parallel machine, of the hardware elements corresponding to the instances of the netlist.
23. The computer-readable medium of claim 22, wherein grouping states together includes grouping states together based on a physical design of a hardware element that includes a group of elements.
24. The computer-readable medium of claim 22, wherein the instructions cause the computer to perform operations comprising: determining which conductors of the parallel machine will be used to connect the hardware elements; and determining settings of programmable switches of the parallel machine, wherein the programmable switches are configured to selectively couple the hardware elements together.
25. A computer comprising: a memory having software stored thereon; and a processor communicatively coupled to the memory, wherein the software, when executed by the processor, causes the processor to: convert source code into an automaton comprising a plurality of interconnected states; convert the automaton into a netlist comprising instances corresponding to the states of the automaton, wherein the instances correspond to hardware elements of a parallel machine, wherein the instances include a plurality of first instances and group instances containing two or more first instances, and wherein converting the automaton into the netlist includes grouping states together in group instances based on a number of unused first instances; and convert the netlist into an image.
26. The computer of claim 25, wherein the
group instances include group of two (GOT) instances, and wherein grouping states includes pairing states based on which states are driven by the paired states.
27. The computer of claim 26, wherein grouping states together in group instances based on a number of unused first instances includes determining whether a first state and a second state can be paired based on the following conditions: neither the first state nor the second state is a final state in the automaton, and one of the first state and the second state does not drive any state other than the first state or the second state; neither the first state nor the second state is a final state in the automaton, and the first state and the second state both drive the same external states; one of the first state and the second state is a final state, and the one of the first state and the second state that is not a final state does not drive any state other than the first state or the second state; and both the first state and the second state are final states, and the first state and the second state both drive the same external states.
28. The computer of claim 25, wherein converting the automaton into the netlist includes: modeling the states as a graph, wherein vertices of the graph correspond to the states and edges of the graph correspond to the possible pairings of the states; determining matched vertices of the graph; and pairing the states corresponding to the matched vertices.
29. The computer of claim 28, wherein converting the automaton into the netlist includes: determining a maximum matching of the graph.
30. The computer of claim 29, wherein converting the automaton into the netlist includes: pairing each set of states corresponding to matched vertices; and mapping each state corresponding to an unmatched vertex to a GOT instance, wherein one SME instance in the GOT instance will be unused.
31. A system comprising: a computer configured to: convert source code into an automaton comprising a plurality of interconnected states; convert the automaton into a netlist comprising instances corresponding to the states of the automaton, wherein the instances correspond to hardware elements of a parallel machine, wherein the instances include a plurality of first instances and group instances containing two or more first instances, and wherein converting the automaton into the netlist includes grouping states together in group instances based on a number of unused first instances; and convert the netlist into an image; and a device configured to load the image onto a parallel machine.
32. The system of claim 31, wherein grouping states together includes: pairing states based on which states are driven by the paired states.
33. The system of claim 31, wherein grouping states together in group instances based on a number of unused first instances includes determining whether a first state and a second state can be paired based on the following conditions: neither the first state nor the second state is a final state in the automaton, and one of the first state and the second state does not drive any state other than the first state or the second state; neither the first state nor the second state is a final state in the automaton, and the first state and the second state both drive the same external states; one of the first state and the second state is a final state, and the one of the first state and the second state that is not a final state does not drive any state other than the first state or the second state; and both the
first state and the second state are final states, and the first state and the second state both drive the same external states.
34. The system of claim 31, wherein grouping states together in group instances based on a number of unused first instances includes: modeling the states as a graph, wherein vertices of the graph correspond to the states and edges of the graph correspond to the possible pairings of the states; determining matched vertices of the graph; and pairing the states corresponding to the matched vertices.
35. The system of claim 34, wherein grouping states together in group instances based on a number of unused first instances includes: determining a maximum matching of the graph.
36. The system of claim 35, wherein grouping states together in group instances based on a number of unused first instances includes: pairing each set of states corresponding to matched vertices; and mapping each state corresponding to an unmatched vertex to a GOT instance, wherein one SME instance in the GOT instance will be unused.
37. The system of claim 31, wherein the device is configured to implement each pair of states as a group of two hardware elements in the parallel machine.
38. A parallel machine programmed with an image generated by the method of claim 1. |
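As a rough illustration of the flow recited in claim 1 (source code to automaton, automaton to netlist with grouped states, netlist to image), the following toy sketch compiles a literal string. All types and helper names are hypothetical, and the grouping step here pairs states naively rather than applying the pairing conditions of claims 8 through 11.

```python
# Deliberately tiny, hypothetical sketch of the claimed flow: convert
# source into an automaton, convert the automaton into a netlist whose
# instances are grouped per a two-SMEs-per-GOT physical design, and
# convert the netlist into an image. Names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class State:
    sid: int
    symbol: str                            # symbol that transitions into this state
    out: set = field(default_factory=set)  # ids of states this state drives

def to_automaton(pattern: str) -> list[State]:
    # Toy "compiler": a literal string becomes a linear chain of states.
    states = [State(i, ch) for i, ch in enumerate(pattern)]
    for i in range(len(states) - 1):
        states[i].out.add(i + 1)
    return states

@dataclass
class GotInstance:
    smes: tuple   # one or two SME state ids sharing the common output

def to_netlist(states: list[State]) -> list[GotInstance]:
    # Group states pairwise into GOT instances; a real compiler would
    # pair only states that satisfy the shared-output conditions.
    ids = [s.sid for s in states]
    return [GotInstance(tuple(ids[i:i + 2])) for i in range(0, len(ids), 2)]

def to_image(netlist: list[GotInstance]) -> bytes:
    # Toy "image": one byte per GOT, encoding how many SME slots are used.
    return bytes(len(g.smes) for g in netlist)

image = to_image(to_netlist(to_automaton("abc")))
print(list(image))   # -> [2, 1]: the last GOT has one unused SME slot
```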
State grouping for element utilization

Priority claim

This patent application claims the benefit of priority of U.S. Provisional Patent Application No. 61/436,075, entitled "STATE GROUPING FOR ELEMENT UTILIZATION," filed on January 25, 2011, which is hereby incorporated by reference herein in its entirety.

Technical field

Background

A compiler for a parallel machine converts source code into machine code (e.g., an image) for configuring (e.g., programming) the parallel machine. The machine code may implement a finite state machine on the parallel machine. One stage of the process of converting source code into machine code involves forming a netlist. The netlist describes the connectivity between instances of the hardware elements of the parallel machine. The netlist may describe the connections between the hardware elements so that the hardware elements implement the functionality of the source code.

Summary of the invention

Brief description of the drawings

FIG. 1 illustrates an example of a parallel machine, according to various embodiments of the invention.
FIG. 2 illustrates an example of the parallel machine of FIG. 1 implemented as a finite state machine engine, according to various embodiments of the invention.
FIG. 3 illustrates an example of a block of the finite state machine engine of FIG. 2, according to various embodiments of the invention.
FIG. 4 illustrates an example of a row of the block of FIG. 3, according to various embodiments of the invention.
FIG. 5 illustrates an example of a group of two of the row of FIG. 4, according to various embodiments of the invention.
FIG. 6 illustrates an example of a method for a compiler to convert source code into an image configured to program the parallel machine of FIG. 1, according to various embodiments of the invention.
FIGS. 7A and 7B illustrate example automata, according to various embodiments of the invention.
FIGS. 8A and 8B illustrate example netlists, according to various embodiments of the invention.
FIG. 9 illustrates an example computer for executing the compiler of FIG. 6, according to various embodiments of the invention.

Detailed description

The following description and the drawings sufficiently illustrate specific embodiments to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process, and other changes. Portions and features of some embodiments may be included in, or substituted for, those of other embodiments. The embodiments set forth in the claims cover all available equivalents of those claims.

In particular, this document describes a compiler that generates a netlist based on the physical design of the parallel machine. In an example, the physical design of the parallel machine may include connectivity limitations between state machine elements (SMEs) of the parallel machine. For example, the SMEs in the parallel machine can be grouped into pairs that share a common output. Accordingly, the compiler can generate a netlist based on a physical design in which SME pairs share a common output.

FIG. 1 illustrates an example parallel machine 100. The parallel machine 100 may receive input data and provide an output based on the input data. The parallel machine 100 may include a data input port 110 for receiving input data and an output port 114 for providing an output to another device.
The data input port 110 provides an interface for inputting data to the parallel machine 100.

The parallel machine 100 includes a plurality of programmable elements, including general purpose elements 102 and dedicated elements 112. A general purpose element 102 may include one or more inputs 104 and one or more outputs 106. A general purpose element 102 can be programmed into one of multiple states. The state of the general purpose element 102 determines which output(s) the general purpose element 102 will provide based on a given input. That is, the state of the general purpose element 102 determines how the programmable element will react to a given input. Data input to the data input port 110 may be provided to the plurality of general purpose elements 102 to cause the general purpose elements 102 to take action thereon. Examples of general purpose elements 102 include a state machine element (SME), discussed in detail below, and a configurable logic block. In an example, an SME can be set to a given state to provide a certain output (e.g., a high or "1" signal) when a given input is received at the data input port 110. When an input different from the given input is received at the data input port 110, the SME may provide a different output (e.g., a low or "0" signal). In an example, a configurable logic block can be set to perform a Boolean logic function (e.g., AND, OR, NOR, etc.) based on input received at the data input port 110.

The parallel machine 100 may also include a programming interface 111 for loading a program (e.g., an image) onto the parallel machine 100. The image can program (e.g., set) the states of the general purpose elements 102. That is, the image can configure the general purpose elements 102 to react in a certain way to a given input. For example, a general purpose element 102 can be set to output a high signal when the character "a" is received at the data input port 110. In some examples, the parallel machine 100 can use a clock signal to control the timing of operation of the general purpose elements 102. In some examples, the parallel machine 100 can include dedicated elements 112 (e.g., RAM, logic gates, counters, look-up tables, etc.) for interacting with the general purpose elements 102 and for performing dedicated functions. In some embodiments, the data received at the data input port 110 can include a fixed set of data received over time or all at once, or a stream of data received over time. The data may be received from, or generated by, any source coupled to the parallel machine 100, such as a database, a sensor, a network, or the like.

The parallel machine 100 also includes a plurality of programmable switches 108 for selectively coupling together different elements of the parallel machine 100 (e.g., the general purpose elements 102, the data input port 110, the output port 114, the programming interface 111, and the dedicated elements 112). Accordingly, the parallel machine 100 comprises a programmable matrix formed among the elements. In an example, a programmable switch 108 can selectively couple two or more elements to one another, such that an input 104 of a general purpose element 102, the data input port 110, the programming interface 111, or a dedicated element 112 can be coupled through one or more programmable switches 108 to an output 106 of a general purpose element 102, the output port 114, the programming interface 111, or a dedicated element 112. Thus, the routing of signals between the elements can be controlled by setting the programmable switches 108.
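A small behavioral model may help fix ideas. The hypothetical Python sketch below treats general purpose elements as symbol matchers whose programmed state and switch routing are set by a loaded image, in the spirit of the paragraph above; the class and field names are invented for illustration and do not describe the actual device.

```python
# Hypothetical model of the programmable matrix: general purpose elements
# whose reaction to an input symbol is set by the image, and programmable
# switches that route one element's output to another element's input.
class GeneralPurposeElement:
    def __init__(self) -> None:
        self.match_symbols: set[str] = set()   # programmed state
        self.active = False

    def react(self, symbol: str) -> bool:
        # Output high (True) only when active and the symbol matches.
        return self.active and symbol in self.match_symbols

class ParallelMachine:
    def __init__(self, n: int) -> None:
        self.elements = [GeneralPurposeElement() for _ in range(n)]
        self.switches: set[tuple[int, int]] = set()   # (driver, driven)

    def load_image(self, image: dict) -> None:
        # The "image" sets element states and closes selected switches.
        for idx, symbols in image["elements"].items():
            self.elements[idx].match_symbols = set(symbols)
        self.switches = set(image["switches"])
        self.elements[0].active = True          # entry element

    def step(self, symbol: str) -> list[int]:
        fired = [i for i, e in enumerate(self.elements) if e.react(symbol)]
        # Routing: a high output activates the elements it is switched to.
        for i, e in enumerate(self.elements):
            e.active = any(src in fired
                           for src, dst in self.switches if dst == i)
        return fired

pm = ParallelMachine(3)
pm.load_image({"elements": {0: "a", 1: "b", 2: "c"},
               "switches": [(0, 1), (1, 2)]})
for ch in "abc":
    print(ch, pm.step(ch))   # each element fires in turn as "abc" streams in
```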
Although FIG. 1 illustrates a certain number of conductors (e.g., wires) between a given element and a programmable switch 108, it should be understood that in other examples a different number of conductors can be used. Moreover, although FIG. 1 illustrates each general purpose element 102 individually coupled to a programmable switch 108, in other examples multiple general purpose elements 102 can be coupled to a programmable switch 108 as a group (e.g., a block 802, as illustrated in FIG. 8). In an example, the data input port 110, the data output port 114, and/or the programming interface 111 can be implemented as registers, such that writing to the registers provides data to or from the corresponding elements.

In one example, a single parallel machine 100 is implemented on a physical device; however, in other examples, two or more parallel machines 100 can be implemented on a single physical device (e.g., a physical chip). In an example, each of the multiple parallel machines 100 can include a distinct data input port 110, a distinct output port 114, a distinct programming interface 111, and a distinct set of general purpose elements 102. Moreover, each set of general purpose elements 102 can react (e.g., output a high or low signal) to the data at its corresponding data input port 110. For example, a first set of general purpose elements 102 corresponding to a first parallel machine 100 can react to the data at a first data input port 110 corresponding to the first parallel machine 100. A second set of general purpose elements 102 corresponding to a second parallel machine 100 can react to the data at a second data input port 110 corresponding to the second parallel machine 100. Accordingly, each parallel machine 100 includes a set of general purpose elements 102, wherein different sets of general purpose elements 102 can react to different input data. Similarly, each parallel machine 100, and each corresponding set of general purpose elements 102, can provide a distinct output. In some examples, the output port 114 of a first parallel machine 100 can be coupled to the input port 110 of a second parallel machine 100, such that the input data for the second parallel machine 100 can include the output data from the first parallel machine 100.

In an example, an image for loading onto the parallel machine 100 comprises a plurality of bits of information for setting the states of the general purpose elements 102, programming the programmable switches 108, and configuring the dedicated elements 112 within the parallel machine 100. In an example, the image can be loaded onto the parallel machine 100 to program the parallel machine 100 to provide a desired output based on certain inputs. The output port 114 can provide output from the parallel machine 100 based on the reaction of the general purpose elements 102 to the data at the data input port 110. An output from the output port 114 can include a single bit indicating a match of a given pattern, a word comprising a plurality of bits indicating matches and non-matches to a plurality of patterns, and a state vector corresponding to the states of all or some of the general purpose elements 102 at a given moment.

Example uses of the parallel machine 100 include pattern recognition (e.g., speech recognition, image recognition, etc.), signal processing, imaging, computer vision, cryptography, and others. In some examples, the parallel machine 100 can comprise a finite state machine (FSM) engine, a field programmable gate array (FPGA), and variants thereof.
In addition, the parallel machine 100 may be a component in a larger device such as a computer, a pager, a cellular phone, a personal organizer, a portable audio player, a network device (e.g., a router, a firewall, a switch, or any combination thereof), a control circuit, a camera, etc.

FIGS. 2 through 5 illustrate another parallel machine implemented as a finite state machine (FSM) engine 200. In an example, the FSM engine 200 comprises a hardware implementation of a finite state machine. Accordingly, the FSM engine 200 implements a plurality of selectively coupleable hardware elements (e.g., programmable elements) corresponding to a plurality of states in an FSM. Similar to a state in an FSM, a hardware element can analyze an input stream and activate downstream hardware elements based on the input stream.

The FSM engine 200 includes a plurality of programmable elements, including general purpose elements and dedicated elements. The general purpose elements can be programmed to implement many different functions. These general purpose elements include SMEs 204, 205 (shown in FIG. 5) that are hierarchically organized into rows 206 (shown in FIGS. 3 and 4) and blocks 202 (shown in FIGS. 2 and 3). To route signals between the hierarchically organized SMEs 204, 205, a hierarchy of programmable switches is used, including inter-block switches 203 (shown in FIGS. 2 and 3), intra-block switches 208 (shown in FIGS. 3 and 4), and intra-row switches 212 (shown in FIG. 4). The SMEs 204, 205 can correspond to the states of the FSM implemented by the FSM engine 200. As described below, the SMEs 204, 205 can be coupled together by using the programmable switches. Accordingly, an FSM can be implemented on the FSM engine 200 by programming the SMEs 204, 205 to correspond to the functions of states and by selectively coupling the SMEs 204, 205 together to correspond to the transitions between states in the FSM.

FIG. 2 illustrates an overall view of the example FSM engine 200. The FSM engine 200 includes a plurality of blocks 202 that can be selectively coupled together with programmable inter-block switches 203. Additionally, the blocks 202 can be selectively coupled to an input block 209 (e.g., a data input port) for receiving signals (e.g., data) and providing the data to the blocks 202. The blocks 202 can also be selectively coupled to an output block 213 (e.g., an output port) for providing signals from the blocks 202 to an external device (e.g., another FSM engine 200). The FSM engine 200 can also include a programming interface 211 for loading a program (e.g., an image) onto the FSM engine 200. The image can program (e.g., set) the states of the SMEs 204, 205. That is, the image can configure the SMEs 204, 205 to react in a certain way to a given input at the input block 209. For example, an SME 204 can be set to output a high signal when the character "a" is received at the input block 209.

In an example, the input block 209, the output block 213, and/or the programming interface 211 can be implemented as registers, such that writing to the registers provides data to or from the corresponding elements. Accordingly, bits from an image stored in the registers corresponding to the programming interface 211 can be loaded onto the SMEs 204, 205. Although FIG. 2 illustrates a certain number of conductors (e.g., wires, traces) between a block 202, the input block 209, the output block 213, and an inter-block switch 203, it should be understood that in other examples fewer or more conductors can be used.

FIG. 3 illustrates an example of a block 202. A block 202 can include a plurality of rows 206 that can be selectively coupled together with programmable intra-block switches 208.
Additionally, a row 206 can be selectively coupled to another row 206 within another block 202 through the inter-block switches 203. In an example, a buffer 201 is included to control the timing of signals to/from the inter-block switches 203. A row 206 includes a plurality of SMEs 204, 205 organized into pairs of elements that are referred to herein as groups of two (GOTs) 210. In an example, a block 202 comprises sixteen (16) rows 206.

FIG. 4 illustrates an example of a row 206. A GOT 210 can be selectively coupled to other GOTs 210 and to any other elements 224 within the row 206 through programmable intra-row switches 212. A GOT 210 can also be coupled to other GOTs 210 in other rows 206 through the intra-block switches 208, or to other GOTs 210 in other blocks 202 through the inter-block switches 203. In an example, a GOT 210 has a first input 214, a second input 216, and an output 218. The first input 214 is coupled to a first SME 204 of the GOT 210, and the second input 216 is coupled to a second SME 205 of the GOT 210.

In an example, the row 206 includes a first plurality of row interconnect conductors 220 and a second plurality of row interconnect conductors 222. In an example, the inputs 214, 216 of a GOT 210 can be coupled to one or more of the row interconnect conductors 220, 222, and the output 218 can be coupled to one row interconnect conductor 220, 222. In an example, the first plurality of row interconnect conductors 220 can be coupled to each SME 204, 205 of each GOT 210 within the row 206. The second plurality of row interconnect conductors 222 can be coupled to one SME 204, 205 of each GOT 210 within the row 206, but not to the other SME 204, 205 of the GOT 210. In an example, a first half of the second plurality of row interconnect conductors 222 can be coupled to a first half of the SMEs 204, 205 within the row 206 (one SME 204, 205 from each GOT 210), and a second half of the second plurality of row interconnect conductors 222 can be coupled to a second half of the SMEs 204, 205 within the row 206 (the other SME 204, 205 from each GOT 210). The limited connectivity between the second plurality of row interconnect conductors 222 and the SMEs 204, 205 is referred to herein as "equivalence."

In an example, the row 206 can also include dedicated elements 224 such as counters, programmable Boolean logic elements, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), programmable processors (e.g., microprocessors), and other elements. Additionally, in an example, the dedicated elements 224 differ among different rows 206. For example, four of the rows 206 in a block 202 can include Boolean logic as the dedicated element 224, and the other eight rows 206 in the block 202 can include a counter as the dedicated element 224.

In an example, the dedicated element 224 includes a counter (also referred to herein as counter 224). In an example, the counter 224 comprises a 12-bit programmable down counter. The 12-bit programmable counter 224 has a counting input, a reset input, and a zero-count output. The counting input, when asserted, decrements the value of the counter 224 by one. The reset input, when asserted, causes the counter 224 to load an initial value from an associated register. For the 12-bit counter 224, a number of up to 12 bits can be loaded as the initial value. When the value of the counter 224 is decremented to zero (0), the zero-count output is asserted. The counter 224 also has at least two modes: pulse and hold.
When the counter 224 is set to the pulse mode, the zero-count output is asserted during the first clock cycle in which the counter 224 decrements to zero, and on subsequent clock cycles the zero-count output is no longer asserted even if the counting input is asserted. This state continues until the counter 224 is reset by the reset input being asserted. When the counter 224 is set to the hold mode, the zero-count output is asserted during the first clock cycle in which the counter 224 decrements to zero, and it remains asserted while the counting input is asserted until the counter 224 is reset by the reset input being asserted.

FIG. 5 illustrates an example of a GOT 210. The GOT 210 includes a first SME 204 and a second SME 205 having inputs 214, 216 and having their outputs 226, 228 coupled to an OR gate 230 and a 3-to-1 multiplexer 242. The 3-to-1 multiplexer 242 can be set to couple the output 218 of the GOT 210 to either the first SME 204, the second SME 205, or the OR gate 230. The OR gate 230 can be used to couple the outputs 226, 228 together to form the common output 218 of the GOT 210. In an example, as discussed above, the first SME 204 and the second SME 205 exhibit equivalence, where the input 214 of the first SME 204 can be coupled to some of the row interconnect conductors 222 and the input 216 of the second SME 205 can be coupled to other row interconnect conductors 222. In an example, the two SMEs 204, 205 within a GOT 210 can be cascaded and/or looped back to themselves by setting either or both of the switches 240. The SMEs 204, 205 can be cascaded by coupling the output 226, 228 of one SME 204, 205 to the input 214, 216 of the other SME 204, 205. The SMEs 204, 205 can be looped back to themselves by coupling their outputs 226, 228 to their own inputs 214, 216. Accordingly, the output 226 of the first SME 204 can be coupled to neither, one, or both of the input 214 of the first SME 204 and the input 216 of the second SME 205.

In an example, a state machine element 204, 205 comprises a plurality of memory cells 232, such as those commonly used in dynamic random access memory (DRAM), coupled in parallel to a detection line 234. Each memory cell 232 can be set to a data state, such as one that corresponds to either a high or a low value (e.g., a 1 or a 0). The output of a memory cell 232 is coupled to the detection line 234, and the input to the memory cell 232 receives signals based on data on the data stream line 236. In an example, an input on the data stream line 236 is decoded to select one of the memory cells 232. The selected memory cell 232 provides its stored data state as an output onto the detection line 234. For example, the data received at the data input port 209 can be provided to a decoder (not shown), and the decoder can select one of the data stream lines 236. In an example, the decoder can convert an ASCII character into one of 256 bits, each corresponding to one of the data stream lines 236.

Accordingly, when the memory cell 232 is set to a high value and the data on its data stream line 236 corresponds to the memory cell 232, the memory cell 232 outputs a high signal to the detection line 234. When the data on the data stream line 236 corresponds to the memory cell 232 and the memory cell 232 is set to a low value, the memory cell 232 outputs a low signal to the detection line 234. The outputs from the memory cells 232 on the detection line 234 are sensed by a detection circuit 238.
In an example, the signals on the input lines 214, 216 set the corresponding detection circuit 238 to either an active or an inactive state. When set to the inactive state, the detection circuit 238 outputs a low signal on the corresponding output 226, 228 regardless of the signal on the corresponding detection line 234. When set to the active state, the detection circuit 238 outputs a high signal on the corresponding output line 226, 228 when a high signal is detected from one of the memory cells 232 of the corresponding SME 204, 205. When in the active state, the detection circuit 238 outputs a low signal on the corresponding output line 226, 228 when the signals from all of the memory cells 232 of the corresponding SME 204, 205 are low.

In an example, an SME 204, 205 includes 256 memory cells 232, and each memory cell 232 is coupled to a different data stream line 236. Accordingly, an SME 204, 205 can be programmed to output a high signal when a selected one or more of the data stream lines 236 has a high signal thereon. For example, the SME 204 can have a first memory cell 232 (e.g., bit 0) set high and all other memory cells 232 (e.g., bits 1-255) set low. When the corresponding detection circuit 238 is in the active state, the SME 204 outputs a high signal on the output 226 when the data stream line 236 corresponding to bit 0 has a high signal thereon. In other examples, the SME 204 can be set to output a high signal when any one of multiple data stream lines 236 has a high signal thereon, by setting the appropriate memory cells 232 to high values.

In an example, a memory cell 232 can be set to a high or low value by reading bits from an associated register. Accordingly, the SMEs 204 can be programmed by storing an image created by the compiler into a register and loading the bits in the register into the associated memory cells 232. In an example, the image created by the compiler includes a binary image of high and low (e.g., 1 and 0) bits. The image can program the FSM engine 200 to operate as an FSM by cascading the SMEs 204, 205. For example, a first SME 204 can be set to an active state by setting its detection circuit 238 to the active state. The first SME 204 can be set to output a high signal when the data stream line 236 corresponding to bit 0 has a high signal thereon. A second SME 205 can be initially set to an inactive state, but can be set, when active, to output a high signal when the data stream line 236 corresponding to bit 1 has a high signal thereon. The first SME 204 and the second SME 205 can be cascaded by setting the output 226 of the first SME 204 to couple to the input 216 of the second SME 205. Accordingly, when a high signal is sensed on the data stream line 236 corresponding to bit 0, the first SME 204 outputs a high signal on the output 226 and sets the detection circuit 238 of the second SME 205 to the active state. When a high signal is then sensed on the data stream line 236 corresponding to bit 1, the second SME 205 outputs a high signal on the output 228 to activate another SME 205 or for output from the FSM engine 200.

FIG. 6 illustrates an example of a method 600 for a compiler to convert source code into an image configured to program a parallel machine.
Method 600 includes: parsing the source code into a syntax tree (block 602); converting the syntax tree into an automaton (block 604); optimizing the automaton (block 606); converting the automaton into a netlist (block 608); placing the netlist on hardware (block 610); routing the netlist (block 612); and publishing the resulting image (block 614).

In an example, the compiler includes an application programming interface (API) that allows software developers to create images for implementing FSMs on the FSM engine 200. The compiler provides methods to convert an input set of regular expressions in the source code into an image that is configured to program the FSM engine 200. The compiler can be implemented by instructions for a computer having a von Neumann architecture. These instructions can cause a processor on the computer to implement the functions of the compiler. For example, the instructions, when executed by the processor, can cause the processor to perform the actions described in blocks 602, 604, 606, 608, 610, 612, and 614 on source code that is accessible to the processor. An example computer having a von Neumann architecture is shown in FIG. 9 and described below.

In an example, the source code describes search strings for identifying patterns of symbols within a group of symbols. To describe the search strings, the source code can include a plurality of regular expressions (regexes). A regex can be a string for describing a symbol search pattern. Regexes are widely used in various computer domains, such as programming languages, text editors, network security, and others. In an example, the regular expressions supported by the compiler include search criteria for the search of unstructured data. Unstructured data can include data that is free form and has no indexing applied to words within the data. Words can include any combination of bytes, printable and non-printable, within the data. In an example, the compiler can support multiple different source code languages for implementing regexes, including Perl (e.g., Perl compatible regular expressions (PCRE)), PHP, Java, and .NET languages.

Referring back to FIG. 6, at block 602 the compiler can parse the source code to form an arrangement of relationally connected operators, where different types of operators correspond to different functions implemented by the source code (e.g., different functions implemented by regexes in the source code). Parsing the source code can create a generic representation of the source code. In an example, the generic representation comprises an encoded representation of the regexes in the source code in the form of a tree graph known as a syntax tree. The examples described herein refer to the arrangement as a syntax tree (also known as an "abstract syntax tree"); in other examples, however, a concrete syntax tree or other arrangement can be used.

Since, as mentioned above, the compiler can support multiple languages of source code, parsing converts the source code, regardless of the language, into a non-language-specific representation, e.g., a syntax tree. Thus, further processing (blocks 604, 606, 608, 610) by the compiler can work from a common input structure regardless of the language of the source code.

As noted above, the syntax tree includes a plurality of operators that are relationally connected. A syntax tree can include multiple different types of operators.
That is, different operators can correspond to different functions implemented by the regexes in the source code.

At block 604, the syntax tree is converted into an automaton. An automaton (also referred to as a finite state automaton, a finite state machine (FSM), or simply a state machine) is a representation of states, transitions between states, and actions, and can be classified as deterministic or non-deterministic. A deterministic automaton has a single path of execution at a given time, while a non-deterministic automaton has multiple concurrent paths of execution. The automaton comprises a plurality of states. In order to convert the syntax tree into an automaton, the operators and the relationships between the operators in the syntax tree are converted into states, with transitions between the states. In an example, the automaton can be converted based partly on the hardware of the FSM engine 200.

In an example, input symbols for the automaton include the symbols of the alphabet, the numerals 0-9, and other printable characters. In an example, the input symbols are represented by the byte values 0 through 255, inclusive. In an example, an automaton can be represented as a directed graph, where the nodes of the graph correspond to the set of states. In an example, a transition from state p to state q on an input symbol α, i.e., δ(p, α), is shown by a directed connection from node p to node q. In an example, the language accepted (e.g., matched) by an automaton is the set of all possible character strings which, when input sequentially into the automaton, will reach a final state. Each string in the language accepted by the automaton traces a path from the start state to one or more final states.

In an example, special transition symbols outside the range of the input symbols can be used in the automaton. These special transition symbols can be used, for example, to enable use of the dedicated elements 224. Moreover, special transition symbols can be used to provide transitions that occur on something other than an input symbol. For example, a special transition symbol can indicate that a first state is to be enabled (e.g., transitioned to) when both a second state and a third state are enabled. Accordingly, the first state is activated when both the second state and the third state are activated, and the transition to the first state is not directly dependent on an input symbol. Notably, a special transition symbol that indicates the first state is to be enabled when both the second state and the third state are enabled can be used to represent, for example, a Boolean AND function performed by Boolean logic as the dedicated element 224. In an example, a special transition symbol can be used to indicate that a counter state has reached zero, and thus a transition to a downstream state can occur.

In an example, the automaton comprises general purpose states and dedicated states. The general purpose states and dedicated states correspond to general purpose elements and dedicated elements supported by a target device for which the compiler generates machine code. Different types of target devices can support different types of general purpose elements as well as one or more different types of dedicated elements. A general purpose element can typically be used to implement a broad range of functions, while a dedicated element can typically be used to implement a narrower range of functions. In an example, however, a dedicated element can achieve, for example, greater efficiency within its narrower range of functions.
Accordingly, dedicated elements can be used, for example, to reduce the machine cycles or the machine resources required to implement certain functions in the target device. In some examples, the target device supports solely dedicated elements, where multiple different types of dedicated elements are supported.

In an example where the compiler generates machine code for the FSM engine 200, a general purpose state can correspond to an SME 204, 205, and a general purpose state is accordingly referred to herein as an "SME state." Moreover, when the compiler generates machine code for the FSM engine 200, one instance of a dedicated state can correspond to the counter 224 and is accordingly referred to herein as a "counter state." Another instance of a dedicated state can correspond to a logic element (e.g., programmable logic, Boolean logic) and is accordingly referred to herein as a "logic state." In an example, the SME states in the automaton map 1:1 to the SMEs (e.g., SMEs 204, 205) of the FSM engine 200, with the exception of the start state of the automaton, which does not map to an SME. The dedicated elements 224 may or may not map 1:1 to the dedicated states.

In an example, the automaton can be constructed using one of the standard techniques, such as Glushkov's method. In an example, the automaton can be an epsilon-free, homogeneous automaton. A homogeneous automaton is a restriction of a general automaton: the restriction requires that all transitions into a given state must occur on the same input symbol(s). A homogeneous automaton satisfies the following condition: for any two states q1 and q2, if r ∈ δ(q1) ∩ δ(q2), denote S1 = {a | a ∈ Σ, r ∈ δ(q1, a)} and S2 = {a | a ∈ Σ, r ∈ δ(q2, a)}, where S1 is the set of symbols on which q1 transitions to r and S2 is the set of symbols on which q2 transitions to r. Then S1 = S2; that is, if both state q1 and state q2 transition to state r, the homogeneity restriction requires that the transitions occur on the same symbol(s).

FIGS. 7A and 7B illustrate example automata created from a syntax tree. FIG. 7A illustrates a homogeneous automaton 700, and FIG. 7B illustrates a non-homogeneous automaton 702.

The homogeneous automaton 700 begins at a start state 704, which transitions to a state 706 on the input symbol "a". State 706 transitions to a state 708 on the symbol "b", and state 708 transitions to a state 710 on the symbol "b". State 710 transitions to a state 712 on the symbol "c". State 712 transitions to state 710 on the symbol "b" and transitions to a state 714 on the symbol "d". State 714 is a final state and is identified as a final state by a double circle. In an example, final states can be important in that an activation indication of a final state corresponds to a match of the regex of the automaton. The automaton 700 is homogeneous because all of the in-transitions (e.g., transitions into a state) of any given state occur on the same symbol(s). Notably, state 710 has two in-transitions (from state 708 and from state 712), and both of these in-transitions occur on the same symbol "b".

The non-homogeneous automaton 702 includes the same states 704, 706, 708, 710, 712, and 714 as the homogeneous automaton 700; however, in automaton 702, state 712 transitions to state 710 on the input symbol "e".
Accordingly, automaton 702 is non-homogeneous because state 710 has in-transitions on two different symbols: the symbol "b" from state 708 and the symbol "e" from state 712.

At block 606, after the automaton is constructed, the automaton is optimized to, among other things, reduce its complexity and size. The automaton can be optimized by combining redundant states.

At block 608, the automaton is converted into a netlist. Converting the automaton into a netlist maps the states of the automaton to instances of the hardware elements (e.g., SMEs 204, 205, GOTs 210, dedicated elements 224) of the FSM engine 200, and determines the connections between the instances. In an example, the netlist comprises a plurality of instances, each of which corresponds to (e.g., represents) a hardware element of the FSM engine 200. Each instance can have one or more connection points (also referred to herein as "ports") for connection to another instance. The netlist also comprises a plurality of connections between the ports of the instances, which correspond to (e.g., represent) the conductors used to couple the hardware elements corresponding to the instances. In an example, the netlist contains different types of instances corresponding to the different types of hardware elements. For example, the netlist can include general purpose instances corresponding to general purpose hardware elements and dedicated instances corresponding to dedicated hardware elements. As an example, general purpose states can be converted into general purpose instances and dedicated states can be converted into dedicated instances. In an example, the general purpose instances can include an SME instance for an SME 204, 205 and an SME group instance for a hardware element comprising a group of SMEs. In an example, the SME group instances include GOT instances corresponding to the GOTs 210; in other examples, however, an SME group instance can correspond to a hardware element comprising a group of three or more SMEs. Dedicated instances can include a counter instance for the counter 224 and a logic instance for a logic element 224. Since a GOT 210 includes two SMEs 204, 205, a GOT instance contains two SME instances.

To create the netlist, the states in the automaton are converted into instances in the netlist, except that the start state does not have a corresponding instance. SME states are converted into GOT instances, and counter states are converted into counter instances. Additionally, for each transition from a state corresponding to a first instance to a state corresponding to a second instance, a corresponding connection from the first instance to the second instance is created. Since the SMEs 204, 205 in the FSM engine 200 are grouped into pairs called GOTs 210, the compiler can group the SME states into pairs in the GOT instances. Due to the physical design of the GOT 210, not all SME instances can be paired together to form a GOT 210. Accordingly, the compiler determines which SME states can be mapped together in a GOT 210 and then pairs the SME states into GOT instances based on the determination.

As shown in FIG. 5, the GOT 210 has an output restriction on the SMEs 204, 205. Specifically, the GOT 210 has a single output 218 shared by the two SMEs 204, 205. Accordingly, each SME 204, 205 in a GOT 210 cannot independently drive the output 218. This output restriction limits which SME states can be paired together in a GOT instance.
In particular, two SME states that drive (e.g., transition to, activate) different sets of external SME states (e.g., SME states corresponding to SMEs outside the GOT instance) cannot be paired together in a GOT instance. However, this limitation does not restrict whether the two SME states drive each other or self-loop, because the GOT 210 can provide this functionality internally through the switch 240. Although the FSM engine 200 is described as having a certain physical design corresponding to the SMEs 204, 205, in other examples the SMEs 204, 205 may have other physical designs. For example, the SMEs 204, 205 may be grouped together into sets of three or more SMEs 204, 205. Additionally, in some examples, there may be restrictions on the inputs 214, 216 to the SMEs 204, 205, and there may or may not be restrictions on the outputs 226, 228 from the SMEs 204, 205. In any case, the compiler determines which SME states can be grouped together based on the physical design of the FSM engine 200. Thus, for a GOT instance, the compiler determines which SME states can be paired together based on the output restriction on the SMEs 204, 205 in the GOT 210. In an example, there are five situations in which two SME states can be paired together to form a GOT 210 based on the physical design of the GOT 210. The first situation in which a first SME state and a second SME state can be paired together in a GOT 210 occurs when neither the first SME state nor the second SME state is a final state, and when one of the first SME state and the second SME state does not drive any state other than the first SME state or the second SME state. As an example, a first state is considered to drive a second state when the first state transitions to the second state. When this first situation occurs, at most one of the first SME state and the second SME state drives one or more external states. Therefore, the first SME state and the second SME state can be paired together without being affected by the output restriction of the GOT 210. Moreover, due to the ability of the GOT 210 to internally couple the SMEs 204, 205 to each other, the first and second SME states are still allowed to drive each other, and to self-loop to drive themselves. In terms of the automaton, when neither q1 nor q2 is a final state, and either δ(q1) − {q1, q2} is empty or δ(q2) − {q1, q2} is empty, the first SME state (corresponding to state q1) and the second SME state (corresponding to state q2) may be paired together. The second situation in which the first SME state and the second SME state can be paired together in a GOT 210 occurs when neither the first SME state nor the second SME state is a final state in the automaton, and when the first SME state and the second SME state drive the same external states. As used herein, an external state is a state external to the GOT instance; accordingly, the condition disregards whether the first SME state and the second SME state in the GOT instance drive each other or self-loop. Again, the output restriction of the GOT 210 does not affect the first SME state and the second SME state, because the first SME state and the second SME state drive the same external states. Moreover, due to the ability of the GOT 210 to internally couple the SMEs 204 and 205 to each other, the requirement of driving the same states does not take into account whether the first state and the second state drive each other or self-loop.
In terms of the automaton, when neither q1 nor q2 is a final state, and δ(q1) − {q1, q2} = δ(q2) − {q1, q2}, the first SME state (corresponding to state q1) and the second SME state (corresponding to state q2) can be paired together. The third and fourth situations in which the first SME state and the second SME state can be paired together in a GOT 210 occur when one of the first SME state and the second SME state is a final state and the other of the first SME state and the second SME state does not drive any external state. That is, when q1 is a final state and δ(q2) − {q1, q2} is empty, or when q2 is a final state and δ(q1) − {q1, q2} is empty, the first SME state (corresponding to state q1) and the second SME state (corresponding to state q2) may be paired together. Since a final state outputs an indication that the regular expression has matched, the SME state corresponding to the final state must be able to use the output 218 of the GOT 210 independently in order to indicate the match. Therefore, the other SME state in the GOT 210 is not allowed to use the output 218. The fifth situation in which the first SME state and the second SME state can be paired together in a GOT 210 occurs when the first SME state and the second SME state both correspond to final states in the automaton and both drive the same external states. In terms of the automaton, when both q1 and q2 are final states, and δ(q1) − {q1, q2} = δ(q2) − {q1, q2}, the first SME state (corresponding to state q1) and the second SME state (corresponding to state q2) may be paired together. Once the compiler determines whether two SME states can be paired together, the compiler pairs the SME states into a GOT instance. In one example, the compiler pairs SME states into GOT instances in the order in which the SME states are determined to be pairable. That is, once two particular SME states are determined to be pairable, those two SME states can be paired into a GOT instance. Once two SME states have been paired to form a GOT instance, the paired SME states cannot be paired with other SME states. This process can continue until no SME states remain to be paired. In one example, the compiler uses graph theory to determine which SME states are paired together in GOT instances. Since only certain SMEs can be paired together, some SME pairings can force other SME states to be implemented in their own GOT instances, so that the other SME position in each such GOT instance is unused and therefore wasted. Graph theory can be used to optimize SME utilization in the GOTs 210 by reducing the number of unused SME instances in the GOT instances of the netlist (e.g., reducing the number of unused SMEs). To use graph theory, the compiler first determines all possible pairings of SME states based on the physical design of the FSM engine 200 discussed above. The compiler then generates a graph in which the vertices of the graph correspond to the SME states and the edges of the graph correspond to the possible pairings of SME states. That is, if two SME states are determined to be pairable in a GOT instance, an edge connects the two corresponding vertices. Accordingly, the graph contains all possible pairings of the SME states. The compiler can then find matching vertices of the graph to identify which SME states are paired together in GOTs 210.
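Before turning to the matching step, note that the five situations reduce to a small predicate over the automaton's transition function. The following is a minimal sketch, assuming delta(q) returns the set of states driven by q and finals is the set of final states; the helper names are hypothetical, not taken from the source.

```python
# Decide whether two SME states q1 and q2 can share a GOT instance.
# Self-loops and mutual drive are excluded from the "external" sets
# because the GOT can provide those connections internally.

def can_pair(q1, q2, delta, finals):
    ext1 = delta(q1) - {q1, q2}     # external states driven by q1
    ext2 = delta(q2) - {q1, q2}     # external states driven by q2
    f1, f2 = q1 in finals, q2 in finals
    if not f1 and not f2:
        # Situations 1 and 2: at most one state drives external states,
        # or both drive exactly the same external states.
        return not ext1 or not ext2 or ext1 == ext2
    if f1 != f2:
        # Situations 3 and 4: the non-final state must not drive any
        # external state, reserving the shared output for the final state.
        return not (ext2 if f1 else ext1)
    # Situation 5: both final and driving the same external states.
    return ext1 == ext2
```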
In finding the match, the compiler identifies edges (and therefore vertex pairs) such that no two edges of the match share a common vertex. In one example, the compiler finds a match for the graph; in a further example, the compiler finds a maximum match for the graph. A maximum match is a match with the largest possible number of edges, and there can be more than one maximum match for a given graph. The problem of finding a maximum match of a general graph can be solved in polynomial time. Once the matching vertices have been identified (e.g., as a maximum match), each pair of SME states corresponding to matched vertices is mapped to a GOT instance. Each SME state corresponding to an unmatched vertex is mapped to its own GOT instance. That is, the SME state corresponding to an unmatched vertex is mapped to one of the SME positions in a GOT instance, and the other SME position in that GOT instance is unused. Therefore, given a netlist N and its corresponding set of matching vertices M, the number of GOT instances used by N equals |Q| − 1 − |M|, where Q is the state set of the automaton, and the "−1" accounts for the fact that the start state of the automaton does not correspond to an SME state in this example. In an example, a netlist N constructed from a maximum match M of the graph G uses the minimum number of GOT instances. This can be shown by contradiction: if there were another netlist N′ using a smaller number of GOT instances, with corresponding match M′, then since the number of GOT instances of N′ equals |Q| − 1 − |M′|, we would have |M| < |M′|, which contradicts the fact that M is a maximum match. Therefore, the netlist N uses the minimum number of GOT instances. Once the SME states are paired into GOT instances, the GOT instances, counter instances, and logic instances are connected according to the transitions between the states in the automaton. Since each GOT 210 has a single output, each GOT instance in the netlist has a single output port for connection to other instances. Accordingly, if any SME state in a first GOT instance drives an SME state in a second GOT instance, the output port of the first GOT instance is coupled to an input of the second GOT instance. FIGS. 8A and 8B illustrate example netlists 800, 802 generated from the homogeneous automaton 700 of FIG. 7A. SME instances 806, 808, 810, 812, and 814 correspond to the states 706, 708, 710, 712, and 714 in the automaton 700. As discussed above, the start state 704 of the automaton does not correspond to an instance. The netlist 800 is an example of a non-optimal netlist. Netlist 800 uses four GOT instances 816 while leaving three SME instances 818 unused. The netlist 802, in contrast, is an example of an optimal netlist generated using graph theory to identify a maximum match. Netlist 802 uses three GOT instances 816 and has a single unused SME instance 818. In the netlist 802, the instance 810 can be connected to the instance 812 through a connection within a GOT instance (e.g., via the switch 240). At block 610, once the netlist has been generated, the netlist is placed to select specific hardware elements of the target device (e.g., SMEs 204, 205, other elements 224) for each instance of the netlist.
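Before moving on to placement, the matching step just described can be sketched in a few lines. A maximum matching over the pairing graph can be computed with any blossom-algorithm implementation; the sketch below uses networkx, which is an assumed dependency not named in the source, and also reflects the |Q| − 1 − |M| GOT-instance count.

```python
# Pair SME states into GOT instances by maximum matching. Vertices are
# SME states; edges join states that may share a GOT (per can_pair).
import networkx as nx

def pair_into_gots(sme_states, pairable):
    """sme_states: list of states; pairable(a, b) -> bool."""
    g = nx.Graph()
    g.add_nodes_from(sme_states)
    g.add_edges_from((a, b)
                     for i, a in enumerate(sme_states)
                     for b in sme_states[i + 1:] if pairable(a, b))
    matching = nx.max_weight_matching(g, maxcardinality=True)
    matched = {s for edge in matching for s in edge}
    gots = [tuple(edge) for edge in matching]                    # both SMEs used
    gots += [(s, None) for s in sme_states if s not in matched]  # one unused
    # With |Q| automaton states (the start state has no instance) and a
    # matching of size |M|, the netlist uses |Q| - 1 - |M| GOT instances.
    return gots
```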
According to an embodiment, placement selects the hardware elements based on general input and output constraints of the hardware elements. At block 612, the placed netlist is routed to determine the settings for the programmable switches (e.g., inter-block switches 203, intra-block switches 208, and in-line switches 212) used to couple the selected hardware elements together to achieve the connections described by the netlist. In one example, the settings for the programmable switches are determined by determining the specific conductors of the FSM engine 200 that will be used to connect the selected hardware elements, together with the corresponding settings for the programmable switches. Routing can adjust the specific hardware elements selected during placement for some of the instances of the netlist, for example to make the hardware elements easier to couple given the physical design of the conductors and/or switches on the FSM engine 200. Once the netlist is placed and routed, the placed and routed netlist can be converted into multiple bits for programming the FSM engine 200. The multiple bits are referred to herein as an image. At block 614, the image is published by the compiler. The image comprises multiple bits for programming the specific hardware elements and/or programmable switches of the FSM engine 200. In embodiments where the image comprises multiple bits (e.g., 0 and 1), the image may be referred to as a binary image. The bits can be loaded onto the FSM engine 200 to program the states of the SMEs 204, 205, the dedicated elements 224, and the programmable switches, so that the programmed FSM engine 200 implements an FSM having the functionality described by the source code. Placement (block 610) and routing (block 612) can map specific hardware elements at specific locations in the FSM engine 200 to specific states in the automaton. Accordingly, the bits in the image can program the specific hardware elements and/or programmable switches to implement the desired functions. In an example, the image can be published by storing the machine code on a computer-readable medium. In another example, the image can be published by displaying the image on a display device. In yet another example, the image can be published by sending the image to another device (e.g., a programming device used to load the image onto the FSM engine 200). In yet another example, the image can be published by loading the image onto a parallel machine (e.g., the FSM engine 200). In an example, the image can be loaded onto the FSM engine 200 either by loading the bit values from the image directly into the SMEs 204, 205 and other hardware elements 224, or by loading the image into one or more registers and then writing the bit values from the registers to the SMEs 204, 205 and other hardware elements 224. In one example, the states of the switches (e.g., inter-block switches 203, intra-block switches 208, and in-line switches 212) can be programmed. In an example, the hardware elements of the FSM engine 200 (e.g., SMEs 204, 205, other elements 224, programmable switches 203, 208, 212) are memory mapped, so that a programming device and/or computer can load the image onto the FSM engine 200 by writing the image to one or more memory addresses.
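As an illustration of what "converting the placed and routed netlist into multiple bits" can look like, here is a minimal sketch that packs per-element settings into a binary image and publishes it by storing it on a computer-readable medium. The bit layout, the addresses, and the file name are assumptions chosen for illustration; a real image is specific to the target device.

```python
# Pack (bit_address, bit_value) settings produced by placement and
# routing into a binary image, then publish it by writing it to a file.

def build_image(settings):
    """settings: iterable of (bit_address, bit_value) pairs."""
    settings = list(settings)
    size = max(addr for addr, _ in settings) // 8 + 1
    image = bytearray(size)
    for addr, bit in settings:
        if bit:
            image[addr // 8] |= 1 << (addr % 8)
    return bytes(image)

# Hypothetical settings for three programmable bits.
with open("fsm_image.bin", "wb") as f:
    f.write(build_image([(0, 1), (5, 1), (17, 0)]))
```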
Embodiments of these methods may include code, such as microcode, assembly language code, higher-level language code, or the like. Such code may include computer-readable instructions for performing various methods. The code may form portions of a computer program product. Further, the code may be tangibly stored on one or more volatile or non-volatile computer-readable media during execution or at other times. These computer-readable media may include, but are not limited to, hard disks, removable magnetic disks, removable optical disks (e.g., compact disks and digital video disks), magnetic tapes, memory cards or memory sticks, random access memories (RAMs), read-only memories (ROMs), and the like. FIG. 9 generally illustrates an example of a computer 900 having a von Neumann architecture. Upon reading and comprehending the content of this disclosure, one of ordinary skill in the art will understand the manner in which a software program can be launched from a computer-readable medium in a computer-based system to execute the functions defined in the software program. One of ordinary skill in the art will further understand that various programming languages may be employed to create one or more software programs designed to implement and perform the methods disclosed herein. The programs may be structured in an object-oriented format using an object-oriented language such as Java, C++, or one or more other languages. Alternatively, the programs can be structured in a procedure-oriented format using a procedural language, such as assembly language, C, and the like. The software components may communicate using any of a number of mechanisms well known to those of ordinary skill in the art, such as application programming interfaces or interprocess communication techniques, including remote procedure calls and others. The teachings of the various embodiments are not limited to any particular programming language or environment. Thus, other embodiments can be realized. For example, an article of manufacture, such as a computer, a memory system, a magnetic or optical disk, some other storage device, or any type of electronic device or system, may include one or more processors 902 coupled to a computer-readable medium 922, such as a memory (e.g., removable storage media, as well as any memory including an electrical, optical, or electromagnetic conductor), having instructions 924 (e.g., computer program instructions) stored thereon, which, when executed by the one or more processors 902, result in performance of any of the actions described with respect to the methods above. The computer 900 may take the form of a computer system having a processor 902 coupled to a number of elements, directly and/or via a bus 908. These elements may include a main memory 904, a static or non-volatile memory 906, and a mass storage device 916. Other elements coupled to the processor 902 may include an output device 910 (e.g., a video display), an input device 912 (e.g., a keyboard), and a cursor control device 914 (e.g., a mouse). A network interface device 920, used to couple the processor 902 and other elements to a network 926, may also be coupled to the bus 908. The instructions 924 may further be transmitted or received over the network 926 via the network interface device 920 utilizing any one of a number of well-known transfer protocols (e.g., HTTP).
Any of these elements coupled to the bus 908 may be absent, present singly, or present in plural numbers, depending on the specific embodiment to be realized. In one example, one or more of the processor 902, the memories 904, 906, or the storage device 916 may each include instructions 924 that, when executed, can cause the computer 900 to perform any one or more of the methods described herein. In alternative embodiments, the computer 900 operates as a standalone device or may be connected (e.g., networked) to other devices. In a networked environment, the computer 900 may operate in the capacity of a server or a client device in a server-client network environment, or as a peer device in a peer-to-peer (or distributed) network environment. The computer 900 may include a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a web appliance, a network router, a switch or bridge, or any device capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that device. Further, while only a single computer 900 is illustrated, the term "computer" shall also be taken to include any collection of devices that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methods discussed herein. The computer 900 may also include an output controller 928 for communicating with peripheral devices using one or more communication protocols (e.g., universal serial bus (USB), IEEE 1394, etc.). The output controller 928 can, for example, provide an image to a programming device 930 that is communicatively coupled to the computer 900. The programming device 930 can be configured to program a parallel machine (e.g., the parallel machine 100, the FSM engine 200). In other examples, the programming device 930 can be integrated with the computer 900 and coupled to the bus 908, or can communicate with the computer 900 via the network interface device 920 or another device. While the computer-readable medium 924 is shown as a single medium, the term "computer-readable medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers, and/or a variety of storage media, such as the registers of the processor 902, the memories 904, 906, and the storage device 916) that store the one or more sets of instructions 924. The term "computer-readable medium" shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the computer and that cause the computer to perform any one or more of the methodologies of the present invention, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such a set of instructions. The term "computer-readable medium" shall accordingly be taken to include, but not be limited to, tangible media, such as solid-state memories, optical media, and magnetic media. The Abstract is provided to comply with 37 C.F.R. §1.72(b), which requires an abstract that will allow the reader to ascertain the nature and gist of the technical disclosure. It is submitted with the understanding that it will not be used to limit or interpret the scope or meaning of the claims. The following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate embodiment.
Examples
Example 1 includes a computer-implemented method for generating, from source code, an image configured to program a parallel machine.
The method includes: converting the source code into an automaton comprising a plurality of interconnected states; converting the automaton into a netlist, the netlist comprising instances corresponding to the states of the automaton, wherein the instances correspond to hardware elements of the parallel machine, and wherein converting the automaton into a netlist includes grouping states together based on a physical design of the parallel machine; and converting the netlist into the image. Example 2 includes a computer-readable medium including instructions that, when executed by a computer, cause the computer to perform operations. The operations include: converting source code into an automaton comprising a plurality of interconnected states; converting the automaton into a netlist, the netlist comprising instances corresponding to the states of the automaton, wherein the instances correspond to hardware elements of a parallel machine, and wherein converting the automaton into a netlist includes grouping states together based on a physical design of the parallel machine; and converting the netlist into an image. Example 3 includes a computer comprising: a memory having software stored thereon; and a processor communicatively coupled to the memory, wherein the software, when executed by the processor, causes the processor to: convert source code into an automaton comprising a plurality of interconnected states; convert the automaton into a netlist, the netlist comprising instances corresponding to the states of the automaton, wherein the instances correspond to hardware elements of a parallel machine, wherein the instances include a plurality of first instances and group instances containing two or more first instances, and wherein converting the automaton into a netlist includes grouping states together in the group instances based on a number of unused first instances; and convert the netlist into an image. Example 4 includes a system comprising a computer configured to: convert source code into an automaton comprising a plurality of interconnected states; convert the automaton into a netlist, the netlist comprising instances corresponding to the states of the automaton, wherein the instances correspond to hardware elements of a parallel machine, wherein the instances include a plurality of first instances and group instances containing two or more first instances, and wherein converting the automaton into a netlist includes grouping states together in the group instances based on a number of unused first instances; and convert the netlist into an image.
The system also includes a device configured to load the image onto a parallel machine. In Example 5, the subject matter of any one of Examples 1 to 4 can optionally include wherein the instances include SME instances corresponding to state machine element (SME) hardware elements and SME group instances corresponding to hardware elements comprising a group of SMEs, and wherein grouping includes grouping states into the SME group instances. In Example 6, the subject matter of any one of Examples 1 to 5 can optionally include wherein the physical design includes the physical design of the hardware elements comprising a group of SMEs. In Example 7, the subject matter of any one of Examples 1 to 6 can optionally include wherein the physical design includes one of an input restriction or an output restriction on the SMEs in the hardware elements comprising a group of SMEs. In Example 8, the subject matter of any one of Examples 1 to 7 can optionally include wherein the physical design includes a shared-output restriction on the SMEs in the hardware elements comprising a group of SMEs. In Example 9, the subject matter of any one of Examples 1 to 8 can optionally include wherein the SME group instances include group of two (GOT) instances each containing two SME instances, and wherein the physical design includes the SMEs in each GOT being coupled to a common output. In Example 10, the subject matter of any one of Examples 1 to 9 can optionally include wherein converting the automaton into a netlist includes determining which of the states can be grouped together in a GOT instance, and pairing the states based on the determination. In Example 11, the subject matter of any one of Examples 1 to 10 can optionally include wherein a first state and a second state can be paired together in a GOT instance when neither the first state nor the second state is a final state of the automaton and one of the first state and the second state does not drive any state other than the first state or the second state. In Example 12, the subject matter of any one of Examples 1 to 11 can optionally include wherein a first state and a second state can be paired together in a GOT instance when neither the first state nor the second state is a final state of the automaton and the first state and the second state both drive the same external states. In Example 13, the subject matter of any one of Examples 1 to 12 can optionally include wherein a first state and a second state can be paired together in a GOT instance when one of the first state and the second state is a final state of the automaton and the other of the first state and the second state does not drive any external state. In Example 14, the subject matter of any one of Examples 1 to 13 can optionally include wherein a first state and a second state can be paired together in a GOT instance when both the first state and the second state are final states of the automaton and the first state and the second state both drive the same external states. In Example 15, the subject matter of any one of Examples 1 to 14 can optionally include wherein determining which of the states can be grouped together in a GOT instance includes using graph theory to determine which of the states can be grouped together in a GOT instance. In Example 16, the subject matter of any one of Examples 1 to 15 can optionally include wherein using graph theory to determine which of the states can be grouped together in GOT instances includes using graph theory to identify a maximum match in order to determine which of the states can be grouped together in a GOT instance. In Example 17, the subject matter of any one of Examples 1 to 16 can optionally include publishing the image. In Example 18, the subject matter of any one of Examples 1 to 17 can optionally include wherein the instances include general instances and dedicated instances, wherein the general instances correspond to general states of the automaton and the dedicated instances correspond to dedicated states of the automaton. In Example 19, the subject matter of any one of Examples 1 to 18 can optionally include wherein the hardware elements corresponding to the general instances include state machine elements (SMEs) and groups of two (GOTs), and the hardware elements corresponding to the dedicated instances include counters and logic elements. In Example 20, the subject matter of any one of Examples 1 to 19 can optionally include wherein the automaton is a homogeneous automaton. In Example 21, the subject matter of any one of Examples 1 to 20 can optionally include wherein converting the automaton into a netlist includes mapping each of the states of the automaton to an instance of a corresponding hardware element and determining the connectivity between the instances. In Example 22, the subject matter of any one of Examples 1 to 21 can optionally include wherein the netlist further includes a plurality of connections between the instances representing conductors between the hardware elements. In Example 23, the subject matter of any one of Examples 1 to 22 can optionally include wherein converting the automaton into a netlist includes converting the automaton into a netlist containing instances corresponding to the states of the automaton other than the start state. In Example 24, the subject matter of any one of Examples 1 to 23 can optionally include determining a location in the parallel machine for the hardware element corresponding to each instance of the netlist. In Example 25, the subject matter of any one of Examples 1 to 24 can optionally include wherein grouping states together includes grouping states together based on the physical design of a hardware element comprising a group of common elements. In Example 26, the subject matter of any one of Examples 1 to 25 can optionally include determining which conductors of the parallel machine will be used to connect the hardware elements, and determining settings for programmable switches of the parallel machine, wherein the programmable switches are configured to selectively couple the hardware elements together. In Example 27, the subject matter of any one of Examples 1 to 26 can optionally include wherein the group instances include group of two (GOT) instances, and wherein grouping states includes pairing states according to which states are driven by the paired states. In Example 28, the subject matter of any one of Examples 1 to 27 can optionally include wherein grouping states together in group instances based on a number of unused first instances includes determining whether a first state and a second state can be paired based on the following conditions: neither the first state nor the second state is a final state in the automaton, and one of the first state and the second state does not drive any state other than the first state or the second state; neither the first state nor the second state is a final state in the automaton, and the first state and the second state both drive the same external states; one of the first state and the second state is a final state, and the one of the first state and the second state that is not a final state does not drive any state other than the first state or the second state; and both the first state and the second state are final states, and the first state and the second state both drive the same external states. In Example 29, the subject matter of any one of Examples 1 to 28 can optionally include wherein converting the automaton into a netlist includes: modeling the states as a graph, wherein vertices of the graph correspond to the states and edges of the graph correspond to possible pairings of the states; determining matching vertices of the graph; and pairing the states corresponding to the matching vertices. In Example 30, the subject matter of any one of Examples 1 to 29 can optionally include wherein converting the automaton into a netlist includes determining a maximum match for the graph. In Example 31, the subject matter of any one of Examples 1 to 30 can optionally include wherein converting the automaton into a netlist includes pairing each set of states corresponding to matching vertices, and mapping each state corresponding to an unmatched vertex to its own GOT instance, wherein one SME instance in that GOT instance will be unused. In Example 32, the subject matter of any one of Examples 1 to 31 can optionally include wherein grouping states together includes pairing states based on which states are driven by the paired states. In Example 33, the subject matter of any one of Examples 1 to 32 can optionally include wherein grouping states together in group instances based on a number of unused first instances includes determining whether a first state and a second state can be paired based on the following conditions: neither the first state nor the second state is a final state in the automaton, and one of the first state and the second state does not drive any state other than the first state or the second state; neither the first state nor the second state is a final state in the automaton, and the first state and the second state both drive the same external states; one of the first state and the second state is a final state, and the one of the first state and the second state that is not a final state does not drive any state other than the first state or the second state; and both the first state and the second state are final states, and the first state and the second state both drive the same external states. In Example 34, the subject matter of any one of Examples 1 to 33 can optionally include wherein grouping states together in group instances based on a number of unused first instances includes: modeling the states as a graph, wherein vertices of the graph correspond to the states and edges of the graph correspond to possible pairings of the states; determining matching vertices of the graph; and pairing the states corresponding to the matching vertices. In Example 35, the subject matter of any one of Examples 1 to 34 can optionally include wherein grouping states together in group instances based on a number of unused first instances includes determining a maximum match for the graph. In Example 36, the subject matter of any one of Examples 1 to 35 can optionally include wherein grouping states together in group instances based on a number of unused first instances includes pairing each set of states corresponding to matching vertices, and mapping each state corresponding to an unmatched vertex to a GOT instance, wherein one SME instance in that GOT instance will be unused. In Example 37, the subject matter of any one of Examples 1 to 36 can optionally include wherein the device is configured to implement each pair of states as a group of two hardware elements in the parallel machine. Example 38 includes a parallel machine programmed by an image produced with the process of any one of Examples 1 to 37. |
PROBLEM TO BE SOLVED: To provide a system and method of arbitrating cache requests. SOLUTION: Arbitration of cache requests is implemented in a graphics processing unit (GPU). An arbiter receives requests from a color processor and a depth processor, and determines which of the received requests has the highest priority. The request with the highest configurable priority is then provided to the cache. The arbiter determines priority, for example, based on whether a location in the cache associated with a request is available, a weight associated with the request, the number of requests of a particular type processed by the arbiter, or any combination thereof. SELECTED DRAWING: Figure 4 |
1. An apparatus comprising: a color processor configured to process image data to generate color data indicative of colors of pixels of a scene; a depth processor configured to process the image data to generate depth data indicative of distance values of pixels in the scene; a cache configured to store data; and an arbiter comprising electronic hardware and configured to receive a cache depth request from the depth processor, receive a cache color request from the color processor, and provide the cache depth request and the cache color request to the cache; wherein the arbiter is configured to: assign weights to the cache depth request and the cache color request received from the depth processor and the color processor, wherein the weights are assigned based on a type of the scene; determine which one of the received cache depth request and cache color request has the highest priority based at least in part on the assigned weights; and provide the cache request determined to have the highest priority to the cache.
2. The apparatus of claim 1, wherein the arbiter is configured to assign different weights to the cache depth request and the cache color request for scenes in different scene groups.
3. The apparatus of claim 1, wherein the arbiter is configured to assign different weights to the cache depth request and the cache color request for different types of scenes.
4. The apparatus of claim 1, wherein the arbiter is configured to assign different weights to the cache depth request and the cache color request for different scenes.
5. The apparatus of claim 1, further comprising a driver, wherein the arbiter receives the weights from the driver, and the driver is configured to set a weight register of the arbiter.
6. The apparatus of claim 1, wherein the weights are assigned based on a number of depth cache requests received from the depth processor and a number of color cache requests received from the color processor.
7. The apparatus of claim 1, wherein the arbiter is configured to assign a depth cache read request to have the highest weight, a depth cache write request to have the next highest weight after the depth cache read request, a color cache read request to have the next highest weight after the depth cache write request, and a color cache write request to have the lowest weight.
8. An electronically-implemented method of providing a selected cache request to a cache, the method comprising: receiving, by an arbiter, one or more first cache requests from a color processor, wherein the color processor is configured to process image data of a scene to generate color data indicative of colors of pixels of the scene; receiving, by the arbiter, one or more second cache requests from a depth processor, wherein the depth processor is configured to generate depth data indicative of distance values of pixels in the scene; assigning a weight to each of the first and second cache requests based on a type of the scene; determining which of the received cache requests has the highest priority based at least in part on the assigned weights; and providing the received cache request determined to have the highest priority to the cache.
9. The method of claim 8, further comprising assigning, by the arbiter, different weights to the cache requests for scenes within different scene groups.
10. The method of claim 8, further comprising assigning, by the arbiter, different weights to the cache requests for different types of scenes.
11. The method of claim 8, further comprising assigning, by the arbiter, different weights to the cache requests for different scenes.
12. The method of claim 8, further comprising receiving, by the arbiter, the weights from a driver to set a weight register of the arbiter.
13. The method of claim 8, further comprising assigning, by the arbiter, the weights based on a number of depth cache requests received from the depth processor and a number of color cache requests received from the color processor.
14. The method of claim 8, further comprising assigning, by the arbiter, a depth cache read request to have the highest weight, a depth cache write request to have the next highest weight after the depth cache read request, a color cache read request to have the next highest weight after the depth cache write request, and a color cache write request to have the lowest weight. |
System and method for arbitrating cache requests [0001] FIELD: Embodiments of the present invention relate generally to electronics and, more specifically, to arbitration of cache requests. [0002] As mobile devices such as smartphones are used for a wide variety of purposes, processors for mobile devices are designed with ever-increasing functionality. For example, a processor for a mobile device may include several components with separate functions, such as a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), and the like. GPUs are widely used to render two-dimensional (2D) and three-dimensional (3D) images for various applications. A GPU may be used to render still images and/or video images. [0003] In order to render an image, the GPU may include a color processor and a depth processor. The color processor can process image data to generate color data indicative of the colors of the pixels of a scene to be rendered on a display. The depth processor can process the image data to generate depth data indicative of distance values of pixels in the scene. The color processor and the depth processor can share memory to store color data and depth data. When there are multiple requests to access the shared memory, the order in which the requests are processed can be determined by arbitration. Existing methods of arbitrating among requests to access shared memory have resulted in suboptimal performance and bottlenecks in the GPU pipeline. [0004] One aspect of the present disclosure is an apparatus including a cache and an arbiter. The cache is configured to store data. The arbiter includes electronic hardware. The arbiter is configured to assign weights to different types of cache requests based on data received by the arbiter. The different types of cache requests include at least a first type of cache request and a second type of cache request. The arbiter is also configured to receive a first request to access the cache from a depth processor, the first request being of the first type of cache request, and to receive a second request to access the cache from a color processor, the second request being of the second type of cache request. The arbiter is configured to determine which of the received requests has the highest priority based at least in part on the weights associated with the first type of cache request and the second type of cache request. The arbiter is further configured to provide the received request determined to have the highest priority to the cache. [0005] Another aspect of the present disclosure is an apparatus that includes a cache configured to store data and arbitration means for determining a relative priority of different types of cache requests based on weights associated with the different types of cache requests. The arbitration means is configured to provide the different types of cache requests to the cache based on the relative priority. The apparatus also includes a color processor configured to provide cache requests to the arbitration means and a depth processor configured to provide cache requests to the arbitration means. [0006] Another aspect of the present disclosure is an electronically-implemented method of providing a selected cache request to a cache.
The method includes receiving, from a depth processor and a color processor, a plurality of different types of cache requests for accessing a cache shared by the depth processor and the color processor; determining, based at least in part on one or more weights associated with the different types of cache requests and one or more counts associated with the different types of cache requests, that a selected cache request of the received cache requests has the highest priority of the received cache requests; and providing the selected cache request to the cache before providing another of the received cache requests to the cache. [0007] Another aspect of the present disclosure is a non-transitory computer-readable storage medium including instructions that, when executed, direct a graphics processing unit to perform a method. The method includes selecting a cache request from a plurality of different types of cache requests for accessing the cache, based at least in part on weights associated with the different types of cache requests and counts associated with the different types of cache requests, and providing the selected cache request to the cache, wherein the different types of cache requests are provided by a color processor and a depth processor. [0008] Certain aspects, advantages, and novel features of the inventions are described herein for purposes of summarizing the disclosure. It is to be understood that not necessarily all such advantages may be achieved in accordance with any particular embodiment of the invention. Thus, the invention may be embodied or carried out in a manner that achieves or optimizes one advantage or group of advantages as taught herein, without necessarily achieving other advantages as may be taught or suggested herein. [0009] FIG. 1 is a schematic block diagram of an illustrative graphics processing unit. [0010] FIG. 2 is a schematic block diagram including an illustrative arbiter configured to receive cache requests from a depth processor and a color processor and to provide a cache request to a cache, in accordance with one embodiment. [0011] FIG. 3 is a schematic block diagram of an illustrative arbiter, according to one embodiment. [0012] FIG. 4 is an illustrative flow diagram of a process for providing a cache request selected from a plurality of different types of cache requests to a cache, according to one embodiment. [0013] To avoid duplication of description, components having the same or similar functions may be referred to by the same reference numerals. [0014] Although particular embodiments are described herein, other embodiments, including embodiments that do not provide all of the advantages and features described herein, will be apparent to those of ordinary skill in the art. [0015] In general, aspects of the present disclosure relate to arbitration between depth requests and color requests for access to a shared cache. The arbiter can receive depth requests, such as depth writes and depth reads, from the depth processor, as well as color requests, such as color writes and color reads, from the color processor. The arbiter may have configurable priorities among the various requests to the shared cache. Accordingly, the priorities among the various requests can be adjusted to improve the performance of the system. [0016] The arbiter embodiments described herein may have configurable weights for determining priorities among various types of cache requests. These weights can be programmed by a driver and/or by hardware.
Such an arbiter can provide requests to the shared cache in an order based on the relative priorities of the different types of cache requests, in order to use the shared cache efficiently and/or to avoid bottlenecks in the pipeline. In some instances, an appropriate relative priority among the different types of cache requests can be maintained even when one or more of the different types of cache requests is not received by the arbiter for a relatively long period of time compared to the other types of cache requests (e.g., 20 cache requests being processed without a particular type of cache request being received). Cache request queues may not be required, depending on the particular implementation. [0017] Arbitration of cache requests can be determined based on the availability of a location in the shared cache. Priority can be determined based on comparing the count value of a counter for a particular type of cache request to a respective weight stored in a weight register for that type of cache request. When one or more conditions are detected, the counters for certain types of cache requests can be cleared. [0018] Certain implementations of the subject matter described in this disclosure can be implemented to realize one or more of the following potential advantages, among others. The relative priorities of different types of cache requests can be set differently for different scenes to achieve better performance. Where possible, cache requests with an available destination location can be served before other cache requests that do not have an available destination location, to achieve better performance. The arbitration schemes described herein can maintain an appropriate relative priority among different types of cache requests even when one or more types of cache requests are not received over a relatively long period of time. The arbiter described herein can be implemented with a relatively small amount of hardware. For example, one exemplary arbiter can assign priorities to four different types of cache requests with only 24 one-bit flip-flops. In this example, the arbiter can consume an area of less than about 1000 μm². The arbiter described herein can also operate at relatively high speed. For example, when operating at a power supply voltage of 0.855 V in a 28-nanometer process technology, the longest path in the arbiter can be traversed in less than about 500 picoseconds. [0019] In some embodiments, the systems and apparatus described herein can be implemented in an integrated circuit, such as a graphics processing unit (GPU), that implements one or more of the functions described herein. One non-limiting example of such a graphics processing unit is the ADRENO® integrated graphics solution, part of the SNAPDRAGON® line of chipsets provided by Qualcomm of San Diego, California. In these embodiments, the GPU may include a memory that stores instructions for performing one or more of the functions described herein. [0020] FIG. 1 is a schematic block diagram of an illustrative graphics processing unit (GPU) 100. Such a GPU may be included in an integrated circuit designed for smartphones, for example. The depicted GPU 100 includes a shader system 110, color/depth blocks 120a-120d, and memory arbitration (MARB) blocks 130a-130d. It will be appreciated that the GPU 100 may include more or fewer blocks than shown. The GPU 100 may communicate with graphics memories (Gmem) 140a-140d external to the GPU.
The GPU 100 may communicate with a central processing unit (CPU) 150. The Gmem 140a-140d and/or the CPU 150 may be included in a chipset or processor that includes the GPU 100. [0021] The shader system 110 can process graphics data to generate appropriate levels of light and/or color in an image to be rendered. The shader system 110 can adjust the position, hue, saturation, brightness, contrast, etc., or some combination thereof, of some or all of the image to be rendered. The shader system 110 can provide image data to the color/depth blocks 120a-120d. [0022] Each color/depth block 120a-120d includes a color processor 220a-220d and a depth processor 210a-210d. The color processors 220a-220d can process the image data to generate color data indicative of the colors of the pixels of a scene to be rendered on a display. The depth processors 210a-210d can process the image data to generate depth data indicative of distance values of pixels within the scene. The depth processors and the color processors may be implemented by any suitable circuits. In some embodiments, a depth processor may be separate from a color processor. Although the depth processor and the color processor implement different functions, in certain embodiments these processors can share some common circuitry. Each of the color/depth blocks 120a-120d may correspond to a different portion of the display. For example, the display may be divided into four quadrants, and each of the color/depth blocks 120a-120d may correspond to one of the four quadrants. Although the GPU 100 shown in FIG. 1 includes four color/depth blocks 120a-120d, each corresponding to a different part of the display, it will be appreciated that any suitable number of color/depth blocks may be implemented in a GPU for a particular application. For example, a single color/depth block may be implemented for a particular application. In some implementations, one MARB block may be shared among multiple RB blocks. [0023] As shown in FIG. 1, each color/depth block 120a-120d may communicate with a respective MARB block 130a-130d. In some other implementations, two or more of the color/depth blocks 120a-120d may communicate with a single MARB block 130a. Each MARB block includes an arbiter and a cache. Data from the cache of the MARB block 130a may be provided to the Gmem 140a. In FIG. 1, each MARB block 130a-130d includes an arbiter and a cache. [0024] FIG. 2 is a schematic block diagram illustrating the data flow to and from the arbiter and the cache contained in a MARB block 130. The MARB block 130 may be any one of the MARB blocks 130a-130d of FIG. 1. Data from a rasterizer may be provided to the depth processor 210. The depth processor 210 can generate depth read requests and depth write requests to access the cache 250, and can provide the depth read requests and depth write requests to the arbiter 240. The color processor 220 can receive data from the depth processor 210. In a particular implementation, the color processor 220 may receive some data from the depth processor 210 via a stream processor 230. The color processor 220 can generate color read requests and color write requests to access the cache 250, and can provide the color read requests and color write requests to the arbiter 240. The depth processor 210 and the color processor 220 may be included in one of the color/depth blocks 120a-120d of FIG.
1. [0025] The arbiter 240 can receive cache requests from the depth processor 210 and the color processor 220 and provide cache requests to the cache 250. The cache 250 may be shared by the depth processor 210 and the color processor 220. Both the depth processor 210 and the color processor 220 may send read and write requests to the cache 250. Thus, in certain embodiments, the cache 250 may be provided with four different types of cache requests: depth read, depth write, color read, and color write. [0026] In graphics pipelines, depth instructions may be given preference over color instructions. A depth read or depth write request to the cache may be blocked if a corresponding color request is not supplied. If depth requests are not supplied sufficiently, a bottleneck in which idle instructions are executed may occur in the pipeline. Therefore, when there are multiple cache requests, the arbiter 240 can determine which request should be provided to the cache 250 first, in order to achieve higher performance compared to a device that does not establish such an ordering. [0027] The arbiter 240 can receive different types of cache requests from one or more depth processors 210 and one or more color processors 220 and determine which of the received requests has the highest priority. For example, the arbiter 240 may receive two or more of depth read, depth write, color read, and color write requests and determine which request has the highest priority. The cache request with the highest priority may then be provided to the cache 250 by the arbiter 240 before the other received requests are provided to the cache 250. The arbiter 240 may receive multiple inputs and generate a single output to provide the selected cache request to the cache 250 at a particular point in time. Each of the multiple inputs of the arbiter 240 may correspond to a different type of cache request. For example, as shown in FIG. 2, the four inputs of the arbiter 240 may correspond to depth read, depth write, color read, and color write requests, respectively. The priorities of the requests may be configurable. [0028] In determining which cache request has the highest priority, the arbiter 240 can check the availability of the location in the cache 250 associated with a received cache request. For example, if cache information received from the cache 250 indicates that the location of the cache 250 associated with a selected cache request is not available, the selected cache request may be determined to have a lower priority. A cache location may be unavailable, for example, if data other than the requested data is stored at the cache location and/or if valid data is not stored at the cache location. [0029] As illustrated, the arbiter 240 can provide a single request to the cache 250 at a time. The cache 250 may also interface with the Gmem 140, which may be one of the Gmems 140a-140d of FIG. 1, and with a unified cache (Ucache) 260. The cache 250 can provide Gmem requests to the Gmem 140 and receive Gmem data from the Gmem 140. Likewise, the cache 250 can provide Ucache requests to the Ucache 260 and receive Ucache data from the Ucache 260. The Gmem 140 and the Ucache 260 may each communicate with a system memory 270. [0030] FIG. 3 is a schematic block diagram of an illustrative example of the arbiter 240, according to one embodiment. In FIG. 3, "Z" refers to depth and "C" refers to color.
The arbiter 240 receives different types of cache requests, including depth read (Z read), depth write (Z write), color read (C read), and color write (C write) requests, and provides selected ones of the different types of cache requests to the cache based on the relative priorities of the different types of cache requests. The arbiter 240 comprises electronic hardware and may be implemented by any suitable circuit, such as digital circuitry. The arbiter 240 may include weight registers 320-326, input counters 330-336, an output counter 338, an arbitration circuit 340, and a multiplexer 360. It will be appreciated that in some implementations the arbiter 240 may include more or fewer components than shown. [0031] The arbiter 240 may include a weight register for each type of cache request. For example, as shown in FIG. 3, the weight registers include a depth read weight register 320, a depth write weight register 322, a color read weight register 324, and a color write weight register 326. Each of the weight registers may hold one or more weights from which the relative priority of a particular type of cache request can be determined. The weight registers are configurable and can be set in various ways. For example, the weight in a particular weight register may be set by a driver. Alternatively or additionally, the weights of one or more of the weight registers may be set by a weighting circuit (not shown) configured to generate data that is received by the arbiter 240 to assign weights to the different types of cache requests. The weight register weights may also be set based on pipeline information, such as information from a first-in, first-out (FIFO) counter. By adjusting one or more weights in the weight registers 320-326, the relative priorities among the different types of cache requests can be changed. For example, if there are more depth requests to be executed than color requests, according to one embodiment, a depth weight register may be assigned a higher weight than a color weight register. The weights can, for example, be assigned values corresponding to different scenes or types of scenes. Thus, priorities can be customized for specific scenes or groups of scenes. [0032] The arbiter 240 may include one input counter for each different type of cache request. The input counters can count the numbers of granted cache requests, and thus may be referred to as grant counters. As shown in FIG. 3, there may be four input counters: a depth read counter 330, a depth write counter 332, a color read counter 334, and a color write counter 336. Each of the input counters 330-336 may correspond to a different type of cache request, and each can count how many times the corresponding type of cache request has been processed. For example, each input counter 330-336 may track how many times each type of cache request was received by the arbiter 240 and/or provided to the cache 250. The input counters 330-336 can track the number of times a cache request was granted by incrementing and/or decrementing their count values. It will be appreciated that, in some other embodiments, different numbers of input counters 330-336 may be implemented and/or different numbers of types of cache requests may be processed. When one of the input counters 330-336 reaches a particular count value, the relative priorities of the cache requests can change.
For example, if the count value of a particular input counter is greater than or equal to the weight in the corresponding weight register, the corresponding type of cache request may have a lower priority compared to other types of cache requests.[0033]The arbiter 240 may also include an output counter 338 for counting how many cache requests were processed by the arbiter 240 in total. For example, the output counter 338 can keep track of how many cache requests have been received in total by the arbiter 240 and/or provided by the arbiter 240 to the cache 250. In one embodiment (not shown), an adder circuit may add the count values of the input counters 330-336 to generate the total count value instead of the output counter 338.[0034]The arbiter's input counters 330-336 and/or the output counter 338 may be reset in response to the arbiter 240 detecting one or more conditions. One exemplary condition for resetting one or more counters of the arbiter 240 is that a specified number of cache requests have been processed by the arbiter 240. For example, the output counter 338 can count the number of cache requests provided to the cache 250, and the arbitration circuit 340 can reset multiple counters in response to detecting that the output counter 338 has reached a particular count value. Thus, the counters may be reset such that the cache requests are balanced over a specified number of cache requests (e.g., 10, 15, 20, or 30 cache requests). Another exemplary condition for resetting one or more counters of the arbiter 240 is that one or more of the input counters 330-336 reach a threshold count value. Such threshold count values may differ between input counters. In one example, if each input counter 330-336 of the arbiter 240 has a count value that is greater than or equal to the weight in the corresponding priority register, all the counters in the arbiter 240 may be cleared to an initial value, such as 0. Alternatively or additionally, if the total number of cache requests provided to the cache 250 (e.g., as counted by the output counter 338) is greater than or equal to the sum of all of the weights of the weight registers 320-326 of the arbiter 240, all the counters, including the input counters 330-336 and the output counter 338, can be cleared to initial values, such as 0. Accordingly, the arbiter 240 may cause the values in one or more of the counters 330-338 of the arbiter 240 to be cleared.[0035]The arbitration circuit 340 can determine the relative priorities between different types of cache requests. In particular implementations, the arbitration circuit 340 may be implemented by digital circuitry. The arbitration circuit 340 can receive different types of cache requests, with different inputs of the arbitration circuit 340 receiving the respective types. For example, as shown in FIG. 3, different inputs of the arbitration circuit 340 receive a depth read request, a depth write request, a color read request, and a color write request, respectively. The arbitration circuit 340 can assign priorities to the different types of cache requests based on information from the weight registers 320-326, information from the input counters 330-336, information from the cache 250, or any combination thereof. The cache request with the highest priority may be provided to the cache 250. The arbitration circuit 340 can generate a select signal indicating which type of cache request should be provided to the cache 250.
Multiplexer 360 may receive the select signal from arbitration circuit 340 and provide the selected cache request to cache 250 based on the select signal.[0036]A prioritized queue may be initialized to set the initial priorities. For example, a prioritized queue may be initialized to have the following relative priorities: depth read with the highest priority, depth write with the next highest priority, color read with the next highest priority, and color write with the lowest priority. The prioritized queue may be set based at least in part on the values assigned to the weight registers 320-326. Arbitration circuit 340 may include a state machine to implement the prioritized queue and/or other functions.[0037]The arbitration circuit 340 can manage the prioritized queue. In one embodiment, a cache request received by the arbiter 240 is provided to the cache 250 when it is of the type having the highest relative priority in the prioritized queue, the request is valid, the location associated with the request in the cache 250 is available, and the corresponding input counter has a count value less than the corresponding weight register value. The prioritized queue of cache requests can then be changed by moving the type of cache request just supplied to the cache to the end of the queue. The type of cache request at the head of the queue can also be moved to the end of the queue if its input counter has a count value that is greater than or equal to the value in its corresponding weight register. The type at the head of the queue can be moved to the end repeatedly until it is determined that the head type's input counter has a count value that is less than the value in its corresponding weight register. It will be appreciated that, in some other embodiments, different comparisons between the values in the weight registers and the values in the counters may be performed. For example, instead of checking whether the count value exceeds the corresponding weight register value, the arbitration circuit can check whether the count value is less than the corresponding weight value. Alternatively or additionally, the prioritized queue can move the head type of cache request to the end of the queue if a location in the cache associated with that cache request is not available.[0038]FIG. 4 is an illustrative flow diagram of a process 400 for providing a cache request, selected from a plurality of different types of cache requests, to a cache, according to one embodiment. In process 400, different types of cache requests are received and the selected cache request with the highest priority is provided to the cache. An arbiter within a GPU may perform some or all of the process 400 to selectively provide a particular type of cache request to a cache shared by a depth processor and a color processor. Process 400 may be implemented, for example, by any of the devices described herein, such as the devices of FIGS. 1, 2, and/or 3. In addition, any of the devices described herein may implement any combination of the features of process 400.[0039]At block 410, weights associated with different types of cache requests may be assigned. For example, the weight registers may be programmed with values corresponding to color read, color write, depth read, and depth write, respectively.
In this way, the priorities between the various types of cache requests can be initialized. The weight registers may be programmed by instructions, hardware, firmware, or any combination thereof. The weights can be programmed with different values after the initial programming. In some implementations, the value of a weight may be adjusted after a cache request has been received. By assigning different weight values, the relative priorities of the different types of cache requests can be adjusted.[0040]At block 420, cache requests may be received. For example, the arbiter may receive a cache request from the depth processor and a cache request from the color processor. Thus, the arbiter can receive different types of cache requests from different processors and/or different types of cache requests from the same processor. Different types of cache requests can be received at different input contacts of the arbiter. When two or more different types of cache requests are received by the arbiter, the arbiter can determine which type of request has the highest relative priority, and the request of the type with the highest relative priority can then be provided to the cache first.[0041]At block 430, it may be determined whether the cache location associated with a cache request is available. For example, the arbiter may receive information from the cache that indicates whether a cache location associated with reading from or writing to the cache is available. More specifically, in certain embodiments, the arbiter can receive information indicating whether a cache location associated with one or more of color read, color write, depth read, or depth write is available. The arbiter may then determine whether a cache location associated with a particular cache request is available.[0042]At block 440, the number of processed cache requests of a particular type may be compared to the weight associated with that particular type. In one example, a counter can be updated each time a particular type of cache request is processed, and the count value of the counter can be compared to the weight stored in the weight register corresponding to that particular type of cache request. Such a comparison may be performed for one or more of the different types of cache requests received by the arbiter. The counters may be cleared in response to the arbiter detecting, for example, the conditions described above.[0043]At block 450, the cache request of the type determined to have the highest priority may be provided to the cache. The highest priority can be determined based on whether an associated location in the cache is available and/or based on a comparison between a count of the number of cache requests of a particular type that have been processed and the weight of that particular type of cache request. Determining the highest priority may involve, for example, determining whether the request is valid, checking whether the cache location associated with the request is available, and then comparing the count value associated with the type of cache request at the head of the prioritized queue with the corresponding value in the weight register. For example, the count value may first be compared with the corresponding value in the weight register for the type of cache request at the head of the prioritized queue.
Then, if that comparison indicates that that type of cache request should not be provided to the cache, the count value is compared with the corresponding value in the weight register of the next type of cache request in the prioritized queue. This may be repeated until the comparison of the count value with the corresponding value in the weight register indicates that a particular type of cache request should be provided to the cache.[0044]After the cache request is provided to the cache, the process 400 may continue by receiving another cache request, determining whether the location associated with that request is available, comparing the number of requests of a particular type to the corresponding weight, providing cache requests to the cache, or any combination thereof.[0045]Some of the embodiments described above provide examples related to graphics processing units. The principles and advantages of the embodiments of the technology discussed herein are applicable in numerous general purpose or special purpose computing system environments or configurations. Examples of such computing systems, environments, and/or configurations that may be suitable for use with the techniques described herein include, but are not limited to, personal computers, server computers, handheld or laptop devices, tablet computers, multiprocessor systems, processor-based systems, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.[0046]Those skilled in the art will appreciate that the operations of the various illustrative logical blocks, modules, circuits, and algorithms described in connection with the embodiments disclosed herein may be embodied as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and operations have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the design constraints imposed on the overall system and the specific application. Those skilled in the art can implement the described functionality in various ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.[0047]The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. The processor may also be implemented as a combination of computing devices, for example, a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
In addition, the processor may have a single core or multiple cores. Further, the processor may be any dedicated processor, such as a graphics processor.[0048]In one or more exemplary embodiments, the functions and methods described may be implemented in hardware, software, firmware executing on a processor, or any combination thereof. When implemented in software, the functions may be stored in non-transitory computer readable storage. By way of example, and not limitation, such non-transitory computer-readable storage may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to store the desired program code in the form of data structures and that can be accessed by a computer. Furthermore, it will be appreciated that the methods discussed herein are performed by at least a physical circuit. Accordingly, the claims are not intended to cover purely mental processes or abstract concepts. Indeed, the disclosed techniques are not mental steps; they are not carried out in the human mind or by a person writing on a piece of paper.[0049]Throughout the description and claims, unless the context clearly requires otherwise, the words "comprise", "comprising" and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense; in other words, they should be interpreted in the sense of "including, but not limited to". The terms "coupled", "connected", and the like, as commonly used in the present specification, refer to two or more elements that may be connected either directly or by way of one or more intermediate elements. Further, terms such as "herein", "above", "below", and terms of similar meaning, as used herein, shall refer to this specification as a whole, not to any specific part of the document. Where the context permits, terms in the above detailed description using the singular or plural may each include the plural or singular number, respectively. The term "or", in reference to a list of two or more items, covers all of the following interpretations of the term: any of the items in the list, all of the items in the list, and any combination of the items in the list. All numerical values provided herein are intended to include similar values within the measurement error.[0050]Further, conditional language used herein, such as "can", "could", "might", "e.g.", "for example", "such as", and the like, unless specifically stated otherwise or otherwise understood within the context as used, is generally intended to convey that certain embodiments include certain features, elements, and/or states while other embodiments do not include them. Thus, such conditional language is not generally intended to imply that features, elements, and/or states are in any way required for one or more embodiments, or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements, and/or states are included in, or are to be performed in, any particular embodiment.[0051]The above detailed description of the embodiments is not intended to be exhaustive or to limit the invention to the precise form disclosed above.
Certain embodiments of, and examples for, the present invention are described above for illustrative purposes. For example, although a process or block may be presented in a given order, alternative embodiments may execute routines having actions in a different order, or may employ systems having blocks in a different order, and some processes or blocks may be deleted, moved, added, subdivided, combined, and/or modified. Each of these processes or blocks can be implemented in a variety of different ways. Also, although processes or blocks are sometimes shown as being executed sequentially, these processes or blocks may alternatively be executed in parallel or at different times.[0052]Although specific embodiments have been described, these embodiments are presented by way of example only and are not intended to limit the scope of the disclosure. For example, as those skilled in the relevant art will recognize, various equivalent modifications are possible within the scope of the invention. Furthermore, elements and acts of the various embodiments described above may be combined to provide additional embodiments. Indeed, the methods, systems, devices, and products described herein may be embodied in a variety of other forms. In addition, various omissions, substitutions, and changes in the form of the methods, systems, devices, and products described herein can be made without departing from the spirit of the disclosure. The subject matter recited in the claims as originally filed in the present application is set forth below. [C1] An apparatus comprising: a cache configured to store data; and an arbiter comprising electronic hardware, wherein the arbiter is configured to: assign weights to different types of cache requests based on data received by the arbiter, wherein the different types of cache requests include at least a first type of cache request and a second type of cache request; receive a first request to access the cache from a depth processor, wherein the first request is a cache request of the first type; receive a second request to access the cache from a color processor, wherein the second request is a cache request of the second type; determine which of the received requests has the highest priority based at least in part on the weights associated with the first type of request and the second type of request; and provide the received request determined to have the highest priority to the cache. [C2] The apparatus according to C1, wherein the arbiter is configured to determine the highest priority based at least in part on an indication of whether a location in the cache associated with the first request is available. [C3] The apparatus according to C1, wherein the arbiter comprises a plurality of input counters, each of the plurality of input counters being configured to count the number of requests of a respective one of the different types of cache requests processed by the arbiter. [C4] The apparatus according to C3, wherein the arbiter is configured to determine the highest priority based at least in part on a comparison between a selected weight of the weights assigned to the different types of cache requests and the number of requests counted by a corresponding input counter. [C5] The apparatus according to C3, wherein the arbiter comprises an output counter configured to keep track of how many requests have been provided to the cache. [C6] The apparatus according to C3, wherein the arbiter is configured to clear the input counters in response to detecting a condition.
[C7] The apparatus according to C1, wherein the apparatus comprises a graphics processing unit, and the graphics processing unit comprises the cache, the arbiter, the depth processor, and the color processor. [C8] The apparatus according to C1, wherein the different types of cache requests comprise color read, color write, depth read, and depth write. [C9] The apparatus according to C1, further comprising a weighting circuit configured to generate the data received by the arbiter for assigning weights to the different types of cache requests. [C10] The apparatus according to C1, wherein the data received by the arbiter for assigning weights to the different types of cache requests is generated by a driver. [C11] An apparatus comprising: a cache configured to store data; arbitration means for determining relative priorities of different types of cache requests based at least in part on weights associated with the different types of cache requests and counts of requests of the different types of cache requests, the arbitration means being arranged to provide the different types of cache requests to the cache based on the relative priorities; a color processor configured to provide a cache request to the arbitration means; and a depth processor configured to provide a cache request to the arbitration means. [C12] A method of electronically providing a selected cache request to a cache, the method comprising: receiving, from a depth processor and from a color processor, a plurality of different types of cache requests to access a cache shared by the depth processor and the color processor; determining, based at least in part on one or more weights associated with the different types of cache requests and one or more counts associated with the different types of cache requests, that a selected one of the received cache requests has the highest priority of the received cache requests; and providing the selected cache request to the cache before providing the other received cache requests to the cache. [C13] The method according to C12, wherein the plurality of different types of cache requests comprises color read, color write, depth read, and depth write. [C14] The method according to C12, further comprising setting the one or more weights. [C15] The method according to C14, wherein the setting is performed by an instruction of a driver. [C16] The method according to C12, wherein determining the relative priorities of the different types of cache requests is based at least in part on whether a location in the cache associated with at least one of the received requests is available. [C17] The method according to C12, further comprising generating the count of cache requests of each of the different types of cache requests with a counter for each of the different types of cache requests. [C18] The method according to C17, further comprising clearing the counts of the different types of cache requests based at least in part on detecting a condition. [C19] The method according to C18, wherein the condition indicates that a predetermined number of cache requests have been received.
[C20] A non-transitory computer readable storage comprising instructions that, when executed, cause a graphics processing unit to perform a method, the method comprising: selecting a cache request from a plurality of different types of cache requests to access a cache, based at least in part on weights associated with the different types of cache requests and counts associated with the different types of cache requests, wherein the cache requests are provided by a color processor and a depth processor; and providing the selected cache request to the cache. [C21] The non-transitory computer readable storage according to C20, wherein the method further comprises setting the weights in arbiter registers, the weights being configurable. [C22] The non-transitory computer readable storage according to C21, wherein the selecting is based at least in part on whether the location in the cache associated with the cache request is available. |
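The weighted, prioritized-queue arbitration described in paragraphs [0031] through [0037] and in blocks 410-450 of process 400 can be summarized in software. The following Python sketch is illustrative only and is not part of the patent disclosure; the class and method names are assumptions, and the reset logic models only the sum-of-weights condition of paragraph [0034].

```python
from collections import deque

# Request types as labeled in FIG. 3: Z read, Z write, C read, C write.
TYPES = ["z_read", "z_write", "c_read", "c_write"]

class WeightedArbiter:
    """Software model of the prioritized-queue arbitration of FIG. 3.

    weights  -- per-type values modeling the weight registers 320-326
    counters -- per-type grant counts modeling the input counters 330-336
    total    -- grants since the last reset, modeling the output counter 338
    """
    def __init__(self, weights):
        self.weights = dict(weights)
        self.counters = {t: 0 for t in TYPES}
        self.total = 0
        # Initial relative priority: depth before color, reads before writes.
        self.queue = deque(TYPES)

    def select(self, pending, location_available):
        """Pick one request to forward to the cache this cycle.

        pending            -- set of request types holding a valid request
        location_available -- callable: is the cache location for a type free?
        """
        for _ in range(len(self.queue)):
            head = self.queue[0]
            if (head in pending
                    and location_available(head)
                    and self.counters[head] < self.weights[head]):
                # Grant: count it, then move the granted type to the tail.
                self.counters[head] += 1
                self.total += 1
                self.queue.rotate(-1)
                self._maybe_reset()
                return head
            # Head is blocked, invalid, or has used up its weight budget:
            # move it to the tail (a simplification of paragraph [0037]).
            self.queue.rotate(-1)
        return None  # nothing grantable this cycle

    def _maybe_reset(self):
        # Clear all counters once the budget (sum of weights) is consumed,
        # so grants stay balanced over each window of requests.
        if self.total >= sum(self.weights.values()):
            self.counters = {t: 0 for t in TYPES}
            self.total = 0

# Example: favor depth traffic 2:1 over color traffic.
arb = WeightedArbiter({"z_read": 2, "z_write": 2, "c_read": 1, "c_write": 1})
print(arb.select({"z_read", "c_write"}, lambda t: True))  # -> "z_read"
```

Under the initial priority order, a pending depth read is granted ahead of a pending color write; once a type's grant count reaches its weight, the sketch rotates that type to the tail, which is one way to realize the queue management paragraph [0037] describes.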
A method for detecting tunnel oxide encroachment on a memory device. In one method embodiment, the present invention applies a baseline voltage burst to a gate of the memory device. Next, the present embodiment generates a baseline performance distribution graph of bit line current as a function of gate voltage for the memory device. The present embodiment then applies a channel program voltage burst to the gate of the memory device. Moreover, the present embodiment generates a channel program performance distribution graph of bit line current as a function of gate voltage for the memory device. The present embodiment then applies a channel erase voltage burst to the gate of the memory device. Additionally, the present embodiment generates a channel erase performance distribution graph of bit line current as a function of gate voltage for the memory device. A comparison of the channel program performance distribution graph and the channel erase performance distribution graph with respect to said baseline performance distribution graph is then performed. In so doing, an asymmetric distribution of the channel program performance distribution graph and the channel erase performance distribution graph with respect to the baseline performance distribution indicates tunnel oxide encroachment. |
1. A method for detecting tunnel oxide encroachment on a memory device comprising:applying a baseline voltage burst to a gate of said memory device; generating a baseline performance distribution graph of bit line current as a function of gate voltage for said memory device; applying a channel program voltage burst to said gate of said memory device; generating a channel program performance distribution graph of bit line current as a function of gate voltage for said memory device; applying a channel erase voltage burst to said gate of said memory device; generating a channel erase performance distribution graph of bit line current as a function of gate voltage for said memory device; and comparing said channel program performance distribution graph and said channel erase performance distribution graph with respect to said baseline performance distribution graph, wherein an asymmetric distribution of said channel program performance distribution graph and said channel erase performance distribution graph with respect to said baseline performance distribution indicates tunnel oxide encroachment. 2. The method as recited in claim 1 wherein said detecting tunnel oxide encroachment on a memory device further comprises:analyzing said channel program performance distribution graph and said channel erase performance distribution graph with respect to said baseline performance distribution graph to quantify said tunnel oxide encroachment. 3. The method as recited in claim 1 wherein said baseline voltage burst is 1 volt applied to the gate for 100 milliseconds.4. The method as recited in claim 1 wherein said channel program voltage burst is a Fowler-Nordheim (FN) channel program with positive gate bias.5. The method as recited in claim 4 wherein said positive gate bias is 18 volts applied to the gate for 100 milliseconds.6. The method as recited in claim 1 wherein said channel erase voltage burst is a Fowler-Nordheim (FN) channel erase with negative gate bias.7. The method as recited in claim 6 wherein said negative gate bias is -17 volts applied to the gate for 100 milliseconds. |
FIELD OF THE INVENTION
The present invention relates to the field of memory devices. Specifically, the present invention relates to detecting tunnel oxide encroachment on a memory device.
BACKGROUND ART
Presently, electronic memories come in a variety of forms and serve a variety of purposes. For example, one type of memory is flash memory. Generally, flash memories are used for easy and fast information storage in devices such as digital cameras and home video consoles. It is used more as a hard drive than as random access memory (RAM). In fact, flash memory may be considered a solid state storage device (e.g., no moving parts; everything is electronic). In general, flash memory is a type of electrically erasable programmable read-only memory (EEPROM). It has a grid of columns and rows with a cell that has two transistors at each intersection. The two transistors are separated from each other by a thin tunnel oxide (TOX) layer. One of the transistors is a floating gate, and the other one is a control gate. The floating gate's only link to the row is through the control gate. As long as the link is in place, the cell has a value of one. To change the value to a zero requires a process called Fowler-Nordheim (FN) tunneling. FN tunneling is used to alter the placement of electrons in the floating gate. For example, an electrical charge is applied to the floating gate and drains to the ground. This charge causes the floating-gate transistor to act similar to an electron gun. That is, the electrons are pushed through and trapped on the other side of the TOX layer, giving it a negative charge. These negatively charged electrons act as a barrier between the control gate and the floating gate. A cell sensor then monitors the level of the charge passing through the floating gate. If the flow through the gate is greater than 50 percent of the charge, then it has a value of one. However, when the charge passing through the gate drops below the 50 percent threshold, the value changes to zero. Normally, a blank EEPROM has all of the gates fully open, giving each cell a value of one. The electrons in the cells of a flash memory can be returned to normal (e.g., one) by the application of an electric field (e.g., a higher voltage charge). Furthermore, flash memory utilizes in-circuit wiring to apply the electric field either to the entire chip or to predetermined sections known as blocks. This electric field erases the target area of the chip, which can then be rewritten. Therefore, flash memory works much faster than traditional EEPROMs because instead of erasing one byte at a time, it erases a block or the entire chip. In addition, flash memory will maintain its data without an external source of power. Thus, it is extremely useful with removable memory media such as digital cameras, digital music players, video consoles, computers, and the like. However, in order for a flash memory device to operate at peak performance, the TOX layer needs to be as flat as possible. Any variations, such as TOX thickening, TOX thinning, or the like, result in a TOX encroachment issue. That is, the TOX layer may have varying thickness from the center to the edges. This lack of uniformity can result in difficulty during programming or erasing of the memory. One cause of TOX encroachment is the post oxidation process (POP). For example, after an etching process, different defects, such as segregated edge defects, may be found in the memory device. In order to repair the defects, POP is applied.
However, POP may cause TOX encroachment on the channel. Additionally, the amount of time required for POP is not standard. That is, the process varies between devices. Therefore, the amount/effects of TOX encroachment are not easily quantified. Thus, a need exists for a method and system for detecting tunnel oxide encroachment on a memory device. A further need exists for a method and system for detecting tunnel oxide encroachment on a memory device that can quantify the extent of the encroachment. Yet another need exists for a method and system for detecting tunnel oxide encroachment on a memory device which can be applied during the manufacturing process. A further need exists for a method which meets the above needs and which is compatible with existing memory manufacturing processes.
SUMMARY OF INVENTION
The present invention provides, in various embodiments, a method and system for detecting tunnel oxide encroachment on a memory device. Furthermore, the present invention provides a method and system for detecting tunnel oxide encroachment on a memory device that can quantify the extent of the encroachment. Additionally, the present invention provides a method and system for detecting tunnel oxide encroachment on a memory device during the manufacturing process. Moreover, the present invention provides a method which meets the above needs and which is compatible with existing memory manufacturing processes. Specifically, in one embodiment, the present invention applies a baseline voltage burst to a gate of the memory device. Next, the present embodiment generates a baseline performance distribution graph of bit line current as a function of gate voltage for the memory device. The present embodiment then applies a channel program voltage burst to the gate of the memory device. Moreover, the present embodiment generates a channel program performance distribution graph of bit line current as a function of gate voltage for the memory device. The present embodiment then applies a channel erase voltage burst to the gate of the memory device. Additionally, the present embodiment generates a channel erase performance distribution graph of bit line current as a function of gate voltage for the memory device. A comparison of the channel program performance distribution graph and the channel erase performance distribution graph with respect to said baseline performance distribution graph is then performed. In so doing, an asymmetric distribution of the channel program performance distribution graph and the channel erase performance distribution graph with respect to the baseline performance distribution indicates tunnel oxide encroachment.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and form a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention. FIGS. 1A-1B are block diagrams of the exemplary effect of tunnel oxide encroachment on different sized memory devices. FIG. 2 is a block diagram of an exemplary method for detecting tunnel oxide encroachment on a memory device in accordance with an embodiment of the present invention. FIGS. 3A-3B are exemplary graphs in accordance with an embodiment of the present invention for detecting tunnel oxide encroachment on a memory device. FIG. 4 is a flowchart of steps performed in accordance with one embodiment of the present invention for detecting tunnel oxide encroachment on a memory device. FIG.
5 is a block diagram of an embodiment of an exemplary computer system used in accordance with the present invention.
DETAILED DESCRIPTION OF THE INVENTION
Reference will now be made in detail to embodiments of the invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with the preferred embodiments, it will be understood that they are not intended to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the present invention.
Notation and Nomenclature
Some portions of the detailed descriptions which follow are presented in terms of procedures, logic blocks, processing, and other symbolic representations of operations on data bits within an electronic computing device and/or memory system. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. A procedure, logic block, process, etc., is herein, and generally, conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these physical manipulations take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system or similar electronic computing device. For reasons of convenience, and with reference to common usage, these signals are referred to as bits, values, elements, symbols, characters, terms, numbers, or the like with reference to the present invention. It should be borne in mind, however, that all of these terms are to be interpreted as referencing physical manipulations and quantities and are merely convenient labels and are to be interpreted further in view of terms commonly used in the art. Unless specifically stated otherwise as apparent from the following discussions, it is understood that throughout discussions of the present invention, discussions utilizing terms such as "partitioning", "receiving", "processing", "applying", "storing", "delivering", "accessing", "generating", "providing", "separating", "outputting", "performing", "comparing" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data. The data is represented as physical (electronic) quantities within the computing device's registers and memories and is transformed into other data similarly represented as physical quantities within the computing device's memories or registers or other such information storage, transmission, or display devices. With reference now to FIGS. 1A and 1B, a normal sized flash memory device 100 and a reduced size flash memory device 150 are shown.
In addition, flash memory devices 100 and 150 both show the effects of TOX encroachment 110. In one embodiment, TOX encroachment 110 may be caused by the oxidation of the floating gate poly-one layer 105, which in turn leads to an uneven TOX thickness of the cells and a lack of uniformity. In another embodiment, TOX encroachment 110 may be caused by an error in the manufacturing line, materials utilized in the manufacturing process, the manufacturing environment, temperature, or the like. As shown in FIG. 1A, when the TOX channel is longer, fixing the edges of the memory device using POP is acceptable. For example, the portions of the memory device 100 affected by TOX encroachment 110 are shown as portions A120. In most manufacturing lines, the total of all portions A120 may be 20 percent of the overall length of TOX layer 108. Thus, although TOX encroachment 110 occurs, approximately 80 percent of TOX layer 108 utilized by memory device 100 is within tolerance. However, with reference now to FIG. 1B, as the memory device is reduced in size and the channel length decreases, TOX encroachment 110, as measured by the total of all portions B170, may be as large as 50 percent. In such a case, only 50 percent of TOX layer 108 utilized by memory device 150 is within tolerance. Therefore, as the size of memory device 100 is reduced, deleterious process-induced TOX encroachment 110 dramatically increases. In one embodiment, TOX encroachment 110 results in a non-uniform layer of tunnel oxide. Therefore, the electrons cannot easily pass through TOX layer 108 due to the additional width of the TOX layer 108. This non-uniformity of the TOX layer 108 may cause programming of memory device 100 to fail due to the inability of the electrons to tunnel through the excessive length of TOX layer 108. For example, in one embodiment, a normal TOX layer 108 has a thickness of 100 angstroms plus or minus 10 angstroms. However, after TOX encroachment 110, TOX layer 108 may have a thickness of greater than 110 angstroms plus or minus 10 angstroms. This results in a much higher voltage requirement in order to pass electrons across TOX layer 108. Therefore, no programming (or very little programming) of the memory device may occur. With reference now to FIG. 2, a block diagram of an exemplary method for detecting TOX encroachment 110 on a memory device 200 is shown. In one embodiment, memory device 200 is a memory device such as a flash memory device. Moreover, memory device 200 may be deleteriously affected by TOX encroachment 110, resulting in a degraded interface between floating gate 102 and TOX layer 108, where sharp, "angular" shaped TOX disrupts an otherwise flat surface. In one embodiment, TOX encroachment 110 may be caused by poly floating gate oxidation. The present embodiment is utilized to test memory device 200 in order to detect unacceptable levels of TOX encroachment 110. In another embodiment, the test for TOX encroachment 110 on memory device 200 may be further utilized to quantify the level of TOX encroachment 110. Furthermore, memory device 200 may be tested while in the manufacturing process. Thus, with the system and method stated herein, TOX encroachment 110 may be detected during the manufacturing process, thereby allowing rectification of the TOX encroachment 110 affected memory device 200 while still within the manufacturing environment. With reference still to FIG. 2 and now to step 401 of FIG. 4, the present embodiment applies a baseline voltage burst to floating gate 102 of a memory device 200.
For example, a baseline voltage burst of one volt may be applied to floating gate 102 for 100 milliseconds. During the application of the baseline voltage burst, source 104, drain 106, and p-well 220 are grounded. The baseline voltage burst may be used to establish a neutral state within memory device 200. In general, the baseline voltage burst allows memory device 200 to be set to a known baseline. Although a specific voltage and time are stated herein, the present invention is well suited to the use of a higher or lower baseline voltage as well as an increase or decrease in the timeframe in which the voltage is applied. The specified voltage and time are used merely for purposes of brevity and clarity. Referring still to FIG. 2 and now to step 402 of FIG. 4, a baseline performance distribution graph of bit line current as a function of gate voltage is generated for the memory device 200. In one embodiment, a common gate voltage 210 is applied to memory device 200 in increasing increments. During the application of the common gate voltage 210, a common bit line current is measured as an output. The resulting data is plotted in a graphical format. FIGS. 3A and 3B show plots of baseline performance distribution graphs (e.g., 310 and 360) of bit line current as a function of gate voltage. With reference still to FIG. 2 and now to step 403 of FIG. 4, the present embodiment applies a channel program voltage burst to floating gate 102 of memory device 200. In one embodiment, the channel program voltage burst is a Fowler-Nordheim (FN) channel program with positive gate bias. For example, a channel program voltage burst of 18 volts may be applied to floating gate 102 for 100 milliseconds. During the application of the channel program voltage burst, source 104, drain 106, and p-well 220 are grounded. The channel program voltage burst is used to establish a program state within memory device 200. In general, the channel program voltage burst allows memory device 200 to be set to a programmable state. Although a specific voltage and time are stated herein, the present invention is well suited to the use of a higher or lower channel program voltage as well as an increase or decrease in the timeframe in which the voltage is applied. The specified voltage and time are used merely for purposes of brevity and clarity. Referring still to FIG. 2 and now to step 404 of FIG. 4, a channel program performance distribution graph of bit line current as a function of gate voltage is generated for the memory device 200. In one embodiment, a common gate voltage 210 is applied to memory device 200 in increasing increments. During the application of the common gate voltage 210, a common bit line current is measured as an output. The resulting data is plotted in a graphical format. FIGS. 3A and 3B show plots of channel program performance distribution graphs (e.g., 320 and 370) of bit line current as a function of gate voltage. With reference still to FIG. 2 and now to step 405 of FIG. 4, the present embodiment applies a channel erase voltage burst to floating gate 102 of memory device 200. In one embodiment, the channel erase voltage burst is an FN channel erase with negative gate bias. For example, a channel erase voltage burst of -17 volts may be applied to floating gate 102 for 100 milliseconds. During the application of the channel erase voltage burst, source 104, drain 106, and p-well 220 are grounded. The channel erase voltage burst is used to establish an erase state within memory device 200.
In general, the channel erase voltage burst allows memory device 200 to be set to a clean state. Although a specific voltage and time are stated herein, the present invention is well suited to the use of a higher or lower channel erase voltage as well as an increase or decrease in the timeframe in which the voltage is applied. The specified voltage and time are used merely for purposes of brevity and clarity. Referring still to FIG. 2 and now to step 406 of FIG. 4, a channel erase performance distribution graph of bit line current as a function of gate voltage is generated for the memory device 200. In one embodiment, a common gate voltage 210 is applied to memory device 200 in increasing increments. During the application of the common gate voltage 210, a common bit line current is measured as an output. The resulting data is plotted in a graphical format. FIGS. 3A and 3B show plots of channel erase performance distribution graphs (e.g., 330 and 380) of bit line current as a function of gate voltage. With reference now to FIGS. 3A and 3B and step 407 of FIG. 4, in one embodiment a comparison of the channel program performance distribution graph (e.g., 320 and 370) and the channel erase performance distribution graph (e.g., 330 and 380) with respect to the baseline performance distribution graph (e.g., 310 and 360) is performed. Examples of the comparison are shown in the bit line current as a function of gate voltage graphs of FIGS. 3A and 3B. In one embodiment, graphs 300 and 350 show a common bit line current ranging from 1E-12 to 1E-4, and a gate voltage ranging from zero to 7 volts. As stated herein, as the gate voltage is increased throughout its range of voltages, the common bit line current is measured as an output. Although a specific range of voltages and currents is stated herein, the present embodiment is well suited to the use of a wider or narrower range. The utilization of the herein-mentioned ranges is done merely for purposes of brevity and clarity. FIG. 3A shows a graphical analysis (e.g., graph 300) of a test performed on a memory device which is operating correctly (e.g., no TOX encroachment). In general, as shown in FIG. 3A, a symmetric distribution of channel program performance distribution graph 320 and channel erase performance distribution graph 330 with respect to baseline performance distribution 310 indicates a lack of TOX encroachment 110. However, FIG. 3B shows a test (e.g., graph 350) performed on a memory device which is not operating correctly (e.g., has TOX encroachment). In general, as shown in FIG. 3B, an asymmetric distribution of channel program performance distribution graph 370 and channel erase performance distribution graph 380 with respect to baseline performance distribution 360 indicates the presence of TOX encroachment 110. Furthermore, FIG. 3B shows that the presence of TOX encroachment 110 further reduces the erase speed. In one embodiment, the Gaussian distribution starts to spread out during the channel erase as some electrons move early and others move later due to the non-homogeneity of TOX layer 108. Additionally, FIG. 3B shows that the presence of TOX encroachment further reduces the program speed. For example, channel program 370 as shown in FIG. 3B is much slower than channel program 320 as shown in FIG. 3A. Thus, when a channel program state is selected for a device having TOX encroachment, some of the memory devices program and some do not. This effect, shown in channel program 370 of FIG.
3B, occurs due to the minimum voltage requirements of the electrons trying to cross TOX layer 108. Specifically, since TOX layer 108 is non-homogeneous, only a tail of the distribution is programmed. Therefore, fewer electrons are stored, causing the capacitor to hold less charge. Although many performance effects are shown in graphs 300 and 350, it is the lack of symmetry between the three distributions that signals TOX encroachment. With reference now to FIG. 5, a block diagram of an embodiment of an exemplary computer system 500 used in accordance with the present invention is shown. It should be appreciated that system 500 is not strictly limited to being a computer system. As such, system 500 of the present embodiment is well suited to be any type of computing device (e.g., server computer, portable computing device, desktop computer, mobile phone, pager, personal digital assistant, etc.). Within the following discussions of the present invention, certain processes and steps are discussed that are realized, in one embodiment, as a series of instructions (e.g., a software program) that reside within computer readable memory units of computer system 500 and are executed by a processor(s) of system 500. When executed, the instructions cause computer 500 to perform specific actions and exhibit specific behavior that is described in detail herein. Computer system 500 of FIG. 5 comprises an address/data bus 510 for communicating information and one or more central processors 502 coupled with bus 510 for processing information and instructions. Central processor unit(s) 502 may be a microprocessor or any other type of processor. The computer 500 also includes data storage features such as a computer usable volatile memory unit 504 (e.g., random access memory, static RAM, dynamic RAM, etc.) coupled with bus 510 for storing information and instructions for central processor(s) 502, and a computer usable non-volatile memory unit 506 (e.g., read only memory, programmable ROM, flash memory, EPROM, EEPROM, etc.) coupled with bus 510 for storing static information and instructions for processor(s) 502. System 500 also includes one or more signal generating and receiving devices 508 coupled with bus 510 for enabling system 500 to interface with other electronic devices and computer systems. The communication interface(s) 508 of the present embodiment may include wired and/or wireless communication technology. For example, within the present embodiment, the communication interface 508 may be a serial communication port, a Universal Serial Bus (USB), an Ethernet adapter, a FireWire (IEEE 1394) interface, a parallel port, a small computer system interface (SCSI) bus interface, an infrared (IR) communication port, a Bluetooth wireless communication port, a broadband interface, or an interface to the Internet, among others. Optionally, computer system 500 may include an alphanumeric input device 514, including alphanumeric and function keys, coupled to the bus 510 for communicating information and command selections to the central processor(s) 502. The computer 500 can include an optional cursor control or cursor directing device 516 coupled to the bus 510 for communicating user input information and command selections to the central processor(s) 502. The cursor directing device 516 may be implemented using a number of well-known devices such as a mouse, a track-ball, a track-pad, an optical tracking device, and a touch screen, among others.
Alternatively, it is appreciated that a cursor may be directed and/or activated via input from the alphanumeric input device 514 using special keys and key sequence commands. The present embodiment is also well suited to directing a cursor by other means, such as, for example, voice commands. The system 500 of FIG. 5 may also include one or more optional computer usable data storage devices 518, such as a magnetic or optical disk and disk drive (e.g., hard drive or floppy diskette), coupled with bus 510 for storing information and instructions. An optional display device 512 is coupled to bus 510 of system 500 for displaying video and/or graphics. It should be appreciated that optional display device 512 may be a cathode ray tube (CRT), flat panel liquid crystal display (LCD), field emission display (FED), plasma display, or any other display device suitable for displaying video and/or graphic images and alphanumeric characters recognizable to a user. Thus, the present invention provides, in various embodiments, a method and system for detecting tunnel oxide encroachment on a memory device. Furthermore, the present invention provides a method and system for detecting tunnel oxide encroachment on a memory device and quantifying the extent of the encroachment. Additionally, the present invention provides a method and system for detecting tunnel oxide encroachment on a memory device which may be accomplished during manufacture. Moreover, the present invention provides a method which meets the above needs and which is compatible with existing memory manufacturing processes. The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the Claims appended hereto and their equivalents. |
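The detection flow of steps 401 through 407 (FIG. 4) can be expressed as a short test routine. The Python sketch below is illustrative only and is not part of the patent disclosure: the `device` object and its `apply_gate_burst` and `measure_bitline_current` methods are hypothetical instrument-control hooks, and quantifying the asymmetry by comparing threshold-voltage shifts read off at a fixed reference current is one plausible reading of the comparison in step 407, not a prescribed metric.

```python
import numpy as np

def sweep_bitline_current(device, gate_voltages):
    """Steps 402/404/406 analog: measure common bit line current while
    stepping the common gate voltage through increasing increments."""
    return np.array([device.measure_bitline_current(vg) for vg in gate_voltages])

def detect_tox_encroachment(device, tolerance=0.1):
    """Sketch of process 400 (FIG. 4): burst, sweep, then compare symmetry.

    Returns True when the program and erase curves sit asymmetrically
    about the baseline curve, the signature of TOX encroachment in FIG. 3B.
    """
    vg = np.linspace(0.0, 7.0, 71)  # 0-7 V gate sweep, as in graphs 300/350

    device.apply_gate_burst(volts=1.0, ms=100)     # step 401: baseline burst
    baseline = sweep_bitline_current(device, vg)   # step 402

    device.apply_gate_burst(volts=18.0, ms=100)    # step 403: FN channel program
    program = sweep_bitline_current(device, vg)    # step 404

    device.apply_gate_burst(volts=-17.0, ms=100)   # step 405: FN channel erase
    erase = sweep_bitline_current(device, vg)      # step 406

    # Step 407: compare the program and erase shifts against the baseline.
    # Threshold voltages are read off at a fixed reference current; this
    # assumes every curve actually crosses i_ref within the sweep.
    i_ref = 1e-8

    def vt(curve):
        return vg[np.argmax(curve >= i_ref)]  # first Vg whose current reaches i_ref

    program_shift = vt(program) - vt(baseline)
    erase_shift = vt(baseline) - vt(erase)
    return abs(program_shift - erase_shift) > tolerance
```

In a symmetric device (FIG. 3A) the two shifts roughly cancel and the function returns False; a large mismatch between them models the asymmetric spread of FIG. 3B.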
Disclosed embodiments relate to a variable format, variable sparsity matrix multiplication (VFVSMM) instruction. In one example, a processor includes fetch and decode circuitry to fetch and decode a VFVSMM instruction specifying locations of A, B, and C matrices having (M * K), (K * N), and (M * N) elements, respectively, and execution circuitry, responsive to the decoded VFVSMM instruction, to: route each row of the specified A matrix, staggering subsequent rows, into corresponding rows of a (M * N) processing array, and route each column of the specified B matrix, staggering subsequent columns, into corresponding columns of the processing array, wherein each of the processing units is to generate K products of A-matrix elements and matching B-matrix elements having the same row address as a column address of the A-matrix element, and to accumulate each generated product with a corresponding C-matrix element. |
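The address-matching behavior the abstract attributes to each processing unit can be modeled in a few lines. The following Python sketch of a single cell of the (M * N) processing array is illustrative only; the `(address, value)` streaming representation and all names are assumptions, not the disclosure's hardware interface.

```python
def processing_unit(a_stream, b_stream):
    """Illustrative model of one cell of the (M x N) processing array.

    a_stream -- (column_address, value) pairs streamed along the cell's row
    b_stream -- (row_address, value) pairs streamed along the cell's column

    The cell multiplies an A element and a B element only when the A
    element's column address equals the B element's row address, and
    accumulates the products into a single C-matrix element.
    """
    b_by_row = dict(b_stream)          # index B elements by their row address
    acc = 0
    for col_addr, a_val in a_stream:
        if col_addr in b_by_row:       # address match required in sparse mode
            acc += a_val * b_by_row[col_addr]
    return acc

# C[i][j] for row i of A = [1, 0, 2] (stored sparse, K = 3)
# and column j of B = [4, 5, 6]:
print(processing_unit([(0, 1), (2, 2)], [(0, 4), (1, 5), (2, 6)]))  # -> 16
```

Because the zero A element is simply never streamed in, the cell performs only the two useful multiplies, which is the point of routing compressed, staggered rows and columns through the array.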
1. A processor comprising: a cache to store data; and a plurality of cores coupled to the cache, a core of the plurality of cores comprising: an execution module to execute at least one instruction to perform a multiplication and accumulation operation on a first source matrix and a second source matrix according to a selected operation mode to generate a result matrix, the selected operation mode comprising a first operation mode and a second operation mode, wherein in the first operation mode at least the first source matrix is a sparse matrix with non-zero data elements located at specific positions, and in the second operation mode both the first source matrix and the second source matrix are dense matrices, wherein, when in the first operation mode, the first source matrix is to be stored in a compressed format that identifies the positions of the non-zero data elements, and the execution module further comprises: a plurality of multiply-accumulate modules to multiply the non-zero data elements of the first source matrix by corresponding data elements in the second source matrix identified based on the positions in the compressed format to generate a plurality of products, and to add the plurality of products and accumulated values to generate the result matrix. 2. The processor of claim 1, wherein, when the first source matrix is a sparse matrix, the first source matrix is to be stored in a compressed sparse format, the compressed sparse format comprising an indicator that accompanies each matrix element and specifies the logical position of the matrix element within the first source matrix. 3. The processor of claim 1, wherein the sparse matrix comprises a matrix having a ratio of non-zero data elements less than or equal to a threshold. 4. The processor of claim 3, wherein the threshold comprises a value of one. 5. The processor of claim 1, wherein the instruction is to indicate the selected operation mode. 6. The processor of claim 5, wherein the instruction comprises a sparse matrix multiplication instruction to indicate the first operation mode or a dense matrix multiplication instruction to indicate the second operation mode. 7. The processor of any one of claims 1 to 6, wherein, when in the second operation mode, the plurality of multiply-accumulate modules are to multiply the data elements of the first source matrix, including any zero data elements, by the data elements of the second source matrix. 8. The processor of any one of claims 1 to 6, wherein the execution module is operable to perform the multiplication and accumulation operation with a plurality of different data types, the plurality of different data types being used to encode the data elements of the first source matrix, the second source matrix, and the result matrix.
9. The processor of claim 8, wherein the data types include one or more of: 16-bit floating point, 32-bit floating point, 8-bit integer, and 16-bit integer.
10. The processor of claim 7, wherein the instruction includes a plurality of fields, including a first field, a second field, a third field, and a fourth field, the first field to specify an opcode of the first operation mode or the second operation mode, the second field to identify the result matrix, the third field to identify the first source matrix, and the fourth field to identify the second source matrix.
11. The processor of any one of claims 1-6, 9, and 10, wherein the first source matrix and the second source matrix comprise input values for a machine learning application.
12. A method comprising:
storing data in a cache; and
generating, by a core of a plurality of cores coupled to the cache, a result matrix by performing a multiply-accumulate operation on a first source matrix and a second source matrix according to a selected operation mode, the selected operation mode including a first operation mode and a second operation mode, wherein in the first operation mode at least the first source matrix is a sparse matrix with non-zero data elements located at specific positions, and in the second operation mode the first source matrix and the second source matrix are both dense matrices,
wherein, when in the first operation mode, the first source matrix is to be stored in a compressed format that identifies the positions of the non-zero data elements, and the multiply-accumulate operation further comprises:
multiplying the non-zero data elements of the first source matrix by corresponding data elements in the second source matrix identified based on the positions in the compressed format to generate a plurality of products; and
adding the plurality of products to accumulated values to generate the result matrix.
13. The method of claim 12, wherein, when the first source matrix is a sparse matrix, the first source matrix is to be stored in a compressed sparse format, the compressed sparse format including an indicator that accompanies each matrix element and specifies the logical position of that matrix element within the first source matrix.
14. The method of claim 12, wherein the sparse matrix comprises a matrix having a ratio of non-zero data elements less than or equal to a threshold.
15. The method of claim 14, wherein the threshold comprises a value of one.
16. The method of any one of claims 12 to 15, wherein, when in the second operation mode, the multiply-accumulate operation multiplies the data elements of the first source matrix, including any zero data elements, by the data elements of the second source matrix.
17. The method of any one of claims 12 to 15, wherein the multiply-accumulate operation is performed in combination with a plurality of different data types, the plurality of different data types being used to encode the data elements of the first source matrix, the second source matrix, and the result matrix.
18. The method of claim 17, wherein the data types include one or more of: 16-bit floating point, 32-bit floating point, 8-bit integer, and 16-bit integer.
19. The method of any one of claims 12-15 and 18, wherein the first source matrix and the second source matrix comprise input values for a machine learning application.
20. A machine-readable medium storing program code which, when executed by a machine, causes the machine to perform the method of any one of claims 12 to 19. |
Variable format, variable sparse matrix multiplication instructions

Divisional Application Statement
This application is a divisional application of the invention patent application with a filing date of May 22, 2019, application number 201910431218.5, and the title "Variable format, variable sparse matrix multiplication instruction".

Technical Field
The field of the invention relates generally to computer processor architectures, and in particular to variable format, variable sparse matrix multiplication instructions.

Background
Machine learning architectures such as deep neural networks have been applied in fields including computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, and drug design. Deep learning is a class of machine learning algorithms. Maximizing the flexibility and cost efficiency of deep learning algorithms and computations can help meet the needs of deep learning processors, for example those performing deep learning in data centers.
Matrix multiplication is a key performance/power limiter for many algorithms, including machine learning. Some traditional matrix multiplication approaches are specialized; for example, they lack the flexibility to support a variety of data formats (signed and unsigned 8b/16b integer, 16b floating point) with wide accumulators, as well as the flexibility to support both dense and sparse matrices.

Summary of the Invention
According to an aspect of the present application, a processor is provided that includes: a cache for storing data; and a plurality of cores coupled to the cache, a core of the plurality of cores including an execution module for executing at least one instruction to generate a result matrix by performing a multiply-accumulate operation on a first source matrix and a second source matrix according to a selected operation mode, the selected operation mode including a first operation mode and a second operation mode. In the first operation mode, at least the first source matrix is a sparse matrix with non-zero data elements located at specific positions; in the second operation mode, the first source matrix and the second source matrix are both dense matrices. When in the first operation mode, the first source matrix is stored in a compressed format that identifies the positions of the non-zero data elements. The execution module further includes a plurality of multiply-accumulate modules for multiplying the non-zero data elements of the first source matrix by corresponding data elements in the second source matrix identified based on the positions in the compressed format to generate a plurality of products, and adding the plurality of products to accumulated values to generate the result matrix.
According to another aspect of the present application, a method is provided that includes: storing data in a cache; and generating a result matrix, by a core of a plurality of cores coupled to the cache, by performing a multiply-accumulate operation on a first source matrix and a second source matrix according to a selected operation mode. The selected operation mode includes a first operation mode and a second operation mode. In the first operation mode, at least the first source matrix is a sparse matrix with non-zero data elements located at specific positions.
In the second operation mode, the first source matrix and the second source matrix are both dense matrices. When in the first operation mode, the first source matrix is stored in a compressed format that identifies the positions of the non-zero data elements, and the multiply-accumulate operation further includes: multiplying the non-zero data elements of the first source matrix by corresponding data elements in the second source matrix identified based on the positions in the compressed format to generate a plurality of products, and adding the plurality of products to accumulated values to generate the result matrix.
According to another aspect of the present application, a machine-readable medium storing program code is provided, the program code, when executed by a machine, causing the machine to perform the above method.

Description of the Drawings
The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like reference numerals indicate like elements, and in which:
FIG. 1 is a block diagram showing processing components for executing variable format, variable sparse matrix multiplication (VFVSMM) instructions according to an embodiment;
FIG. 2 is a block diagram of a processing array for executing variable format, variable sparse matrix multiplication (VFVSMM) instructions according to some embodiments;
FIG. 3 is a block diagram illustrating partial execution of a variable format, variable sparse matrix multiplication (VFVSMM) instruction according to some embodiments;
FIG. 4 is a block diagram illustrating an execution pipeline for executing variable format, variable sparse matrix multiplication (VFVSMM) instructions according to some embodiments;
FIG. 5 is a block diagram illustrating routing control signals shared between the processing units and the routing circuit when executing variable format, variable sparse matrix multiplication (VFVSMM) instructions according to some embodiments;
FIG. 6 is a block flow diagram illustrating variable format, variable sparse matrix multiplication (VFVSMM) performed by a processor according to some embodiments;
FIG. 7 is a block diagram illustrating a variable precision integer/floating point multiply-accumulate circuit according to some embodiments;
FIG. 8A is a block diagram illustrating a format for a variable format, variable sparse matrix multiplication (VFVSMM) instruction according to some embodiments;
FIGS. 8B-8C are block diagrams showing a generic vector friendly instruction format and instruction templates thereof according to embodiments of the present invention;
FIG. 8B is a block diagram showing the generic vector friendly instruction format and class A instruction templates thereof according to an embodiment of the present invention;
FIG. 8C is a block diagram showing the generic vector friendly instruction format and class B instruction templates thereof according to an embodiment of the present invention;
FIG. 9A is a block diagram showing an exemplary specific vector friendly instruction format according to an embodiment of the present invention;
FIG. 9B is a block diagram showing the fields of the specific vector friendly instruction format that make up a full opcode field according to an embodiment of the present invention;
FIG. 9C is a block diagram showing the fields of the specific vector friendly instruction format that make up a register index field according to an embodiment of the present invention;
FIG. 9D is a block diagram showing the fields of the specific vector friendly instruction format that make up an augmentation operation field according to an embodiment of the present invention;
FIG. 10 is a block diagram of a register architecture according to one embodiment of the present invention;
FIG. 11A is a block diagram showing both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the present invention;
FIG. 11B is a block diagram showing both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the present invention;
FIGS. 12A-B show a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip;
FIG. 12A is a block diagram of a single processor core, along with its connection to the on-die interconnect network and its local subset of the level 2 (L2) cache, according to embodiments of the present invention;
FIG. 12B is an expanded view of part of the processor core in FIG. 12A according to embodiments of the present invention;
FIG. 13 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the present invention;
FIGS. 14-17 are block diagrams of exemplary computer architectures;
FIG. 14 shows a block diagram of a system according to an embodiment of the present invention;
FIG. 15 is a block diagram of a first more specific exemplary system according to an embodiment of the present invention;
FIG. 16 is a block diagram of a second more specific exemplary system according to an embodiment of the present invention;
FIG. 17 is a block diagram of a system on chip (SoC) according to an embodiment of the present invention; and
FIG. 18 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the present invention.

Detailed Description
In the following description, numerous specific details are set forth. However, it should be understood that some embodiments may be practiced without these specific details. In other instances, well-known circuits, structures, and techniques have not been shown in detail so as not to obscure the description.
References in the specification to "one embodiment", "an embodiment", "exemplary embodiment", etc. indicate that the described embodiment may include a feature, structure, or characteristic, but every embodiment may not necessarily include that feature, structure, or characteristic. Moreover, such phrases do not necessarily refer to the same embodiment. Further, when a feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of those skilled in the art to affect such a feature, structure, or characteristic in connection with other embodiments, whether or not explicitly described.
The disclosed embodiments provide for improved execution of variable format, variable sparse matrix multiplication (VFVSMM) instructions.
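For reference, the operation a VFVSMM instruction performs can be stated in conventional notation (ours, not the patent figures'): for all 0 <= m < M and 0 <= n < N,

    C_{m,n} \leftarrow C_{m,n} + \sum_{k=0}^{K-1} A_{m,k} \, B_{k,n}

with the dense-sparse and sparse-sparse modes described below simply skipping the terms whose A-matrix or B-matrix factor is zero.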
The disclosed embodiments perform matrix multiplication and multiply-accumulation for various data formats, including signed/unsigned 8-bit/16-bit integer and 16-bit/32-bit floating point formats. In addition, the disclosed embodiments use blocking, handshake-based routing, and broadcasting of matrix data elements between nodes in the processing array to support dense or sparse matrix operands and to avoid multiplying by the zero-valued elements of a sparse matrix. Further, by reconfiguring each processing unit in the processing array as a 2×2 array, the disclosed 8-bit mode is optimized to achieve 4x throughput.
As used herein, the sparsity of a matrix is defined as the proportion of non-zero elements, the remaining elements being zero or empty. For example, in some embodiments, a sparse matrix with a sparsity of 0.125 has only 1/8, or 12.5%, non-zero-valued elements. Sparsity can also be used to refer to the proportion of zero-valued elements. Either way, the disclosed embodiments exploit the sparsity of one or both matrices in a matrix multiplication to improve power, performance, flexibility, and/or functionality.
By providing a circuit that multiplies matrices of various formats and various sparsity levels, the disclosed embodiments are expected to improve flexibility, functionality, and cost. In contrast to approaches that would use different hardware dedicated to each of the various formats and sparsity levels, the disclosed embodiments provide hardware that can be configured to adapt to variations. In contrast to approaches that waste power and performance multiplying by the zero elements of a matrix, the disclosed embodiments avoid at least some zero multiplications when operating in dense-sparse or sparse-sparse modes, as described below.
Compared with approaches that rely on different circuits dedicated to different data formats, providing a single reconfigurable execution circuit that supports various data formats (including integer and floating point) is expected to improve cost and area. The disclosed embodiments provide a matrix multiplication accelerator that supports floating point and integer data formats when accumulating. By avoiding multiplications by zero elements, the disclosed accelerator can also be optimized to operate on sparse matrices. By combining these features into a reconfigurable circuit, the disclosed embodiments enable a single matrix multiplication accelerator circuit to support multiple precision formats with wide accumulators while being efficiently reconfigurable for dense or sparse matrices. The disclosed accelerator embodiments improve area and energy efficiency while providing the flexibility to support many typical matrix multiplication workloads (e.g., machine learning).
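The compressed format referred to above can be pictured with a minimal software sketch (our own illustration; the patent does not prescribe this exact layout), in which an indicator accompanies each stored non-zero element and records its logical (row, column) position:

    def compress(matrix):
        """Return [(row, col, value), ...] for the non-zero elements only."""
        return [(r, c, v)
                for r, row in enumerate(matrix)
                for c, v in enumerate(row)
                if v != 0]

    def sparsity(matrix):
        """Proportion of non-zero elements, per the definition used herein."""
        total = sum(len(row) for row in matrix)
        nonzero = sum(1 for row in matrix for v in row if v != 0)
        return nonzero / total

    B = [[0, 5, 0, 0],
         [0, 0, 0, 7],
         [3, 0, 0, 0],
         [0, 0, 0, 0]]
    assert sparsity(B) == 3 / 16  # 0.1875, i.e., a sparse operand
    assert compress(B) == [(0, 1, 5), (1, 3, 7), (2, 0, 3)]

Only the three (row, col, value) triples are stored, which is what lets the execution circuit skip the thirteen zero elements entirely.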
FIG. 1 is a block diagram illustrating processing components for executing a variable format, variable sparse matrix multiplication instruction (VFVSMM instruction) 103 according to some embodiments. As shown, storage 101 stores the VFVSMM instruction 103 to be executed. As described further below, in some embodiments, computing system 100 is a single instruction multiple data (SIMD) processor that processes multiple data elements simultaneously based on a single instruction.
In operation, fetch circuit 105 fetches the VFVSMM instruction 103 from storage 101. The fetched VFVSMM instruction 107 is decoded by decode circuit 109. The VFVSMM instruction format (further illustrated and explained with respect to FIGS. 7, 8A-B, and 9A-D) has fields (not shown here) to specify the opcode and the destination, multiplier, multiplicand, and summand complex vectors. Decode circuit 109 decodes the fetched VFVSMM instruction 107 into one or more operations. In some embodiments, the decoding includes generating a plurality of micro-operations to be performed by an execution circuit (such as execution circuit 119) in conjunction with routing circuit 117. Decode circuit 109 also decodes instruction suffixes and prefixes (if used). Execution circuit 119 operating in conjunction with routing circuit 117 is further described and exemplified below at least with respect to FIGS. 2-6, 11A-B, and 12A-B.
In some embodiments, register renaming, register allocation, and/or scheduling circuit 113 provides functionality for one or more of: 1) renaming logical operand values to physical operand values (e.g., a register alias table in some embodiments), 2) assigning status bits and flags to the decoded instruction, and 3) scheduling the decoded VFVSMM instruction 111 for execution on execution circuit 119 out of an instruction pool (e.g., using a reservation station in some embodiments).
Registers (register file) and/or memory 115 store data as operands of the decoded VFVSMM instruction 111 to be operated on by execution circuit 119. Exemplary register types include write mask registers, packed data registers, general purpose registers, and floating point registers, as further explained and exemplified with respect to at least FIG. 10 below.
In some embodiments, write-back circuit 120 commits the result of executing the decoded VFVSMM instruction 111. Execution circuit 119 and system 100 are further illustrated and described with respect to FIGS. 2-6, 11A-B, and 12A-B.
FIG. 2 is a block diagram of a processing array for executing variable format, variable sparse matrix multiplication (VFVSMM) instructions according to some embodiments. As shown, system 200 includes input matrix A 202, input matrix B 204, output matrix C 206, and routing circuit 212. Also shown is execution circuit 208, which includes a processing array of (M×N) processing units 210. In some embodiments, each processing unit in the processing array is a multiply-accumulate circuit, an expanded view of which is shown as MAC 214.
An advantage of the disclosed processing units 210 is that they can be reused to perform multiplication and multiply-accumulation on matrices of various formats. For example, as explained and exemplified with respect to FIG. 8A, the disclosed embodiments can execute the VFVSMM instruction on any of various data formats, with element size 806 specified in bits per matrix element; these data formats include, for example, 8-bit integer, 16-bit integer, 32-bit integer, half-precision floating point, single-precision floating point, and double-precision floating point.
By avoiding the need to implement different hardware to process different data types, the disclosed embodiments provide power and cost benefits by reusing the same circuit for different data types.
In some embodiments, for example when processing 8-bit integer data, the throughput of the execution circuit is quadrupled by configuring each processing unit to perform a 2×2 matrix multiplication.
As described herein, processing units are sometimes referred to as processing elements, sometimes as processing circuits, and sometimes as processing nodes. Regardless of the wording, a processing unit is intended to include circuitry that performs datapath computations and provides control logic.
In operation, routing circuit 212 and execution circuit 208 operate in a dense-dense mode, a dense-sparse mode, or a sparse-sparse mode, as described below.
Dense-dense mode
In some embodiments, the routing and execution circuits are placed in the dense-dense mode by software, for example by setting control registers that control the routing and execution circuits. The disclosed embodiments improve power and performance by avoiding multiplications involving the zero elements of a sparse matrix, and provide cost advantages by allowing the same circuit to be reused in the various modes, under various sparsity conditions, and with various data formats.
In some embodiments, the routing and execution circuits are placed in the dense-dense mode as indicated by the VFVSMM instruction, for example using a suffix of the opcode. The format of the VFVSMM instruction is further illustrated and described with respect to FIGS. 8A-C and 9A-D. In some embodiments, the routing and execution circuits enter the dense-dense mode in response to neither of the A and B matrices being stored in a compressed format in which a specifier accompanies each matrix element and specifies the element's logical position within the logical A or B matrix.
In operation, in response to the decoded VFVSMM instruction, the execution circuit operating in the dense-dense mode, using routing circuit 212, routes each row of the specified A matrix (staggering subsequent rows) into the corresponding row of the (M×N) processing array, and routes each column of the specified B matrix (staggering subsequent columns) into the corresponding column of the processing array. In some embodiments, each row and column is staggered by one clock cycle, allowing each processing unit to infer the row and column addresses of each received A-matrix and B-matrix element from the clock cycle and the relative position of the processing unit within the processing array.
Continuing the operation, each of the (M×N) processing units generates K products, pairing each of the K data elements received horizontally with the matching one of the K data elements received vertically, and accumulates each generated product with the previous value of the corresponding C-matrix element, the corresponding element having the same relative matrix position as the position of the processing unit in the array.
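One way to visualize the staggered dense-dense flow is with a small cycle-by-cycle software model (a sketch under our own naming, not the patent's hardware): if row r of A and column c of B begin streaming at cycles r and c, respectively, then A[r][k] and B[k][c] meet at processing unit (r, c) at cycle r + k + c.

    import numpy as np

    def vfvsmm_dense_dense(A, B):
        """Cycle-by-cycle sketch of the staggered dense-dense mode."""
        M, K = A.shape
        K2, N = B.shape
        assert K == K2
        C = np.zeros((M, N), dtype=np.int64)
        # The last product lands at cycle (M-1)+(K-1)+(N-1), so the whole
        # multiplication takes M + N + K - 2 cycles (7 for the 3x3x3 case).
        for t in range(M + N + K - 2):
            for r in range(M):
                for c in range(N):
                    k = t - r - c  # address inferred from cycle and position
                    if 0 <= k < K:
                        C[r, c] += A[r, k] * B[k, c]
        return C

    A = np.arange(9).reshape(3, 3)
    B = np.arange(9).reshape(3, 3)
    assert (vfvsmm_dense_dense(A, B) == A @ B).all()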
Dense-sparse mode
In some embodiments, the routing and execution circuits are placed in the dense-sparse mode by software, for example by setting control registers that control the routing and execution circuits. The disclosed embodiments improve power and performance by avoiding multiplications involving the zero elements of a sparse matrix, and provide cost advantages by allowing the same circuit to be reused in the various modes, under various sparsity conditions, and with various data formats.
In some embodiments, the routing and execution circuits are placed in the dense-sparse mode as indicated by the VFVSMM instruction, for example using a suffix of the opcode. The format of the VFVSMM instruction is further illustrated and described with respect to FIGS. 8A-C and 9A-D. In some embodiments, the routing and execution circuits enter the dense-sparse mode in response to one or both of the A and B matrices being stored in a compressed format in which an indicator accompanies each matrix element and specifies the element's logical position within the logical A or B matrix.
It should be noted that the processor can execute the VFVSMM instruction in the dense-sparse mode even if the A and B matrices are both dense. As long as the nominally sparse matrix is formatted to include address information for each data element, the processor can be configured to perform one or more address checks, each of which will find an address match because, the A and B matrices being dense, adjacent addresses are spaced by 1. Operating in the dense-sparse mode in this case therefore incurs some additional execution cost, but it can simplify the task of executing the VFVSMM instruction.
In some embodiments in which the processor executes the VFVSMM instruction in the dense-sparse mode, the specified B matrix is a sparse matrix (having a sparsity less than 1, sparsity being defined as the proportion of non-zero elements, the remaining elements being zero or empty) and includes only the non-zero elements of a logical matrix of (K×N) elements, each element including a field specifying its logical row and column address.
In operation, in response to the decoded VFVSMM instruction, the execution circuit operating in the dense-sparse mode, using routing circuit 212, routes each row of the specified A matrix (staggering subsequent rows) into the corresponding row of the (M×N) processing array, and routes each column of the specified B matrix into the corresponding column of the processing array.
Being a sparse matrix, the specified B matrix is stored in and loaded from memory in a compressed sparse format that stores only non-zero elements. Subsequent elements of the A and B matrices may therefore have gaps in their row and column addresses. In some embodiments, each row is staggered by one clock cycle, allowing each processing unit to infer the column address of each received A-matrix element from the clock cycle.
Continuing the operation, each of the (M×N) processing units in processing array 210 operating in the dense-sparse mode determines the column and row address of each horizontally received element from the clock and the position of the processing unit within processing array 210. Each of the (M×N) processing units in processing array 210 then determines whether there is an address match between the logical row address of the vertically received element and the column address of the horizontally received element. When there is a match, the processing unit generates a product. When there is no match and the column address is greater than the logical row address, the processing unit holds the horizontally received element and passes the vertically received element; otherwise, it holds the vertically received element and passes the horizontally received element.
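The dense-sparse matching rule can be condensed into a few lines; the sketch below (our own simplification of one processing unit's decision, ignoring the cycle-level handshaking of FIG. 5) matches a dense stream of A elements, whose column addresses run 0 through K-1, against a compressed stream of B elements carrying explicit row addresses:

    def dense_sparse_dot(a_row, b_col_nonzeros):
        """a_row: dense list of K values.
        b_col_nonzeros: [(row_addr, value), ...], address-sorted,
        holding only the non-zero B elements of one column."""
        acc = 0
        i = 0  # next compressed B element
        for k, a in enumerate(a_row):  # k is the A element's column address
            # pass B elements whose row address can no longer match
            while i < len(b_col_nonzeros) and b_col_nonzeros[i][0] < k:
                i += 1
            if i < len(b_col_nonzeros) and b_col_nonzeros[i][0] == k:
                acc += a * b_col_nonzeros[i][1]  # address match: accumulate
            # otherwise the B element's address is larger: hold it, pass a
        return acc

    assert dense_sparse_dot([1, 2, 3, 4], [(1, 10), (3, 5)]) == 2*10 + 4*5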
Sparse-sparse mode
In some embodiments, the routing and execution circuits are placed in the sparse-sparse mode by software, for example by setting control registers that control the routing and execution circuits. The disclosed embodiments improve power and performance by avoiding multiplications involving the zero elements of a sparse matrix, and provide cost advantages by allowing the same circuit to be reused in the various modes, under various sparsity conditions, and with various data formats.
In some embodiments, the routing and execution circuits are placed in the sparse-sparse mode as indicated by the VFVSMM instruction, for example using a suffix of the opcode. The format of the VFVSMM instruction is further illustrated and described with respect to FIGS. 8A-C and 9A-D. In some embodiments, the routing and execution circuits enter the sparse-sparse mode in response to both the A and B matrices being stored in a compressed format in which an indicator accompanies each matrix element and specifies the element's logical position within the logical A or B matrix.
In some embodiments in which the processor executes the VFVSMM instruction in the sparse-sparse mode, the specified A and B matrices are both sparse matrices (having a sparsity less than 1, sparsity being defined as the proportion of non-zero elements, the remaining elements being zero or empty). In such embodiments, the specified A and B matrices are both stored in memory as compressed sparse matrices that include only the non-zero elements of the logical (M×K) and (K×N) matrices, respectively, each element including fields specifying its logical row and column address.
In operation, the execution circuit operating in the sparse-sparse mode, using routing circuit 212, routes each row of the specified A matrix into the corresponding row of the (M×N) processing array, and routes each column of the specified B matrix into the corresponding column of the processing array.
Being sparse matrices, the specified A and B matrices are stored in and loaded from memory in a compressed sparse format that stores only non-zero elements and includes each element's address within the logical array. Subsequent elements of the A and B matrices may therefore have gaps in their row and column addresses.
Continuing the operation, each of the (M×N) processing units operating in the sparse-sparse mode compares the logical row address of the vertically received element with the logical column address of the horizontally received element. When there is an address match, the processing unit generates a product. When there is no match and the logical column address is greater than the logical row address, the processing unit holds the horizontally received element and passes the vertically received element; otherwise, it holds the vertically received element and passes the horizontally received element.
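In the sparse-sparse mode the comparison amounts to merging two address-sorted streams, multiplying only where the addresses coincide. A minimal software analogue of one processing unit (our own formulation) is:

    def sparse_sparse_dot(a_elems, b_elems):
        """a_elems: [(col_addr, value), ...]; b_elems: [(row_addr, value), ...];
        both address-sorted and holding non-zero elements only."""
        acc = 0
        i = j = 0
        while i < len(a_elems) and j < len(b_elems):
            ka, a = a_elems[i]
            kb, b = b_elems[j]
            if ka == kb:      # address match: generate a product
                acc += a * b
                i += 1
                j += 1
            elif ka > kb:     # hold the A element, pass the B element
                j += 1
            else:             # hold the B element, pass the A element
                i += 1
        return acc

    assert sparse_sparse_dot([(0, 2), (3, 4)], [(1, 9), (3, 5)]) == 4 * 5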
FIG. 3 is a block diagram illustrating partial execution of a variable format, variable sparse matrix multiplication (VFVSMM) instruction according to some embodiments. As shown, execution circuit 300 includes a grid of MACs 308, the rows of which receive A-matrix elements from the corresponding rows of input matrix A 302, and the columns of which receive B-matrix elements from the corresponding columns of input matrix B 304. The grid of MACs 308 produces output matrix C 306. In some embodiments, when operating in the dense-dense mode, the rows and columns are "staggered" by one clock cycle, allowing each processing unit to infer the row and column addresses of each received A-matrix and B-matrix element from the clock cycle and the relative position of the processing unit within the processing array. As shown, row 1 of input matrix A 302 is routed one cycle earlier than row 2 and two cycles earlier than row 3.
In operation, when operating in the dense-dense mode, each of the (M×N) processing units generates K products of matched A-matrix and B-matrix elements received from the specified A and B matrices (a match exists when the B-matrix element has the same row address as the column address of the A-matrix element), and accumulates each generated product with the corresponding element of the specified C matrix, the corresponding C-matrix element having the same relative position as the position of the processing unit in the processing array.
When operating in the dense-sparse mode, each of the (M×N) MACs in the grid of MACs 308 determines whether there is an address match between the specified logical row address of the B-matrix element and the column address of the A-matrix element. When there is a match, a product is generated. When there is no match, if the column address of the A-matrix element is greater than the specified logical row address of the B-matrix element, the A-matrix element is held and the B-matrix element is passed; otherwise, the B-matrix element is held and the A-matrix element is passed.
When operating in the sparse-sparse mode, each of the (M×N) MACs in the grid of MACs 308 determines whether there is a match between the specified logical column address of the A-matrix element and the specified logical row address of the B-matrix element. When there is a match, a product is generated. When there is no match, if the specified logical column address of the A-matrix element is greater than the specified logical row address of the B-matrix element, the A-matrix element is held and the B-matrix element is passed; otherwise, the B-matrix element is held and the A-matrix element is passed.
Execution circuit 300 is further described and exemplified below at least with respect to FIGS. 2, 4-6, 11A-B, and 12A-B.
FIG. 4 is a block flow diagram illustrating an execution pipeline for executing variable format, variable sparse matrix multiplication (VFVSMM) instructions according to some embodiments. As shown, A matrix 402 and B matrix 404 are both dense matrices. In operation, in response to the decoded VFVSMM instruction, the execution circuit operating in the dense-dense mode routes each row of the specified A matrix (staggering subsequent rows) into the corresponding row of the processing array having (M×N) processing units, and routes each column of the specified B matrix (staggering subsequent columns) into the corresponding column of the processing array.
Also shown are seven consecutive snapshots 408A-G taken at seven points, 418, 420, 422, 424, 426, 426, and 428, along timeline 400.
In some embodiments, staggering subsequent rows and columns refers to delaying the routing of each subsequent row and column into the corresponding row and column of the processing array by one cycle. Staggering subsequent rows and columns has the advantage of creating a pipeline that aligns the horizontal and vertical streams so as to perform 27 multiply-accumulates in 7 cycles, as shown in snapshots 408A-G. Staggering the rows and columns also allows each processing unit to infer the row and column addresses of each received A-matrix and B-matrix element from the clock cycle and the relative position of the processing unit within the processing array.
As shown in snapshot 408A, row 0 of A matrix 402 and column 0 of B matrix 404 are routed into the corresponding row and column of the processing array. Also at 408A, the corresponding elements of A matrix 402 and B matrix 404 are multiplied at element C(0,0) of C matrix 406 and accumulated with the previous data there.
At snapshot 408B, one cycle has passed; row 0 of A matrix 402 and column 0 of B matrix 404 have each advanced by one position, and row 1 of A matrix 402 and column 1 of B matrix 404 are routed into the corresponding row and column of the processing array. Also at 408B, the corresponding elements of A matrix 402 and B matrix 404 are multiplied at C(0,0), C(0,1), and C(1,0). The product generated at C(0,0) is accumulated with the previous value generated at 408A.
At snapshot 408C, another cycle has passed; rows 0 and 1 of A matrix 402 and columns 0 and 1 of B matrix 404 have advanced by one position, and row 2 of A matrix 402 and column 2 of B matrix 404 are routed into the corresponding row and column of the processing array. Also at 408C, the corresponding elements of A matrix 402 and B matrix 404 are multiplied at C(0,0), C(0,1), C(0,2), C(1,0), C(1,1), and C(2,0) and accumulated with the previous values (if any). As shown by its bold outline, the accumulated product generated at C(0,0) is the final value of C(0,0).
At snapshot 408D, another cycle has passed, and rows 0-2 of A matrix 402 and columns 0-2 of B matrix 404 have advanced by one position. Also at 408D, the corresponding elements of A matrix 402 and B matrix 404 are multiplied at C(0,1), C(0,2), C(1,0), C(1,1), C(1,2), C(2,0), and C(2,1) and accumulated with the previous values (if any). As shown by their bold outlines, the accumulated products generated at C(0,1) and C(1,0) are the final values of C(0,1) and C(1,0).
At snapshot 408E, another cycle has passed, and rows 0-2 of A matrix 402 and columns 0-2 of B matrix 404 have advanced by one position. Also at 408E, the corresponding elements of A matrix 402 and B matrix 404 are multiplied at C(0,2), C(1,1), C(1,2), C(2,0), C(2,1), and C(2,2) and accumulated with the previous values (if any). As shown by their bold outlines, the accumulated products generated at C(0,2), C(1,1), and C(2,0) are the final values of C(0,2), C(1,1), and C(2,0).
At snapshot 408F, another cycle has passed, and rows 1-2 of A matrix 402 and columns 1-2 of B matrix 404 have advanced by one position. Also at 408F, the corresponding elements of A matrix 402 and B matrix 404 are multiplied at C(1,2), C(2,1), and C(2,2) and accumulated with the previous values (if any).
As shown by their bold outlines, the accumulated products generated at C(2,1) and C(1,2) are the final values of C(2,1) and C(1,2).
At snapshot 408G, another cycle has passed, and row 2 of A matrix 402 and column 2 of B matrix 404 have advanced by one position. Also at 408G, the corresponding elements of A matrix 402 and B matrix 404 are multiplied at C(2,2) and accumulated with the previous value (if any). As shown by its bold outline, the accumulated product generated at C(2,2) is the final value of C(2,2).
FIG. 5 is a block diagram illustrating routing control signals shared between the processing units and the routing circuit while executing variable format, variable sparse matrix multiplication (VFVSMM) instructions, according to some embodiments. Shown are four snapshots 550A-550D over four cycles of a portion (four nodes) of a single row of processing units operating in the sparse-sparse mode. Each of the four snapshots 550A-550D shows four processing units 554A-D through 560A-D, each processing unit receiving a vertical data-element input with a row address and a horizontal data-element input with a column address. In some embodiments, the row address of the horizontal element and the column address of the vertical element are implicitly defined by the relative position of the processing unit within the processing array.
Owing to the formatting requirements on input matrices A and B, the row and column addresses never decrease. Rather, the row and column addresses of consecutive data elements each increase by 1 when in a dense mode, and by 0 or more when in a sparse mode, until the end of the row or column is reached (if a hold request took effect in the previous cycle, the address remains unchanged in sparse mode, as illustrated and described for snapshot 550A).
To illustrate the handshaking control signals shared between nodes according to some embodiments, the processing array of FIG. 5 operates in the sparse-sparse mode.
In operation, each processing unit operating in a sparse mode compares the row address of the vertically received element with the column address of the horizontally received element. (Note that such an address check can be performed, but is not required, when operating in the dense mode, during which the address of each element increases by exactly 1. When operating in the dense-sparse mode, only the input addresses received from the sparse matrix are checked.)
If the addresses match, and if no downstream processing unit has requested that the vertical and horizontal elements be held, the processing unit multiplies the received elements and accumulates the product with the previous contents of the corresponding destination matrix element. As used herein, the term "corresponding" means that the relative position of the destination matrix element in the (M×N) destination matrix is the same as the relative position of the processing unit in the (M×N) processing array.
If the addresses do not match, however, the processing unit holds the element with the higher address and lets the other element pass. Since row and column addresses never decrease, there is no point in holding the lower-addressed element; its address will never be matched. The processing unit instead keeps the higher-addressed element for use when a matching element arrives in a future cycle.
In some embodiments, each processing unit has storage elements, such as registers or flip-flops, for holding data elements.
In addition, when the addresses do not match, the processing unit sends a hold request in the upstream direction of the held data element, so that the data element continues to be driven, and sends a hold notification signal in the downstream direction of the held data element.
In some embodiments, a processing unit that receives a hold request from a downstream node generates and sends a corresponding hold request upstream.
In some embodiments, for example as illustrated and described for snapshot 550A, the processing unit, in conjunction with the routing circuit, broadcasts a data element downstream to two or more processing units that can use the element.
Table 1 lists the handshaking controls used between processing units.

Table 1 - Handshake control
Signal             Description
Hold request       Requests that the upstream node hold the data element
Hold notification  Notifies downstream nodes of the plan to hold

To illustrate the execution snapshots of FIG. 5: before cycle 1 550A, the four processing units 554A-560A hold vertical elements whose row addresses are equal to 3, 2, 1, and 0, respectively. As shown, the vertical data elements reaching processing units 556A and 560A in cycle 1 550A both have a row address equal to "4", but since they are in different columns they may carry different data. Also in cycle 1 550A, the horizontal element with a column address equal to "4" is broadcast to each processing unit that can use it (due to the horizontal hold request generated by processing unit 558A during cycle 1 550A, the data element with column address "4" is held in the flip-flop of processing node 556B during cycle 2 550B). Also in cycle 1, the horizontal hold request from processing unit 558A causes processing unit 556A to hold the horizontal element addressed "4" during the next cycle, so that data element appears again in cycle 2 550B.
As shown, the vertical data element received by processing unit 558A in cycle 1 has a row address equal to "2" and is likely being held by its flip-flop due to a vertical hold request during the previous cycle. Processing unit 558A has no vertical hold request for cycle 1. Because the horizontal input address "4" is higher than the vertical input address "2", processing unit 558A generates a horizontal hold request and sends it back to 556A, so that the flip-flop in 556B closes in cycle 2.
As shown, processing unit 558A also generates and sends a horizontal hold notification downstream during cycle 1 550A. In operation, a downstream processing unit that receives the horizontal hold notification puts it to one or more uses (not shown): 1) there may be multiple hold requests propagating upstream, and the hold notification is sent from the node that will actually hold the data; in this case, if 560A receives a hold request during cycle 1, the hold notification from 558A signals to node 560A that some node upstream is holding the data, so 560A does not need to hold it; and 2) the hold notification from a node also affects whether the multiplication is performed.
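The per-cycle decision narrated above can be summarized in a toy model (our own simplification; real units also track the hold requests and notifications of Table 1 across cycles):

    def handshake_step(h_addr, v_addr):
        """h_addr: column address of the horizontally received element;
        v_addr: row address of the vertically received element.
        Returns (action, hold_request, hold_notification)."""
        if h_addr == v_addr:
            return ("multiply-accumulate", None, None)
        if h_addr > v_addr:
            # hold the higher-addressed horizontal element, pass the vertical one
            return ("hold-horizontal", "to-upstream", "to-downstream")
        return ("hold-vertical", "to-upstream", "to-downstream")

    # 558A in cycle 1: horizontal address 4 vs. vertical address 2 -> hold the
    # horizontal element and signal upstream (hold request) and downstream
    # (hold notification), as in snapshot 550A.
    assert handshake_step(4, 2)[0] == "hold-horizontal"
    assert handshake_step(3, 3)[0] == "multiply-accumulate"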
Execution continues in cycle 2 550B: because the addresses match between the input elements at address 5, between the registered elements at address 4, and between the input and registered elements at address 4, multiplication and accumulation occur at processing units 554B, 556B, and 560B.
Execution continues at cycle 3 550C, where multiplication and accumulation occur at all four nodes, because in each case there is an address match, either between input elements (as at 554C, 558C, and 560C) or between registered elements (as at 556C). Note that processing unit 554C performs multiplication and accumulation on input data elements, while processing unit 556C performs multiplication and accumulation on registered data elements. In some embodiments, as shown, in response to determining the address match between the horizontal and vertical elements, the execution circuit performs multiplication and accumulation on the registered elements with address 5 at 556C, and then performs multiplication and accumulation on the input elements with address 6 during the next cycle.
Execution continues at cycle 4 550D, where multiplication and accumulation occur because there is an address match between the input elements (as at 554D) or the registered elements (as at 556D).
FIG. 6 is a block flow diagram illustrating a processor performing a variable format, variable sparse matrix multiplication (VFVSMM) according to some embodiments. As shown, in response to a VFVSMM instruction, the processor executing flow 600 fetches, at 602 using fetch circuitry, the variable format, variable sparse matrix multiplication (VFVSMM) instruction, which specifies the storage locations of A, B, and C matrices having (M×K), (K×N), and (M×N) elements, respectively. At 604, the processor decodes the fetched matrix multiplication instruction using decode circuitry. In some embodiments, at 606, the processor retrieves the data elements associated with the specified A and B matrices. Operation 606 is optional, as indicated by its dashed border, insofar as the data elements may be retrieved at a different time, or not retrieved at all. At 608, the processor responds to the decoded VFVSMM instruction, using an execution circuit operating in the dense-dense mode, by routing each row of the specified A matrix (staggering subsequent rows) into the corresponding row of a processing array having (M×N) processing units, and routing each column of the specified B matrix (staggering subsequent columns) into the corresponding column of the processing array. In some embodiments not shown, the processor routes each row of the specified A matrix and each column of the specified B matrix at a rate of one element per clock cycle, staggering each subsequent row and subsequent column by one clock cycle, and each of the (M×N) processing units infers the column and row addresses of each received A-matrix and B-matrix element from the clock cycle and the relative position of the processing unit within the processing array. In some embodiments not shown, the processor maintains an element count or element index to which each processing unit refers, without inferring anything. In some embodiments not shown, each A-matrix element and B-matrix element includes a field specifying its logical position within the A or B matrix. At 610, the processor, through each of the (M×N) processing units, generates K products of matching A-matrix and B-matrix elements received from the specified A and B matrices (a match exists when the B-matrix element has the same row address as the column address of the A-matrix element), and accumulates each generated product with the corresponding element of the specified C matrix, the corresponding C-matrix element having the same relative position as the position of the processing unit in the processing array.
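The address inference mentioned at 608 can be made concrete under one consistent assumption (ours, consistent with the seven-cycle example of FIG. 4): if row r of the A matrix and column c of the B matrix begin streaming at cycles r and c respectively, then the element pair meeting at processing unit (r, c) at clock cycle t shares the index

    k = t - r - c, valid while 0 <= k < K,

so the final product lands at cycle (M-1) + (N-1) + (K-1), and a 3x3x3 multiplication completes in M + N + K - 2 = 7 cycles, matching snapshots 408A-G.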
FIG. 7 is a block diagram illustrating a variable precision integer/floating point multiply-accumulate circuit according to some embodiments. As shown, multiply-accumulate circuit 700 includes an FP16/INT16/INT8 multiplier 701 and an FP32/INT48/INT24 accumulator 702.
In operation, the FP16/INT16/INT8 multiplier 701 is reconfigured among 8b/16b integer and 16b floating point inputs to support different performance, numerical range, and precision requirements. In addition, wide accumulators can provide more accurate results for high-dimensional matrices. The MAC is organized as a two-cycle pipeline, with multiplication followed by accumulation. Four 8b multipliers 703A-703D (each input of each multiplier independently reconfigurable as signed/unsigned) deliver four 16b results in INT8 mode and one 32b result in INT16 mode; the 32b result uses each of the four as one quadrant of the 16b multiplication, summing the results with the correct significance. Floating point operation maps the 11b mantissa multiplication onto a 16b multiplier, and the entire unnormalized 22b significand is sent to the subsequent adder stage along with the single-precision product exponent. The unnormalized product requires only an additional guard bit for floating point addition, and normalization is removed from the multiplier critical path. The FP16 multiplication result can be represented exactly within the FP32 product range and precision, eliminating underflow/overflow detection and saturation logic, as well as any rounding logic, in the multiplier.
The delay of the FP32 adder directly affects MAC throughput because it cannot be pipelined for back-to-back accumulation. Instead of a leading zero detector (LZD) after the adder, a leading zero anticipator (LZA) 704 is used in parallel with the 32b adder to compute the normalizing left shift. The adder output may need to be inverted to produce an unsigned mantissa for a subtraction operation. By delaying this negation step until after normalization, the criticality of the late-arriving adder MSB (most significant bit) is hidden. After the inversion, two's complement negation also requires an increment; this is combined with the final rounding incrementer to remove it from the critical path. For area-efficient reconfiguration, 32b integer products are accumulated using the mantissa adder, while 16b products require two additional 16b adders for 4-way throughput. Wide integer accumulation uses four 8b incrementers/decrementers for the upper bits, two of which can be reconfigured to operate as 16b units for 48b accumulation. Bypass multiplexers shorten the critical path when the mantissa datapath is reconfigured for integer mode. Optimal placement of datapath isolation gates and mode-based clock gating ensures that there is no switching activity in unused logic and clock nodes. Adding INT48 support to the FP32 accumulator increases its area by 20%, and 4-way INT24 support adds another 8%.
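The precision contract described for FIG. 7 can be checked in software: an FP16 x FP16 product has at most a 22-bit significand and therefore fits exactly in FP32, so the only rounding happens in the wide accumulator. A behavioral sketch (not the circuit itself):

    import numpy as np

    def mac_fp16_into_fp32(a16, b16, acc32):
        """One multiply-accumulate: FP16 inputs, FP32 accumulation."""
        prod = np.float32(a16) * np.float32(b16)  # exact: 11b x 11b <= 22b significand
        return np.float32(acc32 + prod)           # single rounding, in the accumulator

    a = np.float16(1.0009765625)  # 1 + 2**-10, exactly representable in FP16
    acc = np.float32(0.0)
    for _ in range(4):            # small dot product into a wide accumulator
        acc = mac_fp16_into_fp32(a, a, acc)
    assert acc == np.float32(4.0) * np.float32(a) * np.float32(a)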
FIG. 8A is a block diagram illustrating a format for a variable format, variable sparse matrix multiplication (VFVSMM) instruction according to some embodiments. As shown, VFVSMM instruction 800 includes opcode 801 (VFVSMM*) and fields to specify destination 802, source 1 803, and source 2 804 matrices. The source 1, source 2, and destination matrices are sometimes referred to herein as the A, B, and C matrices, respectively. VFVSMM instruction 800 also includes optional fields to specify data format 805, such as integer, half-precision floating point, single-precision floating point, or double-precision floating point, and element size 806 in bits per matrix element. Data format 805 can even specify a custom format, depending on the implementation. VFVSMM instruction 800 sometimes includes fields specifying M 807, N 808, and K 809, where the specified A, B, and C matrices have (M×K), (K×N), and (M×N) data elements, respectively. As indicated by their dashed borders, data format 805, element size 806, M 807, N 808, and K 809 are optional; for example, they may be omitted, in which case predetermined default values apply. In some embodiments, one or more of data format 805, element size 806, M 807, N 808, and K 809 are specified as part of opcode 801, for example as a selected code, suffix, or prefix of the opcode. For example, opcode 801 may include a suffix such as "B", "W", "D", or "Q" to specify an element size of 8, 16, 32, or 64 bits, respectively. Opcode 801 is shown including an asterisk to indicate that it may optionally include additional prefixes or suffixes to specify instruction behavior. If the VFVSMM instruction 800 does not specify any of the optional parameters, predetermined default values are applied as needed. The format of the VFVSMM instruction 800 is further illustrated and described with respect to FIGS. 8B-C and 9A-D.
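As a concrete (and purely hypothetical) illustration of the fields of FIG. 8A, the instruction could be modeled in software as a record whose optional members fall back to defaults when omitted; the operand names below are invented for the example, not taken from the patent:

    from dataclasses import dataclass

    @dataclass
    class VFVSMMInstruction:
        opcode: str                # 801, e.g. "VFVSMM" plus optional suffixes
        destination: str           # 802, the C matrix
        source1: str               # 803, the A matrix
        source2: str               # 804, the B matrix
        data_format: str = "int8"  # 805 (optional; default applies if omitted)
        element_size: int = 8      # 806, bits per matrix element (optional)
        M: int = 16                # 807 (optional)
        N: int = 16                # 808 (optional)
        K: int = 16                # 809 (optional)

    # "VFVSMM.B" follows the "B" suffix convention above for 8-bit elements.
    insn = VFVSMMInstruction("VFVSMM.B", "matC", "matA", "matB",
                             data_format="int8", element_size=8)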
Instruction Set
An instruction set may include one or more instruction formats. A given instruction format may define various fields (e.g., number of bits, location of bits) to specify, among other things, the operation to be performed (e.g., the opcode) and the operand(s) on which that operation is to be performed, and/or other data fields (e.g., a mask). Some instruction formats are further broken down through the definition of instruction templates (or subformats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because fewer fields are included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands. For example, an exemplary ADD instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source1/destination and source2); and an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands.
A set of SIMD extensions referred to as Advanced Vector Extensions (AVX) (AVX1 and AVX2) and using the Vector Extensions (VEX) coding scheme has been released and/or published (see, for example, Intel® 64 and IA-32 Architectures Software Developer's Manual, September 2014; and Intel® Advanced Vector Extensions Programming Reference, October 2014).

Exemplary Instruction Formats
Embodiments of the instruction(s) described herein may be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

Generic Vector Friendly Instruction Format
A vector friendly instruction format is an instruction format that is suited for vector instructions (e.g., there are certain fields specific to vector operations). While embodiments are described in which both vector and scalar operations are supported through the vector friendly instruction format, alternative embodiments use only vector operations through the vector friendly instruction format.
FIGS. 8B-8C are block diagrams showing a generic vector friendly instruction format and instruction templates thereof according to embodiments of the present invention. FIG. 8B is a block diagram showing the generic vector friendly instruction format and class A instruction templates thereof according to an embodiment of the present invention, and FIG. 8C is a block diagram showing the generic vector friendly instruction format and class B instruction templates thereof according to an embodiment of the present invention. Specifically, a generic vector friendly instruction format 811 is defined with class A and class B instruction templates, both of which include no-memory-access 812 instruction templates and memory-access 820 instruction templates. The term "generic" in the context of the vector friendly instruction format refers to the instruction format not being tied to any specific instruction set.
The class B instruction templates in FIG. 8C include: 1) within the no memory access 812 instruction templates, a no memory access, write mask control, partial round control type operation 814 instruction template and a no memory access, write mask control, vsize type operation 817 instruction template are shown; and 2) within the memory access 820 instruction templates, a memory access, write mask control 827 instruction template is shown.

The generic vector friendly instruction format 811 includes the following fields listed below in the order illustrated in FIGS. 8B-8C.

Format field 840-a specific value (an instruction format identifier value) in this field uniquely identifies the vector friendly instruction format, and thus occurrences of instructions in the vector friendly instruction format in instruction streams. As such, this field is optional in the sense that it is not needed for an instruction set that has only the generic vector friendly instruction format.

Base operation field 842-its content distinguishes different base operations.

Register index field 844-its content, directly or through address generation, specifies the locations of the source and destination operands, be they in registers or in memory. These include a sufficient number of bits to select N registers from a PxQ (e.g., 32x512, 16x128, 32x1024, 64x1024) register file. While in one embodiment N may be up to three sources and one destination register, alternative embodiments may support more or fewer sources and destination registers (e.g., may support up to two sources where one of these sources also acts as the destination; may support up to three sources where one of these sources also acts as the destination; may support up to two sources and one destination).

Modifier field 846-its content distinguishes occurrences of instructions in the generic vector instruction format that specify memory access from those that do not; that is, between no memory access 812 instruction templates and memory access 820 instruction templates. Memory access operations read and/or write to the memory hierarchy (in some cases specifying the source and/or destination addresses using values in registers), while non-memory access operations do not (e.g., the source and destinations are registers). While in one embodiment this field also selects between three different ways to perform memory address calculations, alternative embodiments may support more, fewer, or different ways to perform memory address calculations.

Enhanced operation field 850-its content distinguishes which one of a variety of different operations is to be performed in addition to the base operation. This field is context specific. In one embodiment of the invention, this field is divided into a class field 868, an alpha field 852, and a beta field 854.
The enhanced operation field 850 allows common groups of operations to be performed in a single instruction rather than 2, 3, or 4 instructions.

Scale field 860-its content allows for the scaling of the index field's content for memory address generation (e.g., for address generation that uses 2^scale * index + base).

Displacement field 862A-its content is used as part of memory address generation (e.g., for address generation that uses 2^scale * index + base + displacement).

Displacement factor field 862B (note that the juxtaposition of the displacement field 862A directly over the displacement factor field 862B indicates that one or the other is used)-its content is used as part of address generation; it specifies a displacement factor that is to be scaled by the size (N) of a memory access, where N is the number of bytes in the memory access (e.g., for address generation that uses 2^scale * index + base + scaled displacement). Redundant low-order bits are ignored, and hence the displacement factor field's content is multiplied by the memory operand's total size (N) in order to generate the final displacement used in calculating an effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field 874 (described later herein) and the data manipulation field 854C. The displacement field 862A and the displacement factor field 862B are optional in the sense that, for example, they are not used for the no memory access 812 instruction templates and/or different embodiments may implement only one or neither of the two.

Data element width field 864-its content distinguishes which one of a number of data element widths is to be used (in some embodiments for all instructions; in other embodiments for only some of the instructions). This field is optional in the sense that it is not needed if, for example, only one data element width is supported and/or data element widths are supported using some aspect of the opcodes.

Write mask field 870-its content controls, on a per data element position basis, whether that data element position in the destination vector operand reflects the result of the base operation and the enhanced operation. Class A instruction templates support merging write masking, while class B instruction templates support both merging and zeroing write masking. When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the enhanced operation); in another embodiment, the old value of each element of the destination where the corresponding mask bit has a 0 is preserved. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the enhanced operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last one); however, it is not necessary that the elements that are modified be consecutive. Thus, the write mask field 870 allows for partial vector operations, including loads, stores, arithmetic, logical, etc.
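One way to see the merging/zeroing distinction from C is through the AVX-512 intrinsics, which expose both behaviors per instruction. The sketch below is illustrative only and assumes an AVX-512F-capable compiler and CPU.

    #include <immintrin.h>
    #include <stdio.h>

    int main(void) {
        __m512 src = _mm512_set1_ps(-1.0f);   /* old destination contents */
        __m512 a   = _mm512_set1_ps(2.0f);
        __m512 b   = _mm512_set1_ps(3.0f);
        __mmask16 k = 0x00FF;                 /* update lanes 0-7 only */

        /* Merging: masked-off lanes keep the old value from src. */
        __m512 merged = _mm512_mask_add_ps(src, k, a, b);

        /* Zeroing: masked-off lanes are set to 0. */
        __m512 zeroed = _mm512_maskz_add_ps(k, a, b);

        float m[16], z[16];
        _mm512_storeu_ps(m, merged);
        _mm512_storeu_ps(z, zeroed);
        printf("lane 12: merged=%.1f zeroed=%.1f\n", m[12], z[12]); /* -1.0 vs 0.0 */
        return 0;
    }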
While embodiments of the invention are described in which the write mask field's 870 content selects one of a number of write mask registers that contains the write mask to be used (and thus the write mask field's 870 content indirectly identifies the masking to be performed), alternative or additional embodiments instead allow the write mask field's 870 content to directly specify the masking to be performed.

Immediate field 872-its content allows for the specification of an immediate. This field is optional in the sense that, for example, it is not present in implementations of the generic vector friendly format that do not support immediates and it is not present in instructions that do not use an immediate.

Class field 868-its content distinguishes between different classes of instructions. With reference to FIGS. 8A-B, the content of this field selects between class A and class B instructions. In FIGS. 8A-B, rounded corner squares are used to indicate that a specific value is present in a field (e.g., class A 868A and class B 868B for the class field 868 in FIGS. 8A-B, respectively).

Instruction templates of class A

In the case of the no memory access 812 instruction templates of class A, the alpha field 852 is parsed as an RS field 852A, whose content distinguishes which one of the different enhanced operation types is to be performed (e.g., round 852A.1 and data transform 852A.2 are respectively specified for the no memory access, round type operation 813 and the no memory access, data transform type operation 815 instruction templates), while the beta field 854 distinguishes which of the operations of the specified type is to be performed. In the no memory access 812 instruction templates, the scale field 860, the displacement field 862A, and the displacement factor field 862B are not present.

No memory access instruction templates-full round control type operation

In the no memory access full round control type operation 813 instruction template, the beta field 854 is parsed as a round control field 854A, whose content(s) provide static rounding. While in the described embodiments of the invention the round control field 854A includes a suppress all floating-point exceptions (SAE) field 856 and a round operation control field 858, alternative embodiments may encode both of these concepts into the same field, or may have only one or the other of these concepts/fields (e.g., may have only the round operation control field 858).

SAE field 856-its content distinguishes whether or not to disable exception event reporting; when the SAE field's 856 content indicates that suppression is enabled, a given instruction does not report any kind of floating-point exception flag and does not raise any floating-point exception handler.

Round operation control field 858-its content distinguishes which one of a group of rounding operations to perform (e.g., round up, round down, round towards zero, and round to nearest). Thus, the round operation control field 858 allows for the changing of the rounding mode on a per-instruction basis. In one embodiment of the invention where a processor includes a control register for specifying rounding modes, the round operation control field's 858 content overrides that register value.
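AVX-512's embedded-rounding intrinsics give a concrete view of per-instruction rounding control combined with SAE. The example below is a sketch and assumes AVX-512F support; note the rounding argument must be a compile-time constant.

    #include <immintrin.h>
    #include <stdio.h>

    int main(void) {
        __m512d a = _mm512_set1_pd(1.0);
        __m512d b = _mm512_set1_pd(10.0);

        /* The same division performed with two per-instruction rounding
         * modes, each combined with SAE (_MM_FROUND_NO_EXC). */
        __m512d up = _mm512_div_round_pd(a, b,
                         _MM_FROUND_TO_POS_INF | _MM_FROUND_NO_EXC);
        __m512d dn = _mm512_div_round_pd(a, b,
                         _MM_FROUND_TO_NEG_INF | _MM_FROUND_NO_EXC);

        double u[8], d[8];
        _mm512_storeu_pd(u, up);
        _mm512_storeu_pd(d, dn);
        /* 1/10 is not exactly representable, so the results differ by 1 ulp. */
        printf("up=%.20f down=%.20f\n", u[0], d[0]);
        return 0;
    }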
No memory access instruction templates-data transform type operation

In the no memory access data transform type operation 815 instruction template, the beta field 854 is parsed as a data transform field 854B, whose content distinguishes which one of a number of data transforms is to be performed (e.g., no data transform, swizzle, broadcast).

In the case of the memory access 820 instruction templates of class A, the alpha field 852 is parsed as an eviction hint field 852B, whose content distinguishes which one of the eviction hints is to be used (in FIG. 8A, temporal 852B.1 and non-temporal 852B.2 are respectively specified for the memory access, temporal 825 instruction template and the memory access, non-temporal 830 instruction template), while the beta field 854 is parsed as a data manipulation field 854C, whose content distinguishes which one of a number of data manipulation operations (also known as primitives) is to be performed (e.g., no manipulation; broadcast; up conversion of a source; and down conversion of a destination). The memory access 820 instruction templates include the scale field 860, and optionally the displacement field 862A or the displacement factor field 862B.

Vector memory instructions perform vector loads from and vector stores to memory, with conversion support. As with regular vector instructions, vector memory instructions transfer data from/to memory in a data-element-wise fashion, with the elements that are actually transferred dictated by the contents of the vector mask that is selected as the write mask.

Memory access instruction templates-temporal

Temporal data is data likely to be reused soon enough to benefit from caching. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Memory access instruction templates-non-temporal

Non-temporal data is data unlikely to be reused soon enough to benefit from caching in the first-level cache, and should be given priority for eviction. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Instruction templates of class B

In the case of the instruction templates of class B, the alpha field 852 is parsed as a write mask control (Z) field 852C, whose content distinguishes whether the write masking controlled by the write mask field 870 should be merging or zeroing.

In the case of the no memory access 812 instruction templates of class B, part of the beta field 854 is parsed as an RL field 857A, whose content distinguishes which one of the different enhanced operation types is to be performed (e.g., round 857A.1 and vector length (VSIZE) 857A.2 are respectively specified for the no memory access, write mask control, partial round control type operation 814 instruction template and the no memory access, write mask control, VSIZE type operation 817 instruction template), while the rest of the beta field 854 distinguishes which of the operations of the specified type is to be performed.
In the no memory access 812 instruction templates, the scale field 860, the displacement field 862A, and the displacement factor field 862B are not present.

In the no memory access, write mask control, partial round control type operation 814 instruction template, the rest of the beta field 854 is parsed as a round operation field 859A, and exception event reporting is disabled (a given instruction does not report any kind of floating-point exception flag and does not raise any floating-point exception handler).

Round operation control field 859A-just as with the round operation control field 858, its content distinguishes which one of a group of rounding operations to perform (e.g., round up, round down, round towards zero, and round to nearest). Thus, the round operation control field 859A allows for the changing of the rounding mode on a per-instruction basis. In one embodiment of the invention where a processor includes a control register for specifying rounding modes, the round operation control field's 859A content overrides that register value.

In the no memory access, write mask control, VSIZE type operation 817 instruction template, the rest of the beta field 854 is parsed as a vector length field 859B, whose content distinguishes which one of a number of data vector lengths is to be operated on (e.g., 128, 256, or 512 bits).

In the case of the memory access 820 instruction templates of class B, part of the beta field 854 is parsed as a broadcast field 857B, whose content distinguishes whether or not the broadcast type data manipulation operation is to be performed, while the rest of the beta field 854 is parsed as the vector length field 859B. The memory access 820 instruction templates include the scale field 860, and optionally the displacement field 862A or the displacement factor field 862B.

With regard to the generic vector friendly instruction format 811, a full opcode field 874 is shown, including the format field 840, the base operation field 842, and the data element width field 864. While one embodiment is shown in which the full opcode field 874 includes all of these fields, in embodiments that do not support all of them, the full opcode field 874 includes less than all of these fields. The full opcode field 874 provides the operation code (opcode).

The enhanced operation field 850, the data element width field 864, and the write mask field 870 allow these features to be specified on a per-instruction basis in the generic vector friendly instruction format. The combination of the write mask field and the data element width field creates typed instructions, in that they allow the mask to be applied based on different data element widths.

The various instruction templates found within class A and class B are beneficial in different situations. In some embodiments of the invention, different processors or different cores within a processor may support only class A, only class B, or both classes. For instance, a high-performance general-purpose out-of-order core intended for general-purpose computing may support only class B, a core intended primarily for graphics and/or scientific (throughput) computing may support only class A, and a core intended for both may support both (of course, a core that has some mix of templates and instructions from both classes, but not all templates and instructions from both classes, is also within the scope of the invention).
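Returning to the vector length field 859B described above: the choice among 128-, 256-, and 512-bit operation corresponds, loosely, to the three register widths visible in C through the SSE/AVX/AVX-512 intrinsics. A minimal illustrative sketch (assuming a compiler and CPU supporting all three):

    #include <immintrin.h>

    /* The same packed add at the three vector lengths the
     * vector length field selects among (4, 8, and 16 floats). */
    __m128 add128(__m128 a, __m128 b) { return _mm_add_ps(a, b);    }
    __m256 add256(__m256 a, __m256 b) { return _mm256_add_ps(a, b); }
    __m512 add512(__m512 a, __m512 b) { return _mm512_add_ps(a, b); }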
In addition, a single processor may include multiple cores, all of which support the same class, or in which different cores support different classes. For instance, in a processor with separate graphics and general-purpose cores, one of the graphics cores intended primarily for graphics and/or scientific computing may support only class A, while one or more of the general-purpose cores may be high-performance general-purpose cores with out-of-order execution and register renaming, intended for general-purpose computing, that support only class B. Another processor that does not have a separate graphics core may include one or more general-purpose in-order or out-of-order cores that support both class A and class B. Of course, in different embodiments of the invention, features from one class may also be implemented in the other class. Programs written in a high-level language would be put (e.g., just-in-time compiled or statically compiled) into a variety of different executable forms, including: 1) a form having only instructions of the class(es) supported by the target processor for execution; or 2) a form having alternative routines written using different combinations of the instructions of all classes and having control flow code that selects the routines to execute based on the instructions supported by the processor that is currently executing the code.

Exemplary specific vector friendly instruction format

FIG. 9A is a block diagram illustrating an exemplary specific vector friendly instruction format according to embodiments of the invention. FIG. 9A shows a specific vector friendly instruction format 900 that is specific in the sense that, for example, it specifies the location, size, parsing, and order of the fields, as well as values for some of those fields. The specific vector friendly instruction format 900 may be used to extend the x86 instruction set, and thus some of the fields are similar or the same as those used in the existing x86 instruction set and extensions thereof (e.g., AVX). This format remains consistent with the prefix encoding field, real opcode byte field, MOD R/M field, SIB field, displacement field, and immediate field of the existing x86 instruction set with extensions. The fields from FIG. 8 into which the fields from FIG. 9A map are illustrated.

It should be understood that, although embodiments of the invention are described with reference to the specific vector friendly instruction format 900 in the context of the generic vector friendly instruction format 811 for illustrative purposes, the invention is not limited to the specific vector friendly instruction format 900 except where claimed. For example, the generic vector friendly instruction format 811 contemplates a variety of possible sizes for the various fields, while the specific vector friendly instruction format 900 is shown as having fields of specific sizes. By way of specific example, while the data element width field 864 is shown as a one-bit field in the specific vector friendly instruction format 900, the invention is not so limited (that is, the generic vector friendly instruction format 811 contemplates other sizes of the data element width field 864).

The specific vector friendly instruction format 900 includes the following fields listed below in the order illustrated in FIG. 9A.
EVEX prefix (bytes 0-3) 902-encoded in a four-byte form.

Format field 840 (EVEX byte 0, bits [7:0])-the first byte (EVEX byte 0) is the format field 840, and it contains 0x62 (the unique value used for distinguishing the vector friendly instruction format, in one embodiment of the invention).

The second through fourth bytes (EVEX bytes 1-3) include a number of bit fields providing specific capability.

REX field 905 (EVEX byte 1, bits [7-5])-consists of an EVEX.R bit field (EVEX byte 1, bit [7]-R), an EVEX.X bit field (EVEX byte 1, bit [6]-X), and an EVEX.B bit field (EVEX byte 1, bit [5]-B). The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the corresponding VEX bit fields, and are encoded using one's complement form, i.e., ZMM0 is encoded as 1111B and ZMM15 is encoded as 0000B. Other fields of the instruction encode the lower three bits of the register indexes as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding EVEX.R, EVEX.X, and EVEX.B.

REX' 910A-this is the first part of the REX' field 910 and is the EVEX.R' bit field (EVEX byte 1, bit [4]-R') that is used to encode either the upper 16 or the lower 16 of the extended 32-register set. In one embodiment of the invention, this bit, along with others as indicated below, is stored in bit-inverted format to distinguish it (in the well-known x86 32-bit mode) from the BOUND instruction, whose real opcode byte is 62, but which does not accept the value of 11 in the MOD field of the MOD R/M field (described below); alternative embodiments of the invention do not store this bit and the other indicated bits below in inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R'Rrrr is formed by combining EVEX.R', EVEX.R, and the other RRR from other fields.

Opcode map field 915 (EVEX byte 1, bits [3:0]-mmmm)-its content encodes an implied leading opcode byte (0F, 0F 38, or 0F 3A).

Data element width field 864 (EVEX byte 2, bit [7]-W)-represented by the notation EVEX.W. EVEX.W is used to define the granularity (size) of the data type (either 32-bit data elements or 64-bit data elements).

EVEX.vvvv 920 (EVEX byte 2, bits [6:3]-vvvv)-the role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first source register operand, specified in inverted (one's complement) form, and is valid for instructions with 2 or more source operands; 2) EVEX.vvvv encodes the destination register operand, specified in one's complement form for certain vector shifts; or 3) EVEX.vvvv does not encode any operand, in which case the field is reserved and should contain 1111b. Thus, the EVEX.vvvv field 920 encodes the 4 low-order bits of the first source register specifier stored in inverted (one's complement) form. Depending on the instruction, an extra different EVEX bit field is used to extend the specifier size to 32 registers.

EVEX.U 868 class field (EVEX byte 2, bit [2]-U)-if EVEX.U=0, it indicates class A or EVEX.U0; if EVEX.U=1, it indicates class B or EVEX.U1.

Prefix encoding field 925 (EVEX byte 2, bits [1:0]-pp)-provides additional bits for the base operation field. In addition to providing support for the legacy SSE instructions in the EVEX prefix format, this also has the benefit of compacting the SIMD prefix (rather than requiring a byte to express the SIMD prefix, the EVEX prefix requires only 2 bits). In one embodiment, to support legacy SSE instructions that use a SIMD prefix (66H, F2H, F3H) in both the legacy format and in the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field, and at runtime are expanded into the legacy SIMD prefix prior to being provided to the decoder's PLA (so the PLA can execute both the legacy and EVEX formats of these legacy instructions without modification). Although newer instructions could use the EVEX prefix encoding field's content directly as an opcode extension, certain embodiments expand in a similar fashion for consistency, but allow for different meanings to be specified by these legacy SIMD prefixes. An alternative embodiment may redesign the PLA to support the 2-bit SIMD prefix encodings, and thus not require the expansion.
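The inverted (one's complement) storage of EVEX.R and EVEX.R' described above can be made concrete with a few lines of C. This is only an illustrative decode of a 5-bit destination register specifier from EVEX byte 1 plus ModR/M.reg, not a full decoder.

    #include <stdint.h>
    #include <stdio.h>

    /* Recover a 5-bit register number from EVEX.R', EVEX.R, and ModR/M.reg.
     * EVEX.R and EVEX.R' are stored inverted, so flip them before use. */
    static unsigned reg_from_evex(uint8_t evex_byte1, uint8_t modrm) {
        unsigned R   = ((evex_byte1 >> 7) & 1) ^ 1;  /* bit 7, inverted */
        unsigned Rp  = ((evex_byte1 >> 4) & 1) ^ 1;  /* bit 4, inverted */
        unsigned rrr = (modrm >> 3) & 7;             /* ModR/M.reg bits */
        return (Rp << 4) | (R << 3) | rrr;           /* R'Rrrr          */
    }

    int main(void) {
        /* Bits 7 and 4 both set (i.e., stored "not extended") => low registers. */
        printf("%u\n", reg_from_evex(0xF0, 0x00));   /* prints 0  (zmm0)  */
        printf("%u\n", reg_from_evex(0x60, 0x38));   /* prints 31 (zmm31) */
        return 0;
    }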
Alpha field 852 (EVEX byte 3, bit [7]-EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.write mask control, and EVEX.N; also illustrated with α)-as previously described, this field is context specific.

Beta field 854 (EVEX byte 3, bits [6:4]-SSS; also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also illustrated with β)-as previously described, this field is context specific.

REX' 910B-this is the remainder of the REX' field 910 and is the EVEX.V' bit field (EVEX byte 3, bit [3]-V') that may be used to encode either the upper 16 or the lower 16 of the extended 32-register set. This bit is stored in bit-inverted format. A value of 1 is used to encode the lower 16 registers. In other words, V'VVVV is formed by combining EVEX.V' and EVEX.vvvv.

Write mask field 870 (EVEX byte 3, bits [2:0]-kkk)-as previously described, its content specifies the index of a register in the write mask registers. In one embodiment of the invention, the specific value EVEX.kkk=000 has special behavior, implying that no write mask is used for the particular instruction (this may be implemented in a variety of ways, including the use of a write mask hardwired to all ones, or hardware that bypasses the masking hardware).

Real opcode field 930 (byte 4) is also known as the opcode byte. Part of the opcode is specified in this field.

MOD R/M field 940 (byte 5) includes the MOD field 942, the Reg field 944, and the R/M field 946. As previously described, the MOD field's 942 content distinguishes between memory access and non-memory access operations. The role of the Reg field 944 can be summarized to two situations: encoding either the destination register operand or a source register operand, or being treated as an opcode extension and not used to encode any instruction operand. The role of the R/M field 946 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.

Scale, Index, Base (SIB) byte (byte 6)-as previously described, the scale field's 860 content is used for memory address generation. SIB.xxx 954 and SIB.bbb 956-the contents of these fields have previously been referred to with regard to the register indexes Xxxx and Bbbb.

Displacement field 862A (bytes 7-10)-when the MOD field 942 contains 10, bytes 7-10 are the displacement field 862A, and it works the same as the legacy 32-bit displacement (disp32), operating at byte granularity.

Displacement factor field 862B (byte 7)-when the MOD field 942 contains 01, byte 7 is the displacement factor field 862B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity.
Since disp8 is sign-extended, it can only address offsets between -128 and 127 bytes; in terms of 64-byte cache lines, disp8 uses 8 bits that can be set to only four really useful values: -128, -64, 0, and 64. Since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field 862B is a reinterpretation of disp8; when the displacement factor field 862B is used, the actual displacement is determined by the content of the displacement factor field multiplied by the size of the memory operand access (N). This type of displacement is referred to as disp8*N. This reduces the average instruction length (a single byte used for the displacement, but with a much greater range). Such a compressed displacement is based on the assumption that the effective displacement is a multiple of the granularity of the memory access, and hence the redundant low-order bits of the address offset do not need to be encoded. In other words, the displacement factor field 862B substitutes for the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 862B is encoded the same way as the x86 instruction set 8-bit displacement (so no changes in the ModRM/SIB encoding rules), with the only exception that disp8 is overloaded to disp8*N. In other words, there are no changes in the encoding rules or encoding lengths, but only in the hardware parsing of the displacement value (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset). The immediate field 872 operates as previously described.
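A small worked example of the compressed displacement just described: with a 64-byte memory operand (N=64), the stored disp8 value is the byte offset divided by N, and the hardware rescales it during effective-address generation. The sketch below is illustrative only.

    #include <stdint.h>
    #include <stdio.h>

    /* Effective address with disp8*N: 2^scale * index + base + disp8 * N. */
    static uint64_t ea_disp8N(uint64_t base, uint64_t index, unsigned scale,
                              int8_t disp8, unsigned N) {
        return ((uint64_t)index << scale) + base + (int64_t)disp8 * (int64_t)N;
    }

    int main(void) {
        /* A byte offset of 256 with a 64-byte operand encodes as disp8 = 4,
         * so one displacement byte covers offsets up to 127*64 = 8128 bytes. */
        int64_t byte_offset = 256;
        unsigned N = 64;
        int8_t disp8 = (int8_t)(byte_offset / N);   /* 4 */
        printf("disp8=%d ea=%llu\n", disp8,
               (unsigned long long)ea_disp8N(0x1000, 2, 3, disp8, N));
        /* ea = 2^3*2 + 0x1000 + 4*64 = 16 + 4096 + 256 = 4368 */
        return 0;
    }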
Full opcode field

FIG. 9B is a block diagram illustrating the fields of the specific vector friendly instruction format 900 that make up the full opcode field 874 according to one embodiment of the invention. Specifically, the full opcode field 874 includes the format field 840, the base operation field 842, and the data element width (W) field 864. The base operation field 842 includes the prefix encoding field 925, the opcode map field 915, and the real opcode field 930.

Register index field

FIG. 9C is a block diagram illustrating the fields of the specific vector friendly instruction format 900 that make up the register index field 844 according to one embodiment of the invention. Specifically, the register index field 844 includes the REX field 905, the REX' field 910, the MODR/M.reg field 944, the MODR/M.r/m field 946, the VVVV field 920, the xxx field 954, and the bbb field 956.

Enhanced operation field

FIG. 9D is a block diagram illustrating the fields of the specific vector friendly instruction format 900 that make up the enhanced operation field 850 according to one embodiment of the invention. When the class (U) field 868 contains 0, it signifies EVEX.U0 (class A 868A); when it contains 1, it signifies EVEX.U1 (class B 868B). When U=0 and the MOD field 942 contains 11 (signifying a no memory access operation), the alpha field 852 (EVEX byte 3, bit [7]-EH) is parsed as the rs field 852A. When the rs field 852A contains a 1 (round 852A.1), the beta field 854 (EVEX byte 3, bits [6:4]-SSS) is parsed as the round control field 854A. The round control field 854A includes a one-bit SAE field 856 and a two-bit round operation field 858. When the rs field 852A contains a 0 (data transform 852A.2), the beta field 854 (EVEX byte 3, bits [6:4]-SSS) is parsed as a three-bit data transform field 854B.

When U=0 and the MOD field 942 contains 00, 01, or 10 (signifying a memory access operation), the alpha field 852 (EVEX byte 3, bit [7]-EH) is parsed as the eviction hint (EH) field 852B, and the beta field 854 (EVEX byte 3, bits [6:4]-SSS) is parsed as a three-bit data manipulation field 854C.

When U=1, the alpha field 852 (EVEX byte 3, bit [7]-EH) is parsed as the write mask control (Z) field 852C. When U=1 and the MOD field 942 contains 11 (signifying a no memory access operation), part of the beta field 854 (EVEX byte 3, bit [4]-S0) is parsed as the RL field 857A; when it contains a 1 (round 857A.1), the rest of the beta field 854 (EVEX byte 3, bits [6-5]-S2-1) is parsed as the round operation field 859A, while when the RL field 857A contains a 0 (VSIZE 857A.2), the rest of the beta field 854 (EVEX byte 3, bits [6-5]-S2-1) is parsed as the vector length field 859B (EVEX byte 3, bits [6-5]-L1-0). When U=1 and the MOD field 942 contains 00, 01, or 10 (signifying a memory access operation), the beta field 854 (EVEX byte 3, bits [6:4]-SSS) is parsed as the vector length field 859B (EVEX byte 3, bits [6-5]-L1-0) and the broadcast field 857B (EVEX byte 3, bit [4]-B).

Exemplary register architecture

FIG. 10 is a block diagram of a register architecture 1000 according to one embodiment of the invention. In the embodiment illustrated, there are 32 vector registers 1010 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower-order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower-order 128 bits of the lower 16 zmm registers (the lower-order 128 bits of the ymm registers) are overlaid on registers xmm0-15. The specific vector friendly instruction format 900 operates on these overlaid register files. That is, the vector length field 859B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length; instruction templates without the vector length field 859B operate on the maximum vector length. Further, in one embodiment, the class B instruction templates of the specific vector friendly instruction format 900 operate on packed or scalar single/double-precision floating-point data and packed or scalar integer data. Scalar operations are operations performed on the lowest-order data element position in a zmm/ymm/xmm register; depending on the embodiment, the higher-order data element positions are either left the same as they were prior to the instruction or zeroed.

Write mask registers 1015-in the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size. In an alternative embodiment, the write mask registers 1015 are 16 bits in size. As previously described, in one embodiment of the invention the vector mask register k0 cannot be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of all ones, effectively disabling write masking for that instruction.
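The zmm/ymm/xmm overlay described above is directly visible through the AVX-512 cast intrinsics, which reinterpret a register at a shorter length without moving data. A small illustrative sketch (assuming AVX-512F support):

    #include <immintrin.h>

    /* Casts are free: they simply re-view the low-order bits of the
     * same architectural register, mirroring the zmm/ymm/xmm overlay. */
    __m256 low256_of(__m512 z) { return _mm512_castps512_ps256(z); }
    __m128 low128_of(__m512 z) { return _mm512_castps512_ps128(z); }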
General-purpose registers 1025-in the embodiment illustrated, there are sixteen 64-bit general-purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.

Scalar floating-point stack register file (x87 stack) 1045, on which the MMX packed integer flat register file 1050 is aliased-in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating-point data using the x87 instruction set extension, while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.

Alternative embodiments of the invention may use wider or narrower registers. Additionally, alternative embodiments of the invention may use more, fewer, or different register files and registers.

Exemplary core architectures, processors, and computer architectures

Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general-purpose in-order core intended for general-purpose computing; 2) a high-performance general-purpose out-of-order core intended for general-purpose computing; and 3) a special-purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general-purpose in-order cores intended for general-purpose computing and/or one or more general-purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special-purpose cores intended primarily for graphics and/or science (throughput). Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as the CPU; 3) the coprocessor on the same die as the CPU (in which case such a coprocessor is sometimes referred to as special-purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special-purpose cores); and 4) a system on a chip that may include, on the same die, the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above-described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.

Exemplary core architectures

In-order and out-of-order core block diagram

FIG. 11A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. FIG. 11B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid-lined boxes in FIGS. 11A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed-lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.
In FIG. 11A, a processor pipeline 1100 includes a fetch stage 1102, a length decode stage 1104, a decode stage 1106, an allocation stage 1108, a renaming stage 1110, a scheduling (also known as dispatch or issue) stage 1112, a register read/memory read stage 1114, an execute stage 1116, a write back/memory write stage 1118, an exception handling stage 1122, and a commit stage 1124.

FIG. 11B shows a processor core 1190 including a front-end unit 1130 coupled to an execution engine unit 1150, with both the execution engine unit 1150 and the front-end unit 1130 coupled to a memory unit 1170. The core 1190 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 1190 may be a special-purpose core, such as, for example, a network or communication core, a compression engine, a coprocessor core, a general-purpose computing graphics processing unit (GPGPU) core, a graphics core, or the like.

The front-end unit 1130 includes a branch prediction unit 1132 coupled to an instruction cache unit 1134, which is coupled to an instruction translation lookaside buffer (TLB) 1136, which is coupled to an instruction fetch unit 1138, which is coupled to a decode unit 1140. The decode unit 1140 (or decoder) may decode instructions, and generate as an output one or more micro-operations, microcode entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 1140 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, lookup tables, hardware implementations, programmable logic arrays (PLAs), microcode read-only memories (ROMs), etc. In one embodiment, the core 1190 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in the decode unit 1140 or otherwise within the front-end unit 1130). The decode unit 1140 is coupled to a rename/allocator unit 1152 in the execution engine unit 1150.

The execution engine unit 1150 includes the rename/allocator unit 1152 coupled to a retirement unit 1154 and a set of one or more scheduler unit(s) 1156. The scheduler unit(s) 1156 represents any number of different schedulers, including reservation stations, a central instruction window, etc. The scheduler unit(s) 1156 is coupled to the physical register file unit(s) 1158. Each of the physical register file units 1158 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file unit 1158 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general-purpose registers.
The physical register file unit(s) 1158 is overlapped by the retirement unit 1154 to illustrate the various ways in which register renaming and out-of-order execution may be implemented (e.g., using reorder buffer(s) and retirement register file(s); using future file(s), history buffer(s), and retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 1154 and the physical register file unit(s) 1158 are coupled to the execution cluster(s) 1160. The execution cluster(s) 1160 includes a set of one or more execution units 1162 and a set of one or more memory access units 1164. The execution units 1162 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 1156, the physical register file unit(s) 1158, and the execution cluster(s) 1160 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline; a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline; and/or a memory access pipeline, each having its own scheduler unit, physical register file unit, and/or execution cluster; and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 1164). It should also be understood that, where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 1164 is coupled to the memory unit 1170, which includes a data TLB unit 1172 coupled to a data cache unit 1174, with the data cache unit 1174 coupled to a level 2 (L2) cache unit 1176. In one exemplary embodiment, the memory access units 1164 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 1172 in the memory unit 1170. The instruction cache unit 1134 is further coupled to the level 2 (L2) cache unit 1176 in the memory unit 1170.
The L2 cache unit 1176 is coupled to one or more other levels of cache and, eventually, to a main memory.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 1100 as follows: 1) the instruction fetch 1138 performs the fetch and length decode stages 1102 and 1104; 2) the decode unit 1140 performs the decode stage 1106; 3) the rename/allocator unit 1152 performs the allocation stage 1108 and the renaming stage 1110; 4) the scheduler unit(s) 1156 performs the schedule stage 1112; 5) the physical register file unit(s) 1158 and the memory unit 1170 perform the register read/memory read stage 1114, and the execution cluster 1160 performs the execute stage 1116; 6) the memory unit 1170 and the physical register file unit(s) 1158 perform the write back/memory write stage 1118; 7) various units may be involved in the exception handling stage 1122; and 8) the retirement unit 1154 and the physical register file unit(s) 1158 perform the commit stage 1124.

The core 1190 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, California; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, California), including the instruction(s) described herein. In one embodiment, the core 1190 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways, including time-sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time-sliced fetching and decoding and simultaneous multithreading thereafter, such as in Hyper-Threading technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 1134/1174 and a shared L2 cache unit 1176, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

Specific exemplary in-order core architecture

FIGS. 12A-B illustrate a block diagram of a more specific exemplary in-order core architecture, in which the core would be one of several logic blocks (potentially including other cores of the same type and/or different types) in a chip.
Depending on the application, the logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic.

FIG. 12A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 1202 and its local subset of the level 2 (L2) cache 1204, according to embodiments of the invention. In one embodiment, an instruction decoder 1200 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 1206 allows low-latency accesses to cache memory by the scalar and vector units. While in one embodiment (to simplify the design) a scalar unit 1208 and a vector unit 1210 use separate register sets (respectively, scalar registers 1212 and vector registers 1214) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 1206, alternative embodiments of the invention may use a different approach (e.g., use a single register set, or include a communication path that allows data to be transferred between the two register files without being written and read back).

The local subset of the L2 cache 1204 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 1204. Data read by a processor core is stored in its L2 cache subset 1204 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 1204 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bidirectional, allowing agents such as processor cores, L2 caches, and other logic blocks to communicate with each other within the chip. Each ring data path is 1012 bits wide per direction.

FIG. 12B is an expanded view of part of the processor core in FIG. 12A according to embodiments of the invention. FIG. 12B includes an L1 data cache 1206A (part of the L1 cache 1206), as well as more detail regarding the vector unit 1210 and the vector registers 1214. Specifically, the vector unit 1210 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 1228) that executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with a swizzle unit 1220, numeric conversion with numeric convert units 1222A-B, and replication of the memory input with a replication unit 1224. Write mask registers 1226 allow predicating resulting vector writes.

FIG. 13 is a block diagram of a processor 1300 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention. The solid-lined boxes in FIG. 13 illustrate a processor 1300 with a single core 1302A, a system agent 1310, and a set of one or more bus controller units 1316, while the optional addition of the dashed-lined boxes illustrates an alternative processor 1300 with multiple cores 1302A-N, a set of one or more integrated memory controller unit(s) 1314 in the system agent unit 1310, and special-purpose logic 1308.
Thus, different implementations of the processor 1300 may include: 1) a CPU with the special-purpose logic 1308 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 1302A-N being one or more general-purpose cores (e.g., general-purpose in-order cores, general-purpose out-of-order cores, or a combination of the two); 2) a coprocessor with the cores 1302A-N being a large number of special-purpose cores intended primarily for graphics and/or scientific (throughput) computing; and 3) a coprocessor with the cores 1302A-N being a large number of general-purpose in-order cores. Thus, the processor 1300 may be a general-purpose processor, a coprocessor, or a special-purpose processor, such as, for example, a network or communication processor, a compression engine, a graphics processor, a GPGPU (general-purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), an embedded processor, or the like. The processor may be implemented on one or more chips. The processor 1300 may be a part of, and/or may be implemented on, one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 1306, and external memory (not shown) coupled to the set of integrated memory controller units 1314. The set of shared cache units 1306 may include one or more mid-level caches (e.g., level 2 (L2), level 3 (L3), level 4 (L4)), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring-based interconnect unit 1312 interconnects the integrated graphics logic 1308 (the integrated graphics logic 1308 is an example of, and is also referred to herein as, special-purpose logic), the set of shared cache units 1306, and the system agent unit 1310/integrated memory controller unit(s) 1314, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 1306 and the cores 1302A-N.

In some embodiments, one or more of the cores 1302A-N are capable of multithreading. The system agent 1310 includes those components that coordinate and operate the cores 1302A-N. The system agent unit 1310 may include, for example, a power control unit (PCU) and a display unit. The PCU may be, or may include, the logic and components needed for regulating the power state of the cores 1302A-N and the integrated graphics logic 1308. The display unit is for driving one or more externally connected displays.

The cores 1302A-N may be homogeneous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 1302A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

Exemplary computer architectures

FIGS. 14-17 are block diagrams of exemplary computer architectures.
Other system designs and configurations known in the art for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, microcontrollers, cell phones, portable media players, handheld devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating the processors and/or other execution logic as disclosed herein are generally suitable.

Referring now to FIG. 14, shown is a block diagram of a system 1400 in accordance with an embodiment of the invention. The system 1400 may include one or more processors 1410, 1415, which are coupled to a controller hub 1420. In one embodiment, the controller hub 1420 includes a graphics memory controller hub (GMCH) 1490 and an input/output hub (IOH) 1450 (which may be on separate chips); the GMCH 1490 includes memory and graphics controllers to which the memory 1440 and a coprocessor 1445 are coupled; the IOH 1450 couples input/output (I/O) devices 1460 to the GMCH 1490. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 1440 and the coprocessor 1445 are coupled directly to the processor 1410, and the controller hub 1420 is in a single chip with the IOH 1450.

The optional nature of the additional processor 1415 is denoted in FIG. 14 with broken lines. Each processor 1410, 1415 may include one or more of the processing cores described herein and may be some version of the processor 1300.

The memory 1440 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 1420 communicates with the processor(s) 1410, 1415 via a multi-drop bus such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or a similar connection 1495.

In one embodiment, the coprocessor 1445 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like. In one embodiment, the controller hub 1420 may include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 1410 and 1415 in terms of a spectrum of metrics of merit, including architectural, microarchitectural, thermal, power consumption characteristics, and the like.

In one embodiment, the processor 1410 executes instructions that control data processing operations of a general type. Coprocessor instructions may be embedded within these instructions. The processor 1410 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1445. Accordingly, the processor 1410 issues these coprocessor instructions (or control signals representing coprocessor instructions), on a coprocessor bus or other interconnect, to the coprocessor 1445. The coprocessor(s) 1445 accept and execute the received coprocessor instructions.

Referring now to FIG. 15, shown is a block diagram of a first more specific exemplary system 1500 in accordance with an embodiment of the invention. As shown in FIG. 15, the multiprocessor system 1500 is a point-to-point interconnect system and includes a first processor 1570 and a second processor 1580 coupled via a point-to-point interconnect 1550.
Each of the processors 1570 and 1580 may be some version of the processor 1300. In one embodiment of the invention, the processors 1570 and 1580 are respectively the processors 1410 and 1415, while the coprocessor 1538 is the coprocessor 1445. In another embodiment, the processors 1570 and 1580 are respectively the processor 1410 and the coprocessor 1445.

The processors 1570 and 1580 are shown including integrated memory controller (IMC) units 1572 and 1582, respectively. The processor 1570 also includes, as part of its bus controller units, point-to-point (P-P) interfaces 1576 and 1578; similarly, the second processor 1580 includes P-P interfaces 1586 and 1588. The processors 1570, 1580 may exchange information via a point-to-point (P-P) interface 1550 using P-P interface circuits 1578, 1588. As shown in FIG. 15, the IMCs 1572 and 1582 couple the processors to respective memories, namely a memory 1532 and a memory 1534, which may be portions of main memory locally attached to the respective processors.

The processors 1570, 1580 may each exchange information with a chipset 1590 via individual P-P interfaces 1552, 1554 using point-to-point interface circuits 1576, 1594, 1586, and 1598. The chipset 1590 may optionally exchange information with the coprocessor 1538 via a high-performance interface 1539. In one embodiment, the coprocessor 1538 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, a compression and/or decompression engine, a graphics processor, a GPGPU, an embedded processor, or the like.

A shared cache (not shown) may be included in either processor, or outside of both processors yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

The chipset 1590 may be coupled to a first bus 1516 via an interface 1596. In one embodiment, the first bus 1516 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third-generation I/O interconnect bus, although the scope of the present disclosure is not so limited.

As shown in FIG. 15, various I/O devices 1514 may be coupled to the first bus 1516, along with a bus bridge 1518 that couples the first bus 1516 to a second bus 1520. In one embodiment, one or more additional processor(s) 1515 (e.g., coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processors) are coupled to the first bus 1516. In one embodiment, the second bus 1520 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 1520 including, for example, a keyboard and/or mouse 1522, communication devices 1527, and a storage unit 1528 such as a disk drive or other mass storage device, which may include instructions/code and data 1530, in one embodiment. Further, an audio I/O 1524 may be coupled to the second bus 1520. Note that other architectures are possible. For example, instead of the point-to-point architecture of FIG. 15, a system may implement a multi-drop bus or other such architecture.
For example, instead of the point-to-point architecture of FIG. 15, a system may implement a multi-drop bus or another such architecture. Referring now to FIG. 16, shown is a block diagram of a second more specific exemplary system 1600 in accordance with an embodiment of the present invention. Like elements in FIGS. 15 and 16 bear like reference numerals, and certain aspects of FIG. 15 have been omitted from FIG. 16 in order to avoid obscuring other aspects of FIG. 16. FIG. 16 illustrates that the processors 1570, 1580 may include integrated memory and I/O control logic ("CL") 1572 and 1582, respectively. Thus, the CL 1572, 1582 include integrated memory controller units and include I/O control logic. FIG. 16 illustrates that not only are the memories 1532, 1534 coupled to the CL 1572, 1582, but also that I/O devices 1614 are coupled to the control logic 1572, 1582. Legacy I/O devices 1615 are coupled to the chipset 1590. Referring now to FIG. 17, shown is a block diagram of a SoC 1700 in accordance with an embodiment of the present invention. Like elements in FIG. 13 bear like reference numerals. Also, dashed-line boxes are optional features on more advanced SoCs. In FIG. 17, the interconnect unit(s) 1702 is coupled to: an application processor 1710, which includes a set of one or more cores 1302A-N (including cache units 1304A-N) and shared cache unit(s) 1306; a system agent unit 1310; bus controller unit(s) 1316; integrated memory controller unit(s) 1314; a set of one or more coprocessors 1720, which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1730; a direct memory access (DMA) unit 1732; and a display unit 1740 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 1720 include a special-purpose processor, such as a network or communication processor, a compression engine, a GPGPU, a high-throughput MIC processor, an embedded processor, or the like. Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the present invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Program code (such as the code 1530 illustrated in FIG. 15) may be applied to input instructions to perform the functions described herein and to generate output information. The output information may be applied to one or more output devices in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor. The program code may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language.
In any case, the language may be a compiled or an interpreted language. One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represent various logic within the processor and which, when read by a machine, cause the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible machine-readable medium and supplied to various customers or manufacturing facilities to be loaded into the fabrication machines that actually make the logic or processor. Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks; any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, and electrically erasable programmable read-only memories (EEPROMs); phase change memory (PCM); magnetic or optical cards; or any other type of media suitable for storing electronic instructions. Accordingly, embodiments of the present invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as hardware description language (HDL), which defines the structures, circuits, apparatuses, processors, and/or system features described herein. Such embodiments may also be referred to as program products. EMULATION (INCLUDING BINARY TRANSLATION, CODE MORPHING, ETC.) In some cases, an instruction converter may be used to convert instructions from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction into one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on the processor, off the processor, or part on and part off the processor. FIG. 18 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set, according to embodiments of the present invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although the instruction converter may alternatively be implemented in software, firmware, hardware, or various combinations thereof. FIG. 18 shows that a program in a high-level language 1802 may be compiled using an x86 compiler 1804 to generate x86 binary code 1806 that may be natively executed by a processor 1816 having at least one x86 instruction set core.
The processor 1816 having at least one x86 instruction set core represents any processor that can perform substantially the same functions as an Intel processor having at least one x86 instruction set core, thereby achieving substantially the same result as such an Intel processor, by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor having at least one x86 instruction set core. The x86 compiler 1804 represents a compiler operable to generate x86 binary code 1806 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor 1816 having at least one x86 instruction set core. Similarly, FIG. 18 shows that a program in the high-level language 1802 may be compiled using an alternative instruction set compiler 1808 to generate alternative instruction set binary code 1810 that may be natively executed by a processor 1814 without at least one x86 instruction set core (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, California, and/or the ARM instruction set of ARM Holdings of Sunnyvale, California). The instruction converter 1812 is used to convert the x86 binary code 1806 into code that may be natively executed by the processor 1814 without an x86 instruction set core. This converted code is not likely to be the same as the alternative instruction set binary code 1810, because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1812 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation, or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1806.
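To make the one-to-many conversion idea concrete, here is a toy Python sketch of a table-driven instruction converter. The opcode names and the mapping table are invented for the example; a real converter (static or dynamic binary translation) must additionally handle register allocation, memory ordering, and side effects, none of which are modeled here.

```python
# Invented example opcodes; not any real ISA encoding.
TRANSLATION_TABLE = {
    "x86.add":   ["alt.add"],
    "x86.push":  ["alt.sub_sp", "alt.store"],    # one source op, two target ops
    "x86.cpuid": ["alt.call_emulation_helper"],  # no direct equivalent: emulate
}

def convert(source_binary):
    """Map each source-instruction-set operation to one or more
    target-instruction-set operations, in the spirit of the instruction
    converter described above."""
    converted = []
    for op in source_binary:
        converted.extend(TRANSLATION_TABLE[op])
    return converted

print(convert(["x86.push", "x86.add"]))
# ['alt.sub_sp', 'alt.store', 'alt.add']
```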
FURTHER EXAMPLES Example 1 provides an exemplary processor, including: fetch and decode circuitry to fetch and decode a variable format, variable sparse matrix multiplication (VFVSMM) instruction having fields to specify locations of A, B, and C matrices having (M×K), (K×N), and (M×N) elements, respectively; and execution circuitry, operating in a dense-dense mode in response to the decoded VFVSMM instruction, to route each row of the specified A matrix, staggered with respect to subsequent rows, into a corresponding row of a processing array having (M×N) processing units, and to route each column of the specified B matrix, staggered with respect to subsequent columns, into a corresponding column of the processing array; wherein each of the (M×N) processing units is to generate K products of matching A-matrix and B-matrix elements received from the specified A and B matrices (a match existing when the B-matrix element has a row address equal to the column address of the A-matrix element), and to accumulate each generated product with a corresponding element of the specified C matrix, the corresponding C-matrix element having the same relative position within the C matrix as the position of the processing unit within the processing array. Example 2 includes the substance of the exemplary processor of Example 1, wherein the execution circuitry is to route each row of the specified A matrix and each column of the specified B matrix at a rate of one element per clock cycle, staggering each subsequent row and each subsequent column by one clock cycle, and wherein each of the (M×N) processing units is to infer the column and row addresses of each received A-matrix and B-matrix element based on the clock cycle and on the relative position of the processing unit within the processing array.
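As a purely functional illustration of the dense-dense computation of Examples 1 and 2 (not any claimed implementation), the following Python sketch models the (M×N) processing array behaviorally: the unit at position (m, n) receives row m of A and column n of B, forms the K products of matching elements, and accumulates them into its C element. The one-cycle staggering of rows and columns and the per-cycle address inference are timing details that this sketch deliberately does not model.

```python
def vfvsmm_dense_dense(A, B, C):
    """C += A @ B, computed as each of the (M x N) processing units would:
    unit (m, n) receives row m of A and column n of B, multiplies the K
    matching element pairs, and accumulates into its C element."""
    M, K, N = len(A), len(A[0]), len(B[0])
    for m in range(M):                 # row m of A -> array row m
        for n in range(N):             # column n of B -> array column n
            acc = C[m][n]
            for k in range(K):         # K products of matching elements
                acc += A[m][k] * B[k][n]
            C[m][n] = acc
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(vfvsmm_dense_dense(A, B, [[0, 0], [0, 0]]))  # [[19, 22], [43, 50]]
```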
Example 3 includes the substance of the exemplary processor of Example 1, wherein the specified B matrix includes only the non-zero elements of a sparse matrix logically comprising (K×N) elements, each element including fields specifying its logical row and column addresses, wherein the execution circuitry, operating in a dense-sparse mode in response to the decoded VFVSMM instruction, routes each row of the specified A matrix, staggered with respect to subsequent rows, into a corresponding row of the processing array, and routes each column of the specified B matrix into a corresponding column of the processing array, and wherein each of the (M×N) processing units, operating in the dense-sparse mode, is to determine whether an address match exists between the specified logical row address of a B-matrix element and the column address of an A-matrix element, to generate a product when a match exists, and, when no match exists, to hold the A-matrix element and pass the B-matrix element when the column address of the A-matrix element is greater than the specified logical row address of the B-matrix element, and otherwise to hold the B-matrix element and pass the A-matrix element. Example 4 includes the substance of the exemplary processor of Example 1, wherein the specified A and B matrices are sparse matrices including only the non-zero elements of logical (M×K) and (K×N) matrices, respectively, each element including fields specifying its logical row and column addresses, wherein the execution circuitry, operating in a sparse-sparse mode in response to the decoded VFVSMM instruction, routes each row of the specified A matrix into a corresponding row of the processing array and routes each column of the specified B matrix into a corresponding column of the processing array, and wherein each of the (M×N) processing units, operating in the sparse-sparse mode, is to determine whether a match exists between the specified logical column address of an A-matrix element and the specified logical row address of a B-matrix element, to generate a product when a match exists, and, when no match exists, to hold the A-matrix element and pass the B-matrix element when the specified logical column address of the A-matrix element is greater than the specified logical row address of the B-matrix element, and otherwise to hold the B-matrix element and pass the A-matrix element. Example 5 includes the substance of the exemplary processor of Example 1, wherein, when no match exists, each of the (M×N) processing units is further to generate and send a hold request in the upstream direction of the held data element, and to generate and send a hold notification in the downstream direction of the held data element. Example 6 includes the substance of the exemplary processor of Example 1, wherein the execution circuitry, when passing a data element, broadcasts the data element downstream to two or more processing units. Example 7 includes the substance of the exemplary processor of Example 1, wherein the matrix multiplication instruction further specifies a data element size for each data element of the specified A, B, and C matrices, the data element size being specified either as an instruction operand or as part of the opcode. Example 8 includes the substance of the exemplary processor of Example 1, wherein the matrix multiplication instruction further specifies a data format for each data element of the specified A, B, and C matrices, the data format being one of integer, half-precision floating point, single-precision floating point, double-precision floating point, and a custom format. Example 9 includes the substance of the exemplary processor of Example 1, wherein the processing array effectively includes (M×N) processing units by iteratively using a smaller array of processing units over multiple clock cycles to perform the same processing as an actual physical array of (M×N) processing units. Example 10 includes the substance of the exemplary processor of Example 1, wherein the processing array effectively includes (M×N) processing units by cascading multiple smaller arrays of processing units to perform the same processing as an actual physical array of (M×N) processing units.
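The match/hold/pass rule shared by the dense-sparse and sparse-sparse modes (Examples 3 and 4 above, and their method and medium counterparts below) behaves like a merge join of two address-ordered streams. The Python sketch below shows the per-processing-unit logic under the assumption that each stream element is an (address, value) pair sorted by address; holding an element corresponds to not advancing its stream, and passing an element corresponds to advancing the other stream.

```python
def pu_match_hold_pass(a_stream, b_stream):
    """One processing unit's accumulation. a_stream holds (column address,
    value) pairs of A elements; b_stream holds (logical row address, value)
    pairs of B elements; both streams are sorted by address."""
    acc = 0
    i = j = 0
    while i < len(a_stream) and j < len(b_stream):
        a_addr, a_val = a_stream[i]
        b_addr, b_val = b_stream[j]
        if a_addr == b_addr:      # address match: generate a product
            acc += a_val * b_val
            i += 1
            j += 1
        elif a_addr > b_addr:     # hold the A element, pass the B element
            j += 1
        else:                     # hold the B element, pass the A element
            i += 1
    return acc

# Sparse dot product: only addresses 2 and 3 match.
print(pu_match_hold_pass([(0, 1), (2, 2), (3, 3)],
                         [(2, 10), (3, 20), (5, 30)]))  # 2*10 + 3*20 = 80
```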
Example 11 provides an exemplary method, including: fetching and decoding, using fetch and decode circuitry, a variable format, variable sparse matrix multiplication (VFVSMM) instruction having fields to specify locations of A, B, and C matrices having (M×K), (K×N), and (M×N) elements, respectively; and responding to the decoded VFVSMM instruction, using execution circuitry operating in a dense-dense mode, by routing each row of the specified A matrix, staggered with respect to subsequent rows, into a corresponding row of a processing array having (M×N) processing units, and routing each column of the specified B matrix, staggered with respect to subsequent columns, into a corresponding column of the processing array; wherein each of the (M×N) processing units generates K products of matching A-matrix and B-matrix elements received from the specified A and B matrices (a match existing when the B-matrix element has a row address equal to the column address of the A-matrix element), and accumulates each generated product with a corresponding element of the specified C matrix, the corresponding C-matrix element having the same relative position within the C matrix as the position of the processing unit within the processing array. Example 12 includes the substance of the exemplary method of Example 11, wherein the execution circuitry routes each row of the specified A matrix and each column of the specified B matrix at a rate of one element per clock cycle, staggering each subsequent row and each subsequent column by one clock cycle, and wherein each of the (M×N) processing units infers the column and row addresses of each received A-matrix and B-matrix element based on the clock cycle and on the relative position of the processing unit within the processing array. Example 13 includes the substance of the exemplary method of Example 11, wherein the specified B matrix includes only the non-zero elements of a sparse matrix logically comprising (K×N) elements, each element including fields specifying its logical row and column addresses, wherein the execution circuitry, operating in a dense-sparse mode in response to the decoded VFVSMM instruction, routes each row of the specified A matrix, staggered with respect to subsequent rows, into a corresponding row of the processing array, and routes each column of the specified B matrix into a corresponding column of the processing array, and wherein each of the (M×N) processing units, operating in the dense-sparse mode, determines whether an address match exists between the specified logical row address of a B-matrix element and the column address of an A-matrix element, generates a product when a match exists, and, when no match exists, holds the A-matrix element and passes the B-matrix element when the inferred column address of the A-matrix element is greater than the specified logical row address of the B-matrix element, and otherwise holds the B-matrix element and passes the A-matrix element. Example 14 includes the substance of the exemplary method of Example 11, wherein the specified A and B matrices are sparse matrices including only the non-zero elements of logical (M×K) and (K×N) matrices, respectively, each element including fields specifying its logical row and column addresses, wherein the execution circuitry, operating in a sparse-sparse mode in response to the decoded VFVSMM instruction, routes each row of the specified A matrix into a corresponding row of the processing array and routes each column of the specified B matrix into a corresponding column of the processing array, and wherein each of the (M×N) processing units, operating in the sparse-sparse mode, determines whether a match exists between the specified logical column address of an A-matrix element and the specified logical row address of a B-matrix element, generates a product when a match exists, and, when no match exists, holds the A-matrix element and passes the B-matrix element when the specified logical column address of the A-matrix element is greater than the specified logical row address of the B-matrix element, and otherwise holds the B-matrix element and passes the A-matrix element.
Example 15 includes the substance of the exemplary method of Example 11, wherein, when no match exists, each of the (M×N) processing units further generates and sends a hold request in the upstream direction of the held data element, and generates and sends a hold notification in the downstream direction of the held data element. Example 16 includes the substance of the exemplary method of Example 11, wherein the execution circuitry, when passing a data element, broadcasts the data element downstream to two or more processing units. Example 17 includes the substance of the exemplary method of Example 11, wherein the matrix multiplication instruction further specifies a data element size for each data element of the specified A, B, and C matrices, the data element size being specified either as an instruction operand or as part of the opcode. Example 18 includes the substance of the exemplary method of Example 11, wherein the matrix multiplication instruction further specifies a data format for each data element of the specified A, B, and C matrices, the data format being one of integer, half-precision floating point, single-precision floating point, double-precision floating point, and a custom format. Example 19 includes the substance of the exemplary method of Example 11, wherein the processing array effectively includes (M×N) processing units by iteratively using a smaller array of processing units over multiple clock cycles to perform the same processing as an actual physical array of (M×N) processing units. Example 20 includes the substance of the exemplary method of Example 11, wherein the processing array effectively includes (M×N) processing units by cascading multiple smaller arrays of processing units to perform the same processing as an actual physical array of (M×N) processing units.
Example 21 provides an exemplary non-transitory machine-readable medium containing instructions that, when executed by a processor, cause the processor to respond by: fetching and decoding, using fetch and decode circuitry, a variable format, variable sparse matrix multiplication (VFVSMM) instruction having fields to specify locations of A, B, and C matrices having (M×K), (K×N), and (M×N) elements, respectively; and responding to the decoded VFVSMM instruction, using execution circuitry operating in a dense-dense mode, by routing each row of the specified A matrix, staggered with respect to subsequent rows, into a corresponding row of a processing array having (M×N) processing units, and routing each column of the specified B matrix, staggered with respect to subsequent columns, into a corresponding column of the processing array; wherein each of the (M×N) processing units generates K products of matching A-matrix and B-matrix elements received from the specified A and B matrices (a match existing when the B-matrix element has a row address equal to the column address of the A-matrix element), and accumulates each generated product with a corresponding element of the specified C matrix, the corresponding C-matrix element having the same relative position within the C matrix as the position of the processing unit within the processing array. Example 22 includes the substance of the exemplary non-transitory machine-readable medium of Example 21, wherein the execution circuitry routes each row of the specified A matrix and each column of the specified B matrix at a rate of one element per clock cycle, staggering each subsequent row and each subsequent column by one clock cycle, and wherein each of the (M×N) processing units infers the column and row addresses of each received A-matrix and B-matrix element based on the clock cycle and on the relative position of the processing unit within the processing array. Example 23 includes the substance of the exemplary non-transitory machine-readable medium of Example 21, wherein the specified B matrix includes only the non-zero elements of a sparse matrix logically comprising (K×N) elements, each element including fields specifying its logical row and column addresses, wherein the execution circuitry, operating in a dense-sparse mode in response to the decoded VFVSMM instruction, routes each row of the specified A matrix, staggered with respect to subsequent rows, into a corresponding row of the processing array, and routes each column of the specified B matrix into a corresponding column of the processing array, and wherein each of the (M×N) processing units, operating in the dense-sparse mode, determines whether an address match exists between the specified logical row address of a B-matrix element and the column address of an A-matrix element, generates a product when a match exists, and, when no match exists, holds the A-matrix element and passes the B-matrix element when the column address of the A-matrix element is greater than the specified logical row address of the B-matrix element, and otherwise holds the B-matrix element and passes the A-matrix element. Example 24 includes the substance of the exemplary non-transitory machine-readable medium of Example 21, wherein the specified A and B matrices are sparse matrices including only the non-zero elements of logical (M×K) and (K×N) matrices, respectively, each element including fields specifying its logical row and column addresses, wherein the execution circuitry, operating in a sparse-sparse mode in response to the decoded VFVSMM instruction, routes each row of the specified A matrix into a corresponding row of the processing array and routes each column of the specified B matrix into a corresponding column of the processing array, and wherein each of the (M×N) processing units, operating in the sparse-sparse mode, determines whether a match exists between the specified logical column address of an A-matrix element and the specified logical row address of a B-matrix element, generates a product when a match exists, and, when no match exists, holds the A-matrix element and passes the B-matrix element when the specified logical column address of the A-matrix element is greater than the specified logical row address of the B-matrix element, and otherwise holds the B-matrix element and passes the A-matrix element.
Example 25 includes the substance of the exemplary non-transitory machine-readable medium of Example 21, wherein each of the (M×N) processing units is further to, when no match exists, generate and send a hold request in the upstream direction of the held data element, and generate and send a hold notification in the downstream direction of the held data element. |
A host bridge (404) includes a memory controller (1304) and a security check unit (418). The memory controller (1304) couples to a memory (406) storing data arranged within multiple memory pages. The memory controller (1304) receives memory access signals and accesses the memory (406). The security check unit (418) receives the memory access signals, including a physical address within a target memory page. The security check unit (418) uses the physical address to access one or more security attribute data structures to obtain a security attribute of the target memory page. The security check unit (418) provides the memory access signals to the memory controller (1304) dependent upon the security attribute of the target memory page. |
CLAIMS 1. A host bridge (404), comprising: a memory controller (1304) adapted for coupling to a memory (406) storing data arranged within a plurality of memory pages, wherein the memory controller (1304) is coupled to receive memory access signals, and wherein the memory controller (1304) is configured to respond to the memory access signals by accessing the memory (406); and a security check unit (418) coupled to receive the memory access signals, wherein the memory access signals convey a physical address within a target memory page, and wherein the security check unit (418) is configured to use the physical address to access at least one security attribute data structure located in the memory (406) to obtain a security attribute of the target memory page, and wherein the security check unit (418) is configured to provide the memory access signals to the memory controller (1304) dependent upon the security attribute of the target memory page. 2. The host bridge (404) as recited in claim 1, wherein the at least one security attribute data structure comprises a security attribute table directory (904) and at least one security attribute table (906). 3. The host bridge (404) as recited in claim 2, wherein the security attribute table directory (904) comprises a plurality of entries (910), wherein each entry (910) of the security attribute table directory (904) includes a present bit and a security attribute table (906) base address field, wherein the present bit indicates whether or not a security attribute table (906) corresponding to the security attribute table directory entry (910) is present in the memory (406), and wherein the security attribute table base address field is reserved for a base address of the security attribute table (906) corresponding to the security attribute table directory entry (910). 4. The host bridge (404) as recited in claim 1, wherein the security attribute of the target memory page comprises a secure page (SP) bit, and wherein the SP bit indicates whether or not the target memory page is a secure page. 5. The host bridge (404) as recited in claim 4, wherein when the SP bit indicates the target memory page is a secure page, the memory access is unauthorized, and the security check unit (418) does not provide the memory access signals to the memory controller (1304). 6. The host bridge (404) as recited in claim 5, wherein when the SP bit indicates the target memory page is a secure page and the memory access signals indicate that a memory access type is a read access, the memory access is an unauthorized read access, and the security check unit (418) is configured to respond to the unauthorized read access by providing bogus read data. 7. The host bridge (404) as recited in claim 5, wherein when the SP bit indicates the target memory page is a secure page and the memory access signals indicate that a memory access type is a write access, the memory access is an unauthorized write access, and the security check unit (418) is configured to respond to the unauthorized write access by discarding write data conveyed by the memory access signals. 8.
A computer system (400) for managing memory (406), CHARACTERIZED IN THAT, the computer system (400) comprises: a memory (406) storing data arranged within a plurality of memory pages; a device (414A-D) operably coupled to the memory (406) and configurable to produce memory access signals; and a host bridge (404) operably coupled to the device (414A-D) and the memory (406), comprising: a memory controller (1304) adapted for coupling to the memory (406) storing data arranged within a plurality of memory pages, wherein the memory controller (1304) is coupled to receive memory access signals, and wherein the memory controller (1304) is configured to respond to the memory access signals by accessing the memory (406); and a security check unit (418) coupled to receive the memory access signals, wherein the memory access signals convey a physical address within a target memory page, and wherein the security check unit (418) is configured to use the physical address to access at least one security attribute data structure located in the memory (406) to obtain a security attribute of the target memory page, and wherein the security check unit (418) is configured to provide the memory access signals to the memory controller (1304) dependent upon the security attribute of the target memory page. 9. A method for providing access security for a memory (406) used to store data arranged within a plurality of memory pages, the method comprising: receiving memory access signals, wherein the memory access signals convey a physical address within a target memory page; using the physical address to access at least one security attribute data structure located in the memory (406) to obtain a security attribute of the target memory page; and accessing the memory (406) using the memory access signals dependent upon the security attribute of the target memory page. 10. The method as recited in claim 9, wherein using the physical address to access at least one security attribute data structure comprises using the physical address to access at least one security attribute data structure that comprises a security attribute table directory (904) and at least one security attribute table (906). |
SYSTEM AND METHOD FOR HANDLING DEVICE ACCESSES TO A MEMORY PROVIDING INCREASED MEMORY ACCESS SECURITY TECHNICAL FIELD This invention relates generally to memory management systems and methods, and, more particularly, to memory management systems and methods that provide protection for data stored within a memory. BACKGROUND ART A typical computer system includes a memory hierarchy to obtain a relatively high level of performance at relatively low cost. Instructions of several different software programs are typically stored on a relatively large but slow non-volatile storage unit (e.g., a disk drive unit). When a user selects one of the programs for execution, the instructions of the selected program are copied into a main memory unit, and a central processing unit (CPU) obtains the instructions of the selected program from the main memory unit. A well-known virtual memory management technique allows the CPU to access data structures larger in size than that of the main memory unit by storing only a portion of the data structures within the main memory unit at any given time. Remainders of the data structures are stored within the relatively large but slow non-volatile storage unit, and are copied into the main memory unit only when needed. Virtual memory is typically implemented by dividing an address space of the CPU into multiple blocks called page frames or "pages." Only data corresponding to a portion of the pages is stored within the main memory unit at any given time. When the CPU generates an address within a given page, and a copy of that page is not located within the main memory unit, the required page of data is copied from the relatively large but slow non-volatile storage unit into the main memory unit. In the process, another page of data may be copied from the main memory unit to the non-volatile storage unit to make room for the required page. The popular 80x86 (x86) processor architecture includes specialized hardware elements to support a protected virtual address mode (i.e., a protected mode). Figs. 1-3 will now be used to describe how an x86 processor implements both virtual memory and memory protection features. Fig. 1 is a diagram of a well-known linear-to-physical address translation mechanism 100 of the x86 processor architecture. Address translation mechanism 100 is embodied within an x86 processor, and involves a linear address 102 produced within the x86 processor, a page table directory (i.e., a page directory) 104, multiple page tables including a page table 106, multiple page frames including a page frame 108, and a control register 3 (CR3) 110. The page directory 104 and the multiple page tables are paged memory data structures created and maintained by operating system software (i.e., an operating system). The page directory 104 is always located within a memory (e.g., a main memory unit). For simplicity, the page table 106 and the page frame 108 will also be assumed to reside in the memory. As indicated in Fig. 1, linear address 102 is divided into three portions to accomplish the linear-to-physical address translation. The highest ordered bits of CR3 110 are used to store a page directory base register. The page directory base register is a base address of a memory page containing the page directory 104. The page directory 104 includes multiple page directory entries, including a page directory entry 112.
An upper "directory index"portion of linear address 102, including the highest ordered or most significant bits of the linear address 102, is used as an index into the page directory 104. The page directory entry 112 is selected <Desc/Clms Page number 2> from within the page directory 104 using the page directory base register stored in CR3 110 and the upper "directory index"portion of the linear address 102. Fig. 2 is a diagram of a page directory entry format 200 of the x86 processor architecture. As indicated in Fig. 2, the highest ordered (i. e. , most significant) bits of a given page directory entry contain a page table base address, where the page table base address is a base address of a memory page containing a corresponding page table. The page table base address of the page directory entry 112 is used to select the corresponding page table 106. Referring back to Fig. 1, the page table 106 includes multiple page table entries, including a page table entry 114. A middle"table index"portion of the linear address 102 is used as an index into the page table 106, thereby selecting the page table entry 114. Fig. 3 is a diagram of a page table entry format 300 of the x86 processor architecture. As indicated in Fig. 3, the highest ordered (i. e. , most significant) bits of a given page table entry contain a page frame base address, where the page frame base address is a base address of a corresponding page frame. Referring back to Fig. 1, the page frame base address of the page table entry 114 is used to select the corresponding page frame 108. The Page frame 108 includes multiple memory locations. A lower or"offset" portion of the linear address 102 is used as an index into the page frame 108. When combined, the page frame base address of the page table entry 114 and the offset portion of the linear address 102 produce the physical address corresponding to the linear address 102, and indicate a memory location 116 within the page frame 108. The memory location 116 has the physical address resulting from the linear-to-physical address translation. Regarding the memory protection features, the page directory entry format 200 of Fig. 2 and page table entry format 300 of Fig. 3 include a user/supervisor (U/S) bit and a read/write (R/W) bit. The contents of the U/S and R/W bits are used by the operating system to protect corresponding page frames (i. e. , memory pages) from unauthorized access. U/S=0 is used to denote operating system memory pages, and corresponds to a "supervisor"level of the operating system. The supervisor level of the operating system corresponds to current privilege level 0 (CPLO) of software programs and routines executed by the x86 processor. U/S > 0 (i. e., U/S=1, 2, or 3) is used to indicate user memory pages, and corresponds to a"user"level of the operating system. The R/W bit is used to indicate types of accesses allowed to the corresponding memory page. R/W=0 indicates that only read accesses are allowed to the corresponding memory page (i. e. , the corresponding memory page is"read-only"). R/W=1 indicates that both read and write accesses are allowed to the corresponding memory page (i. e. , the corresponding memory page is"read-write"). During the linear-to-physical address translation operation of Fig. 1, the contents of the U/S bits of the page directory entry 112 and the page table entry 114, corresponding to the page frame 108, are logically ANDed determine if the access to the page frame 108 is authorized. 
Regarding the memory protection features, the page directory entry format 200 of Fig. 2 and the page table entry format 300 of Fig. 3 include a user/supervisor (U/S) bit and a read/write (R/W) bit. The contents of the U/S and R/W bits are used by the operating system to protect corresponding page frames (i.e., memory pages) from unauthorized access. U/S=0 is used to denote operating system memory pages, and corresponds to a "supervisor" level of the operating system. The supervisor level of the operating system corresponds to current privilege level 0 (CPL0) of software programs and routines executed by the x86 processor. U/S > 0 (i.e., U/S=1, 2, or 3) is used to indicate user memory pages, and corresponds to a "user" level of the operating system. The R/W bit is used to indicate the types of accesses allowed to the corresponding memory page. R/W=0 indicates that only read accesses are allowed to the corresponding memory page (i.e., the corresponding memory page is "read-only"). R/W=1 indicates that both read and write accesses are allowed to the corresponding memory page (i.e., the corresponding memory page is "read-write"). During the linear-to-physical address translation operation of Fig. 1, the contents of the U/S bits of the page directory entry 112 and the page table entry 114, corresponding to the page frame 108, are logically ANDed to determine if the access to the page frame 108 is authorized. Similarly, the contents of the R/W bits of the page directory entry 112 and the page table entry 114 are logically ANDed to determine if the access to the page frame 108 is authorized. If the logical combinations of the U/S and R/W bits indicate that the access to the page frame 108 is authorized, the memory location 116 is accessed using the physical address. On the other hand, if the logical combinations of the U/S and R/W bits indicate the access to the page frame 108 is not authorized, the memory location 116 is not accessed, and a protection fault indication is signaled. Unfortunately, the above described memory protection mechanisms of the x86 processor architecture are not sufficient to protect data stored in the memory. For example, any software program or routine executing at the supervisor level (e.g., having a CPL of 0) can access any portion of the memory, and can modify (i.e., write to) any portion of the memory that is not marked "read-only" (R/W=0). In addition, by virtue of executing at the supervisor level, the software program or routine can change the attributes (i.e., the U/S and R/W bits) of any portion of the memory. The software program or routine can thus change any portion of the memory marked "read-only" to "read-write" (R/W=1), and then proceed to modify that portion of the memory. The protection mechanisms of the x86 processor architecture are also inadequate to prevent errant or malicious accesses to the memory by hardware devices operably coupled to the memory. It is true that portions of the memory marked "read-only" cannot be modified by write accesses initiated by hardware devices (without the attributes of those portions of the memory first being changed as described above). It is also true that software programs or routines (e.g., device drivers) handling data transfers between hardware devices and the memory typically execute at the user level (e.g., CPL3), and are not permitted access to portions of the memory marked as supervisor level (U/S=0). However, the protection mechanisms of the x86 processor architecture cover only device accesses to the memory performed as a result of instruction execution (i.e., programmed input/output). A device driver can program a hardware device having bus mastering or DMA capability to transfer data from the device into any portion of the memory accessible by the hardware device. For example, it is relatively easy to program a floppy disk controller to transfer data from a floppy disk directly into a portion of the memory used to store the operating system. DISCLOSURE OF INVENTION A host bridge is described including a memory controller and a security check unit. The memory controller is adapted for coupling to a memory storing data arranged within multiple memory pages. The memory controller receives memory access signals (e.g., during a memory access), and responds to the memory access signals by accessing the memory. The security check unit receives the memory access signals, wherein the memory access signals convey a physical address within a target memory page. The security check unit uses the physical address to access one or more security attribute data structures located in the memory to obtain a security attribute of the target memory page. The security check unit provides the memory access signals to the memory controller dependent upon the security attribute of the target memory page.
The one or more security attribute data structures may include a security attribute table directory and one or more security attribute tables. The security attribute table directory may include multiple entries. Each entry of the security attribute table directory may include a present bit and a security attribute table base address field. The present bit may indicate whether or not a security attribute table corresponding to the security attribute table directory entry is present in the memory. The security attribute table base address field may be reserved for a base address of the security attribute table corresponding to the security attribute table directory entry. The one or more security attribute tables may include multiple entries. Each entry of the one or more security attribute tables may include a secure page (SP) bit indicating whether or not a corresponding memory page is a secure page. The memory access signals may be produced by a device hardware unit coupled to the host bridge. The security attribute of the target memory page may include a secure page (SP) bit indicating whether or not the target memory page is a secure page. When the secure page (SP) bit indicates the target memory page is not a secure page, the security check unit may provide the memory access signals to the memory controller. When the secure page (SP) bit indicates the target memory page is a secure page, the memory access may be unauthorized, and the security check unit may not provide the memory access signals to the memory controller. The memory access signals may indicate a memory access type. For example, the memory access type may be a read access or a write access. When the secure page (SP) bit indicates the target memory page is a secure page and the memory access signals indicate the memory access type is a read access, the memory access may be an unauthorized read access. In this situation, the security check unit may respond to the unauthorized read access by providing invalid or bogus read data. The security check unit may further respond to the unauthorized read access by logging the unauthorized read access. When the secure page (SP) bit indicates the target memory page is a secure page and the memory access signals indicate the memory access type is a write access, the memory access may be an unauthorized write access. The security check unit may respond to the unauthorized write access by discarding write data conveyed by the memory access signals. The security check unit may further respond to the unauthorized write access by logging the unauthorized write access. A computer system is described including a memory storing data arranged within multiple memory pages, a device operably coupled to the memory and configurable to produce memory access signals, and the above described host bridge. The computer system may have, for example, a central processing unit (CPU) including a memory management unit (MMU) operably coupled to the memory and configured to manage the memory. The memory management unit (MMU) may manage the memory such that the memory stores the data arranged within the multiple memory pages. A method is disclosed for providing access security for a memory used to store data arranged within multiple memory pages. The method includes receiving memory access signals, wherein the memory access signals convey a physical address within a target memory page. The physical address is used to access one or more security attribute data structures located in the memory to obtain a security attribute of the target memory page. The memory is accessed using the memory access signals dependent upon the security attribute of the target memory page.
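The decision flow just summarized can be sketched compactly. In the following Python illustration, every name is invented for the example (the document defines no software interface for the security check unit), and `page_is_secure` stands in for the security attribute lookup detailed later in connection with Fig. 9.

```python
BOGUS_READ_DATA = 0xFFFFFFFF   # assumed pattern for unauthorized reads

def security_check(access, page_is_secure, memory_controller, log_violation):
    """Model of the host bridge SCU decision flow. `access` carries the
    physical address, the access type, and (for writes) the write data."""
    if not page_is_secure(access.physical_address):
        # The target page is not a secure page: forward the memory access
        # signals to the memory controller unchanged.
        return memory_controller(access)
    # The target page is a secure page: the access is unauthorized and is
    # not forwarded to the memory controller.
    log_violation(access)          # the SCU may log the unauthorized access
    if access.is_write:
        return None                # the write data is discarded
    return BOGUS_READ_DATA         # reads receive bogus data
```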
BRIEF DESCRIPTION OF THE DRAWINGS The invention may be understood by reference to the following description taken in conjunction with the accompanying drawings, in which like reference numerals identify similar elements, and in which: Fig. 1 is a diagram of a well-known linear-to-physical address translation mechanism of the x86 processor architecture; Fig. 2 is a diagram of a page directory entry format of the x86 processor architecture; Fig. 3 is a diagram of a page table entry format of the x86 processor architecture; Fig. 4 is a diagram of one embodiment of a computer system including a CPU and a system or "host" bridge, wherein the CPU includes a CPU security check unit (SCU), and wherein the host bridge includes a host bridge SCU; Fig. 5 is a diagram illustrating relationships between various hardware and software components of the computer system of Fig. 4; Fig. 6 is a diagram of one embodiment of the CPU of the computer system of Fig. 4, wherein the CPU includes a memory management unit (MMU); Fig. 7 is a diagram of one embodiment of the MMU of Fig. 6, wherein the MMU includes a paging unit, and wherein the paging unit includes the CPU SCU; Fig. 8 is a diagram of one embodiment of the CPU SCU of Fig. 7; Fig. 9 is a diagram of one embodiment of a mechanism for accessing a security attribute table (SAT) entry of a selected memory page to obtain additional security information of the selected memory page; Fig. 10 is a diagram of one embodiment of a SAT default register; Fig. 11 is a diagram of one embodiment of a SAT directory entry format; Fig. 12 is a diagram of one embodiment of a SAT entry format; Fig. 13 is a diagram of one embodiment of the host bridge of Fig. 4, wherein the host bridge includes the host bridge SCU; Fig. 14 is a diagram of one embodiment of the host bridge SCU of Fig. 13; Fig. 15 is a flow chart of one embodiment of a first method for providing access security for a memory used to store data arranged within multiple memory pages; and Fig. 16 is a flow chart of one embodiment of a second method for providing access security for a memory used to store data arranged within multiple memory pages. While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims. MODE(S) FOR CARRYING OUT THE INVENTION Illustrative embodiments of the invention are described below. In the interest of clarity, not all features of an actual implementation are described in this specification. It will, of course, be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another.
Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure. Fig. 4 is a diagram of one embodiment of a computer system 400 including a CPU 402, a system or "host" bridge 404, a memory 406, a first device bus 408 (e.g., a peripheral component interconnect or PCI bus), a device bus bridge 410, a second device bus 412 (e.g., an industry standard architecture or ISA bus), and four device hardware units 414A-414D. The host bridge 404 is coupled to CPU 402, memory 406, and device bus 408. Host bridge 404 translates signals between CPU 402 and device bus 408, and operably couples memory 406 to CPU 402 and to device bus 408. Device bus bridge 410 is coupled between device bus 408 and device bus 412, and translates signals between device bus 408 and device bus 412. In the embodiment of Fig. 4, device hardware units 414A and 414B are coupled to device bus 408, and device hardware units 414C and 414D are coupled to device bus 412. One or more of the device hardware units 414A-414D may be, for example, storage devices (e.g., hard disk drives, floppy drives, and CD-ROM drives), communication devices (e.g., modems and network adapters), or input/output devices (e.g., video devices, audio devices, and printers). In the embodiment of Fig. 4, CPU 402 includes a CPU security check unit (SCU) 416, and host bridge 404 includes a host bridge SCU 418. As will be described in detail below, CPU SCU 416 protects memory 406 from unauthorized accesses generated by CPU 402 (i.e., "software-initiated accesses"), and host bridge SCU 418 protects memory 406 from unauthorized accesses initiated by device hardware units 414A-414D (i.e., "hardware-initiated accesses"). It is noted that in other embodiments, host bridge 404 may be part of CPU 402 as indicated in Fig. 4. Fig. 5 is a diagram illustrating relationships between various hardware and software components of computer system 400 of Fig. 4. In the embodiment of Fig. 5, multiple application programs 500, an operating system 502, a security kernel 504, and device drivers 506A-506D are stored in memory 406. Application programs 500, operating system 502, security kernel 504, and device drivers 506A-506D include instructions executed by CPU 402. Operating system 502 provides a user interface and software "platform" on top of which application programs 500 run. Operating system 502 may also provide, for example, basic support functions including file system management, process management, and input/output (I/O) control. Operating system 502 may also provide basic security functions. For example, CPU 402 (Fig. 4) may be an x86 processor which executes instructions of the x86 instruction set. In this situation, CPU 402 may include specialized hardware elements to provide both virtual memory and memory protection features in the protected mode as described above. Operating system 502 may be, for example, one of the Windows® family of operating systems (Microsoft Corp., Redmond, WA) which operates CPU 402 in the protected mode, and uses the specialized hardware elements of CPU 402 to provide both virtual memory and memory protection in the protected mode. As will be described in more detail below, security kernel 504 provides additional security functions above the security functions provided by operating system 502 to protect data stored in memory 406 from unauthorized access.
In the embodiment of Fig. 5, device drivers 506A-506D are operationally associated with, and coupled to, respective corresponding device hardware units 414A-414D. Device hardware units 414A and 414D are "secure" devices, and corresponding device drivers 506A and 506D are "secure" device drivers. Security kernel 504 is coupled between operating system 502 and secure device drivers 506A and 506D, and monitors all accesses by application programs 500 and operating system 502 to secure device drivers 506A and 506D and corresponding secure devices 414A and 414D. Security kernel 504 prevents unauthorized accesses to secure device drivers 506A and 506D and corresponding secure devices 414A and 414D by application programs 500 and operating system 502. As indicated in Fig. 5, security kernel 504 is coupled to CPU SCU 416 and host bridge SCU 418 (e.g., via one or more device drivers). As will be described in detail below, CPU SCU 416 and host bridge SCU 418 control accesses to memory 406. CPU SCU 416 monitors all software-initiated accesses to memory 406, and host bridge SCU 418 monitors all hardware-initiated accesses to memory 406. Once configured by security kernel 504, CPU SCU 416 and host bridge SCU 418 allow only authorized accesses to memory 406. In the embodiment of Fig. 5, device drivers 506B and 506C are "non-secure" device drivers, and corresponding device hardware units 414B and 414C are "non-secure" device hardware units. Device drivers 506B and 506C and corresponding device hardware units 414B and 414C may be, for example, "legacy" device drivers and device hardware units. It is noted that in other embodiments security kernel 504 may be part of operating system 502. In yet other embodiments, security kernel 504, device drivers 506A and 506D, and/or device drivers 506B and 506C may be part of operating system 502. Fig. 6 is a diagram of one embodiment of CPU 402 of computer system 400 of Fig. 4. In the embodiment of Fig. 6, CPU 402 includes an execution unit 600, a memory management unit (MMU) 602, a cache unit 604, a bus interface unit (BIU) 606, a set of control registers 608, and a set of secure execution mode (SEM) registers 610. CPU SCU 416 is located within MMU 602. As will be described in detail below, the set of SEM registers 610 are used to implement a secure execution mode (SEM) within computer system 400 of Fig. 4, and operations of CPU SCU 416 and host bridge SCU 418 are governed by the contents of the set of SEM registers 610. SEM registers 610 are accessed (i.e., written to and/or read from) by security kernel 504 (Fig. 5). Computer system 400 of Fig. 4 may, for example, operate in the SEM when: (i) CPU 402 is an x86 processor operating in the x86 protected mode, (ii) memory paging is enabled, and (iii) the contents of SEM registers 610 specify SEM operation. In general, the contents of the set of control registers 608 govern operation of CPU 402. Accordingly, the contents of the set of control registers 608 govern operation of execution unit 600, MMU 602, cache unit 604, and/or BIU 606. The set of control registers 608 may include, for example, the multiple control registers of the x86 processor architecture. Execution unit 600 of CPU 402 fetches instructions (e.g., x86 instructions) and data, executes the fetched instructions, and generates signals (e.g., address, data, and control signals) during instruction execution. Execution unit 600 is coupled to cache unit 604, and may receive instructions from memory 406 (Fig. 4) via cache unit 604 and BIU 606.
Memory 406 (Fig. 4) of computer system 400 includes multiple memory locations, each having a unique physical address. When operating in protected mode with paging enabled, an address space of CPU 402 is divided into multiple blocks called page frames or "pages." As described above, only data corresponding to a portion of the pages is stored within memory 406 at any given time. In the embodiment of Fig. 6, address signals generated by execution unit 600 during instruction execution represent segmented (i.e., "logical") addresses. As described below, MMU 602 translates the segmented addresses generated by execution unit 600 to corresponding physical addresses of memory 406. MMU 602 provides the physical addresses to cache unit 604. Cache unit 604 is a relatively small storage unit used to store instructions and data recently fetched by execution unit 600. BIU 606 is coupled between cache unit 604 and host bridge 404, and is used to fetch instructions and data not present in cache unit 604 from memory 406 via host bridge 404. Fig. 7 is a diagram of one embodiment of MMU 602 of Fig. 6. In the embodiment of Fig. 7, MMU 602 includes a segmentation unit 700, a paging unit 702, and selection logic 704 for selecting between outputs of segmentation unit 700 and paging unit 702 to produce a physical address. As indicated in Fig. 7, segmentation unit 700 receives a segmented address from execution unit 600 and uses a well-known segmented-to-linear address translation mechanism of the x86 processor architecture to produce a corresponding linear address at an output. As indicated in Fig. 7, when enabled by a "PAGING" signal, paging unit 702 receives the linear addresses produced by segmentation unit 700 and produces corresponding physical addresses at an output. The PAGING signal may mirror the paging flag (PG) bit in a control register 0 (CR0) of the x86 processor architecture and of the set of control registers 608 (Fig. 6). When the PAGING signal is deasserted, memory paging is not enabled, and selection logic 704 produces the linear address received from segmentation unit 700 as the physical address. When the PAGING signal is asserted, memory paging is enabled, and paging unit 702 translates the linear address received from segmentation unit 700 to a corresponding physical address using the above described linear-to-physical address translation mechanism 100 of the x86 processor architecture (Fig. 1). As described above, during the linear-to-physical address translation operation, the contents of the U/S bits of the selected page directory entry and the selected page table entry are logically ANDed to determine if the access to a page frame is authorized. Similarly, the contents of the R/W bits of the selected page directory entry and the selected page table entry are logically ANDed to determine if the access to the page frame is authorized. If the logical combinations of the U/S and R/W bits indicate the access to the page frame is authorized, paging unit 702 produces the physical address resulting from the linear-to-physical address translation operation. Selection logic 704 receives the physical address produced by paging unit 702, produces it as the physical address output, and provides the physical address to cache unit 604.
On the other hand, if the logical combinations of the U/S and R/W bits indicate the access to the page frame 108 is not authorized, paging unit 702 does not produce a physical address during the linear-to-physical address translation operation. Instead, paging unit 702 asserts a page fault signal, and MMU 602 forwards the page fault signal to execution unit 600. In the x86 processor architecture, a page fault signal may, in some cases, indicate a protection violation. In response to the page fault signal, execution unit 600 may execute an exception handler routine, and may ultimately halt the execution of one of the application programs 500 (Fig. 5) running when the page fault signal was asserted. In the embodiment of Fig. 7, CPU SCU 416 is located within paging unit 702 of MMU 602. Paging unit 702 may also include a translation lookaside buffer (TLB) for storing a relatively small number of recently determined linear-to-physical address translations. Fig. 8 is a diagram of one embodiment of CPU SCU 416 of Fig. 7. In the embodiment of Fig. 8, CPU SCU 416 includes security check logic 800 coupled to the set of SEM registers 610 (Fig. 6) and a security attribute table (SAT) entry buffer 802. As described below, SAT entries include additional security information above the U/S and R/W bits of page directory and page table entries corresponding to memory pages. Security check logic 800 uses the additional security information stored within a given SAT entry to prevent unauthorized software-initiated accesses to the corresponding memory page. SAT entry buffer 802 is used to store a relatively small number of SAT entries of recently accessed memory pages. As described above, the set of SEM registers 610 are used to implement a secure execution mode (SEM) within computer system 400 of Fig. 4. The contents of the set of SEM registers 610 govern the operation of CPU SCU 416. Security check logic 800 receives information to be stored in SAT entry buffer 802 from MMU 602 via a communication bus indicated in Fig. 8. The security check logic 800 also receives a physical address produced by paging unit 702. Figs. 9-11 will now be used to describe how additional security information of memory pages selected using address translation mechanism 100 of Fig. 1 is obtained within computer system 400 of Fig. 4. Fig. 9 is a diagram of one embodiment of a mechanism 900 for accessing a SAT entry of a selected memory page to obtain additional security information of the selected memory page. Mechanism 900 of Fig. 9 may be embodied within security check logic 800 of Fig. 8, and may be implemented when computer system 400 of Fig. 4 is operating in the SEM. Mechanism 900 involves a physical address 902 produced by paging unit 702 (Fig. 7) using address translation mechanism 100 of Fig. 1, a SAT directory 904, multiple SATs including a SAT 906, and a SAT base address register 908 of the set of SEM registers 610. SAT directory 904 and the multiple SATs, including SAT 906, are SEM data structures created and maintained by security kernel 504 (Fig. 5). As described below, SAT directory 904 (when present) and any needed SAT is copied into memory 406 before being accessed. SAT base address register 908 includes a present (P) bit which indicates the presence of a valid SAT directory base address within SAT base address register 908. The highest ordered (i.e., most significant) bits of SAT base address register 908 are reserved for the SAT directory base address.
The SAT directory base address is a base address of a memory page containing SAT directory 904. If P=1, the SAT directory base address is valid, and SAT tables specify the security attributes of memory pages. If P=0, the SAT directory base address is not valid, no SAT tables exist, and security attributes of memory pages are determined by a SAT default register. Fig. 10 is a diagram of one embodiment of the SAT default register 1000. In the embodiment of Fig. 10, SAT default register 1000 includes a secure page (SP) bit. The SP bit indicates whether or not all memory pages are secure pages. For example, if SP=0 all memory pages may not be secure pages, and if SP=1 all memory pages may be secure pages. Referring back to Fig. 9 and assuming the P bit of SAT base address register 908 is a '1', physical address 902 produced by paging unit 702 (Fig. 7) is divided into three portions to access the SAT entry of the selected memory page. As described above, the SAT directory base address of SAT base address register 908 is the base address of the memory page containing SAT directory 904. SAT directory 904 includes multiple SAT directory entries, including a SAT directory entry 910. Each SAT directory entry may have a corresponding SAT in memory 406. An "upper" portion of physical address 902, including the highest ordered or most significant bits of physical address 902, is used as an index into SAT directory 904. SAT directory entry 910 is selected from within SAT directory 904 using the SAT directory base address of SAT base address register 908 and the upper portion of physical address 902. Fig. 11 is a diagram of one embodiment of a SAT directory entry format 1100. In accordance with Fig. 11, each SAT directory entry includes a present (P) bit which indicates the presence of a valid SAT base address within the SAT directory entry. In the embodiment of Fig. 11, the highest ordered (i.e., the most significant) bits of each SAT directory entry are reserved for a SAT base address. The SAT base address is a base address of a memory page containing a corresponding SAT. If P=1, the SAT base address is valid, and the corresponding SAT is stored in memory 406. If P=0, the SAT base address is not valid, and the corresponding SAT does not exist in memory 406 and must be copied into memory 406 from a storage device (e.g., a disk drive). If P=0, security check logic 800 may signal a page fault to logic within paging unit 702, and MMU 602 may forward the page fault signal to execution unit 600 (Fig. 6). In response to the page fault signal, execution unit 600 may execute a page fault handler routine which retrieves the needed SAT from the storage device and stores the needed SAT in memory 406. After the needed SAT is stored in memory 406, the P bit of the corresponding SAT directory entry is set to '1', and mechanism 900 is continued. Referring back to Fig. 9, a "middle" portion of physical address 902 is used as an index into SAT 906. A SAT entry is thus selected within SAT 906 using the SAT base address of SAT directory entry 910 and the middle portion of physical address 902. Fig. 12 is a diagram of one embodiment of a SAT entry format 1200. In the embodiment of Fig. 12, each SAT entry includes a secure page (SP) bit. The SP bit indicates whether or not the selected memory page is a secure page. For example, if SP=0 the selected memory page may not be a secure page, and if SP=1 the selected memory page may be a secure page.
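Mechanism 900 is essentially a two-level table walk keyed by portions of the physical address, analogous to the page directory/page table walk. The sketch below models that walk; the portion widths (a 10/10/12-bit split of a 32-bit address) and helper names are illustrative assumptions, since Figs. 9-12 do not fix the field sizes used here.

```python
# Sketch of SAT lookup mechanism 900: a two-level walk keyed by portions of
# the physical address. The 10/10/12-bit address split and helper names are
# illustrative assumptions.

SAT_DEFAULT_SP = 0  # SP bit of the SAT default register (Fig. 10)

class SATNotPresent(Exception):
    """A needed SAT is not resident (P=0); handled via the page fault path."""

def lookup_sp_bit(phys_addr, sat_base_reg, read_mem_entry):
    """Return the SP bit of the SAT entry covering phys_addr.

    sat_base_reg: dict with 'p' (present bit) and 'dir_base' fields.
    read_mem_entry(base, index): reads one table entry from memory 406.
    """
    if not sat_base_reg["p"]:
        # No SAT tables exist; the SAT default register decides for all pages.
        return SAT_DEFAULT_SP

    upper = (phys_addr >> 22) & 0x3FF    # "upper" portion -> SAT directory index
    middle = (phys_addr >> 12) & 0x3FF   # "middle" portion -> SAT index
    # (the low 12 bits are the page offset and are not used by the walk)

    dir_entry = read_mem_entry(sat_base_reg["dir_base"], upper)
    if not dir_entry["p"]:
        # The needed SAT must first be copied into memory 406 (fault path),
        # after which the walk is restarted.
        raise SATNotPresent(hex(phys_addr))

    sat_entry = read_mem_entry(dir_entry["sat_base"], middle)
    return sat_entry["sp"]
```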
BIU 606 (Fig. 6) retrieves needed SEM data structure entries from memory 406, and provides the SEM data structure entries to MMU 602. Referring back to Fig. 8, security check logic 800 receives SEM data structure entries from MMU 602 and paging unit 702 via the communication bus. As described above, SAT entry buffer 802 is used to store a relatively small number of SAT entries of recently accessed memory pages. Security check logic 800 stores a given SAT entry in SAT entry buffer 802, along with a "tag" portion of the corresponding physical address. During a subsequent memory page access, security check logic 800 may compare a "tag" portion of a physical address produced by paging unit 702 to tag portions of physical addresses corresponding to SAT entries stored in SAT entry buffer 802. If the tag portion of the physical address matches a tag portion of a physical address corresponding to a SAT entry stored in SAT entry buffer 802, security check logic 800 may access the SAT entry in SAT entry buffer 802, eliminating the need to perform the process of Fig. 9 to obtain the SAT entry from memory 406. Security kernel 504 (Fig. 5) modifies the contents of SAT base address register 908 in CPU 402 (e.g., during context switches). In response to modifications of SAT base address register 908, security check logic 800 of CPU SCU 416 may flush SAT entry buffer 802. When computer system 400 of Fig. 4 is operating in the SEM, security check logic 800 receives the current privilege level (CPL) of the currently executing task (i.e., the currently executing instruction), along with the page directory entry (PDE) U/S bit, the PDE R/W bit, the page table entry (PTE) U/S bit, and the PTE R/W bit of a selected memory page within which a physical address resides. Security check logic 800 uses the above information, along with the SP bit of the SAT entry corresponding to the selected memory page, to determine if memory 406 access is authorized. CPU 402 of Fig. 6 may be an x86 processor, and may include a code segment (CS) register, one of the 16-bit segment registers of the x86 processor architecture. Each segment register selects a 64k block of memory, called a segment. In the protected mode with paging enabled, the CS register is loaded with a segment selector that indicates an executable segment of memory 406. The highest ordered (i.e., most significant) bits of the segment selector are used to store information indicating a segment of memory including a next instruction to be executed by execution unit 600 of CPU 402 (Fig. 6). An instruction pointer (IP) register is used to store an offset into the segment indicated by the CS register. The CS:IP pair indicates a segmented address of the next instruction. The two lowest ordered (i.e., least significant) bits of the CS register are used to store a value indicating a current privilege level (CPL) of a task currently being executed by execution unit 600 (i.e., the CPL of the current task). Table 1 below illustrates exemplary rules for CPU-initiated (i.e., software-initiated) memory accesses when computer system 400 of Fig. 4 is operating in the SEM. CPU SCU 416 (Figs. 4-8) and security kernel 504 (Fig. 5) work together to implement the rules of Table 1 when computer system 400 of Fig. 4 is operating in the SEM to provide additional security for data stored in memory 406 above data security provided by operating system 502 (Fig. 5).
Table 1. Exemplary Rules For Software-Initiated Memory Accesses When Computer System 400 Of Fig. 4 Is Operating In The SEM. (Instruction SP and CPL describe the currently executing instruction; Page SP, U/S, and R/W describe the selected memory page.)

  Instruction SP | CPL | Page SP | U/S   | R/W     | Permitted Access | Remarks
  1              | 0   | X       | X     | 1 (R/W) | R/W              | Full access granted. (Typical accessed page contents: security kernel and SEM data structures.)
  1              | 0   | X       | X     | 0 (R)   | Read Only        | Write attempt causes page fault; if selected memory page is a secure page (SP=1), a SEM Security Exception is signaled instead of page fault.
  1              | 3   | 1       | 1 (U) | 1       | R/W              | Standard protection mechanisms apply. (Typical accessed page contents: high security applets.)
  1              | 3   | 1       | 0 (S) | X       | None             | Access causes page fault. (Typical accessed page contents: security kernel and SEM data structures.)
  1              | 3   | 0       | 0     | 1       | None             | Access causes page fault. (Typical accessed page contents: OS kernel and Ring 0 device drivers.)
  0              | 0   | 1       | X     | X       | None             | Access causes SEM security exception.
  0              | 0   | 0       | 1     | 1       | R/W              | Standard protection mechanisms apply. (Typical accessed page contents: high security applets.)
  0              | 3   | X       | 0     | X       | None             | Access causes page fault; if selected memory page is a secure page (SP=1), a SEM Security Exception is raised instead of page fault.
  0              | 3   | 0       | 1     | 1       | R/W              | Standard protection mechanisms apply. (Typical accessed page contents: applications.)

In Table 1 above, the SP bit of the currently executing instruction is the SP bit of the SAT entry corresponding to the memory page containing the currently executing instruction. The U/S bit of the selected memory page is the logical AND of the PDE U/S bit and the PTE U/S bit of the selected memory page. The R/W bit of the selected memory page is the logical AND of the PDE R/W bit and the PTE R/W bit of the selected memory page. The symbol "X" signifies a "don't care": the logical value may be either a '0' or a '1'. Referring back to Fig. 8, security check logic 800 of CPU SCU 416 produces a page fault signal and a "SEM SECURITY EXCEPTION" signal, and provides the page fault and the SEM SECURITY EXCEPTION signals to logic within paging unit 702. When security check logic 800 asserts the page fault signal, MMU 602 forwards the page fault signal to execution unit 600 (Fig. 6). In response to the page fault signal, execution unit 600 may use the well-known interrupt descriptor table (IDT) vectoring mechanism of the x86 processor architecture to access and execute a page fault handler routine. When security check logic 800 asserts the SEM SECURITY EXCEPTION signal, MMU 602 forwards the SEM SECURITY EXCEPTION signal to execution unit 600. Unlike normal processor exceptions which use the IDT vectoring mechanism of the x86 processor architecture, a different vectoring method may be used to handle SEM security exceptions. SEM security exceptions may be dispatched through a pair of registers (e.g., model specific registers or MSRs) similar to the way x86 "SYSENTER" and "SYSEXIT" instructions operate. The pair of registers may be "security exception entry point" registers, and may define a branch target address for instruction execution when a SEM security exception occurs. The security exception entry point registers may define the code segment (CS), the instruction pointer (IP, or the 64-bit version RIP), stack segment (SS), and the stack pointer (SP, or the 64-bit version RSP) values to be used on entry to a SEM security exception handler. Under software control, execution unit 600 (Fig. 6) may push the previous SS, SP/RSP, EFLAGS, CS, and IP/RIP values onto a new stack to indicate where the exception occurred. In addition, execution unit 600 may push an error code onto the stack.
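The nine rows of Table 1 reduce to a small decision function. The following sketch encodes them; the return values, and the deny-by-default handling of combinations not listed in Table 1, are illustrative assumptions.

```python
# Sketch of the Table 1 decision logic for software-initiated accesses in SEM.
# Inputs mirror the table columns; us and rw are already the logical AND of
# the PDE and PTE bits. Return values and the treatment of combinations not
# listed in Table 1 (deny by default) are illustrative assumptions.

RW, READ_ONLY, NONE = "R/W", "ReadOnly", "None"
PAGE_FAULT, SEM_EXCEPTION = "page fault", "SEM security exception"

def table1_access(instr_sp, cpl, page_sp, us, rw):
    """Return (permitted_access, fault_raised_on_violation) per Table 1."""
    if instr_sp == 1 and cpl == 0:
        if rw == 1:
            return RW, None                     # full access granted
        # write attempt faults; secure pages escalate to a SEM exception
        return READ_ONLY, SEM_EXCEPTION if page_sp == 1 else PAGE_FAULT
    if instr_sp == 1 and cpl == 3:
        if page_sp == 1 and us == 1 and rw == 1:
            return RW, None                     # standard protections apply
        return NONE, PAGE_FAULT
    if instr_sp == 0 and cpl == 0:
        if page_sp == 1:
            return NONE, SEM_EXCEPTION          # non-SEM ring 0 touching a secure page
        if us == 1 and rw == 1:
            return RW, None
    if instr_sp == 0 and cpl == 3:
        if us == 0:
            return NONE, SEM_EXCEPTION if page_sp == 1 else PAGE_FAULT
        if page_sp == 0 and rw == 1:
            return RW, None
    return NONE, PAGE_FAULT                     # deny anything not listed
```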
It is noted that a normal return from interrupt (IRET) instruction may not be used because the previous SS and SP/RSP values are always saved, and a stack switch is always accomplished, even if a change in CPL does not occur. Accordingly, a new instruction may be defined to accomplish a return from the SEM security exception handler. Fig. 13 is a diagram of one embodiment of host bridge 404 of Fig. 4. In the embodiment of Fig. 13, host bridge 404 includes a host interface 1300, bridge logic 1302, host bridge SCU 418, a memory controller 1304, and a device bus interface 1306. Host interface 1300 is coupled to CPU 402, and device bus interface 1306 is coupled to device bus 408. Bridge logic 1302 is coupled between host interface 1300 and device bus interface 1306. Memory controller 1304 is coupled to memory 406, and performs all accesses to memory 406. Host bridge SCU 418 is coupled between bridge logic 1302 and memory controller 1304. As described above, host bridge SCU 418 controls access to memory 406 via device bus interface 1306. Host bridge SCU 418 monitors all accesses to memory 406 via device bus interface 1306, and allows only authorized accesses to memory 406. Fig. 14 is a diagram of one embodiment of host bridge SCU 418 of Fig. 13. In the embodiment of Fig. 14, host bridge SCU 418 includes security check logic 1400 coupled to a set of SEM registers 1402 and a SAT entry buffer 1404. The set of SEM registers 1402 governs the operation of security check logic 1400, and includes a second SAT base address register 908 of Fig. 9. The second SAT base address register 908 of the set of SEM registers 1402 may be an addressable register. When security kernel 504 (Fig. 5) modifies the contents of SAT base address register 908 in the set of SEM registers 610 of CPU 402 (e.g., during a context switch), security kernel 504 may also write the same value to the second SAT base address register 908 in the set of SEM registers 1402 of host bridge SCU 418. In response to modifications of the second SAT base address register 908, security check logic 1400 of host bridge SCU 418 may flush SAT entry buffer 1404. Security check logic 1400 receives memory access signals of memory accesses initiated by device hardware units 414A-414D (Fig. 4) via device bus interface 1306 and bridge logic 1302 (Fig. 13). The memory access signals convey physical addresses from device hardware units 414A-414D, and associated control and/or data signals. Security check logic 1400 may embody mechanism 900 (Fig. 9) for obtaining SAT entries of corresponding memory pages, and may implement mechanism 900 when computer system 400 of Fig. 4 is operating in the SEM. SAT entry buffer 1404 is similar to SAT entry buffer 802 of CPU SCU 416 (Fig. 8) described above, and is used to store a relatively small number of SAT entries of recently accessed memory pages. When computer system 400 of Fig. 4 is operating in the SEM, security check logic 1400 of Fig. 14 uses additional security information of a SAT entry associated with a selected memory page to determine if a given hardware-initiated memory access is authorized. If the given hardware-initiated memory access is authorized, security check logic 1400 provides the memory access signals (i.e., address signals conveying a physical address and the associated control and/or data signals) of the memory access to memory controller 1304. Memory controller 1304 uses the physical address and the associated control and/or data signals to access memory 406.
If memory 406 access is a write access, data conveyed by the data signals is written to memory 406. If memory 406 access is a read access, memory controller 1304 reads data from memory 406, and provides the resulting read data to security check logic 1400. Security check logic 1400 forwards the read data to bridge logic 1302, and bridge logic 1302 provides the data to device bus interface 1306. If, on the other hand, the given hardware-initiated memory access is not authorized, security check logic 1400 does not provide the physical address and the associated control and/or data signals of memory 406 accesses to memory controller 1304. If the unauthorized hardware-initiated memory access is a memory write access, security check logic 1400 may signal completion of the write access and discard the write data, leaving memory 406 unchanged. Security check logic 1400 may also create a log entry in a log (e.g., set or clear one or more bits of a status register) to document the security access violation. Security kernel 504 may periodically access the log to check for such log entries. If the unauthorized hardware-initiated memory access is a memory read access, security check logic 1400 may return a false result (e.g., all "F"s) to device bus interface 1306 via bridge logic 1302 as the read data. Security check logic 1400 may also create a log entry as described above to document the security access violation. Fig. 15 is a flow chart of one embodiment of a method 1500 for providing access security for a memory used to store data arranged within multiple memory pages. Method 1500 reflects the exemplary rules of Table 1 for CPU-initiated (i.e., software-initiated) memory accesses when computer system 400 of Fig. 4 is operating in the SEM. Method 1500 may be embodied within MMU 602 (Figs. 6-7). During a step 1502 of method 1500, a linear address produced during execution of an instruction is received, along with a security attribute of the instruction (e.g., a CPL of a task including the instruction). The instruction resides in a first memory page. During a step 1504, the linear address is used to access at least one paged memory data structure located in the memory (e.g., a page directory and a page table) to obtain a base address of a selected memory page and security attributes of the selected memory page. The security attributes of the selected memory page may include, for example, a U/S bit and a R/W bit of a page directory entry and a U/S bit and a R/W bit of a page table entry. During a decision step 1506, the security attribute of the instruction and the security attributes of the selected memory page are used to determine whether or not the access is authorized. If the access is authorized, the base address of the selected memory page and an offset are combined during a step 1508 to produce a physical address within the selected memory page. If the access is not authorized, a fault signal (e.g., a page fault signal) is generated during a step 1510. During a step 1512 following step 1508, at least one security attribute data structure located in the memory (e.g., SAT directory 904 of Fig. 9 and a SAT) is accessed using the physical address of the selected memory page to obtain an additional security attribute of the first memory page and an additional security attribute of the selected memory page.
The additional security attribute of the first memory page may include, for example, a secure page (SP) bit as described above, wherein the SP bit indicates whether or not the first memory page is a secure page. Similarly, the additional security attribute of the selected memory page may include a secure page (SP) bit, wherein the SP bit indicates whether or not the selected memory page is a secure page. The fault signal is generated during a step 1514 dependent upon the security attribute of the instruction, the additional security attribute of the first memory page, the security attributes of the selected memory page, and the additional security attribute of the selected memory page. It is noted that steps 1512 and 1514 of method 1500 may be embodied within CPU SCU 416 (Figs. 4-8). Table 2 below illustrates exemplary rules for memory page accesses initiated by device hardware units 414A-414D (i.e., hardware-initiated memory accesses) when computer system 400 of Fig. 4 is operating in the SEM. Such hardware-initiated memory accesses may be initiated by bus mastering circuitry within device hardware units 414A-414D, or by DMA devices at the request of device hardware units 414A-414D. Security check logic 1400 may implement the rules of Table 2 when computer system 400 of Fig. 4 is operating in the SEM to provide additional security for data stored in memory 406 above data security provided by operating system 502 (Fig. 5). In Table 2 below, the "target" memory page is the memory page within which a physical address conveyed by memory access signals of a memory access resides.

Table 2. Exemplary Rules For Hardware-Initiated Memory Accesses When Computer System 400 Of Fig. 4 Is Operating In The SEM.

  Target Page SP | Access Type | Action
  0              | R/W         | The access completes as normal.
  1              | Read        | The access is completed returning all "F"s instead of actual memory contents. The unauthorized access may be logged.
  1              | Write       | The access is completed but write data is discarded. Memory contents remain unchanged. The unauthorized access may be logged.

In Table 2 above, the SP bit of the target memory page is obtained by host bridge SCU 418 using the physical address of the memory access and the above-described mechanism 900 of Fig. 9 for obtaining SAT entries of corresponding memory pages. As indicated in Table 2, when SP=1, indicating the target memory page is a secure page, the memory access is unauthorized. In this situation, security check logic 1400 (Fig. 14) does not provide the memory access signals to the memory controller. A portion of the memory access signals (e.g., the control signals) indicates a memory access type, which is either a read access or a write access. When SP=1 and the memory access signals indicate the memory access type is a read access, the memory access is an unauthorized read access, and security check logic 1400 responds to the unauthorized read access by providing all "F"s instead of actual memory contents (i.e., bogus read data). Security check logic 1400 may also respond to the unauthorized read access by logging the unauthorized read access as described above. When SP=1 and the memory access signals indicate the memory access type is a write access, the memory access is an unauthorized write access. In this situation, security check logic 1400 responds to the unauthorized write access by discarding write data conveyed by the memory access signals. Security check logic 1400 may also respond to the unauthorized write access by logging the unauthorized write access as described above.
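The Table 2 handling can likewise be sketched as a simple check over the SP bit and the access type. The bus width used for the all-"F"s response and the logging hook below are illustrative assumptions.

```python
# Sketch of Table 2 handling for hardware-initiated accesses in SEM.
# The 64-bit bus width and the log() hook are illustrative assumptions.

ALL_FS = 0xFFFF_FFFF_FFFF_FFFF  # bogus read data: all "F"s

def hardware_access(sp_bit, is_write, data, read_mem, write_mem, log):
    """Apply Table 2: pass through non-secure accesses, neuter secure ones."""
    if sp_bit == 0:
        # The access completes as normal.
        return write_mem(data) if is_write else read_mem()
    if is_write:
        # Unauthorized write: signal completion but discard the data,
        # leaving memory unchanged; the violation may be logged.
        log("unauthorized hardware write")
        return None
    # Unauthorized read: return all "F"s instead of actual memory contents.
    log("unauthorized hardware read")
    return ALL_FS
```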
Fig. 16 is a flow chart of one embodiment of a method 1600 for providing access security for a memory used to store data arranged within multiple memory pages. Method 1600 reflects the exemplary rules of Table 2 for hardware-initiated memory accesses when computer system 400 of Fig. 4 is operating in the SEM. Method 1600 may be embodied within host bridge 404 (Figs. 4 and 13-14). During a step 1602 of method 1600, memory access signals of a memory access are received, wherein the memory access signals convey a physical address within a target memory page. As described above, the memory access signals may be produced by a device hardware unit. The physical address is used to access at least one security attribute data structure located in the memory to obtain a security attribute of the target memory page during a step 1604. The at least one security attribute data structure may include, for example, a SAT directory (e.g., SAT directory 904 in Fig. 9) and at least one SAT (e.g., SAT 906 in Fig. 9), and the security attribute of the target memory page may include a secure page (SP) bit as described above which indicates whether or not the target memory page is a secure page. During a step 1606, the memory is accessed using the memory access signals dependent upon the security attribute of the target memory page. It is noted that steps 1604 and 1606 of method 1600 may be embodied within host bridge SCU 418 (Figs. 4 and 13-14). The particular embodiments disclosed above are illustrative only, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope and spirit of the invention. Accordingly, the protection sought herein is as set forth in the claims below. |
The present disclosure relates to a method, system, and apparatus for configuring a computing system, such as a cloud computing system. A method includes, based on user selections received via a user interface, configuring a cluster of nodes by selecting the cluster of nodes from a plurality of available nodes, selecting a workload container module from a plurality of available workload container modules for operation on each node of the selected cluster of nodes, and selecting a workload for execution with the workload container on the cluster of nodes. Each node of the cluster of nodes includes at least one processing device and memory, and the cluster of nodes is operative to share processing of a workload. |
WHAT IS CLAIMED IS: 1. A computing configuration method carried out by one or more computing devices, the method comprising: configuring, based on a plurality of user selections received via a user interface, a cluster of nodes for a computing system such that processing of a workload is distributed across the cluster of nodes, the configuring comprising: selecting the cluster of nodes for the computing system from a plurality of available nodes, selecting a workload container module for operation on each node of the cluster of nodes, the workload container module comprising a selectable code module that when executed by each node is operative to coordinate execution of a workload, and selecting a workload for execution with the workload container module on the cluster of nodes. 2. The method of claim 1, further comprising modifying at least one operational parameter of the workload container module based on user input received via the user interface, the at least one operational parameter being associated with at least one of a read/write operation, a file system operation, a network socket operation, and a sorting operation. 3. The method of claim 1, wherein the workload container module is selected from a plurality of available workload container modules. 4. The method of claim 1, wherein the selected workload container module is a custom workload container module stored on memory remote from the one or more computing devices and including a plurality of user-defined instructions operative to coordinate execution of the workload, the configuring further comprising loading the custom workload container module onto each node of the cluster of nodes. 5. The method of claim 1, further comprising providing the user interface, the user interface comprising selectable node data, selectable workload container data, and selectable workload data, wherein the selecting the cluster of nodes is based on a user selection of the selectable node data, the selecting the workload container module is based on a user selection of the selectable workload container data, and the selecting the workload is based on a user selection of the selectable workload data. 6. The method of claim 1, wherein the cluster of nodes are connected to a communication network, and wherein the method further comprises adjusting, based on a user input received via the user interface, at least one communication network parameter to limit the performance of the communication network during execution of the selected workload. 7. The method of claim 1, wherein the configuring comprises selecting at least one type of data from a plurality of types of data to be collected from each node of the cluster of nodes based on a user selection received via the user interface, and wherein the method further comprises collecting the at least one type of data from each node of the cluster of nodes. 8. 
A computing configuration system comprising: a node configurator operative to select a cluster of nodes from a plurality of available nodes for a computing system, the cluster of nodes being operative to share processing of a workload; a workload container configurator operative to select a workload container module for operation on each node of the cluster of nodes, the workload container module comprising a selectable code module that when executed by each node is operative to coordinate execution of the workload; and a workload configurator operative to select a workload for execution with the selected workload container module on the cluster of nodes. 9. The system of claim 8, wherein the workload container configurator is further operative to modify at least one operational parameter of the workload container module based on user input received via a user interface, the at least one operational parameter being associated with at least one of a read/write operation, a file system operation, a network socket operation, and a sorting operation. 10. The system of claim 8, wherein the workload container configurator selects the workload container module from a plurality of available workload container modules. 11. The system of claim 8, wherein the selected workload container module is a custom workload container module stored on memory remote from the computing configuration system and comprising a configuration file having a plurality of user-defined instructions for coordinating execution of the workload, the workload container configurator further being operative to load the custom workload container module stored on the remote memory onto each node of the cluster of nodes. 12. The system of claim 8, wherein the cluster of nodes are interconnected via a communication network, and wherein the system further comprises a network configurator operative to adjust at least one communication network parameter of the computing system to modify the performance of the communication network during the execution of the selected workload. 13. The system of claim 8, wherein the workload configurator selects the workload from at least one of an actual workload and a synthetic test workload, the actual workload being stored in memory accessible by the computing configuration system, and the synthetic test workload being generated by the computing configuration system based on user-defined workload parameters. 14. The system of claim 8, further comprising a data aggregator operative to collect performance data associated with the execution of the workload from each node of the cluster of nodes and to generate a statistical graph representing the performance data collected from each node. 15. The system of claim 8, further comprising at least one processor; and memory containing executable instructions that when executed by the at least one processor cause the at least one processor to provide a graphical user interface on a display, the graphical user interface comprising data representing the node configurator, the workload container configurator, and the workload configurator. 17. 
A method of configuring a computing system carried out by one or more computing devices, the method comprising: selecting, based on a user selection received via a user interface, a workload container module from a plurality of available workload container modules for operation on each node of a cluster of nodes of the computing system, the selected workload container module comprising a selectable code module that when executed by each node is operative to coordinate execution of a workload on the cluster of nodes; and configuring each node of the cluster of nodes with the selected workload container module for executing the workload such that processing of the workload is distributed across the cluster of nodes. 18. The method of claim 17, wherein the one or more computing devices comprise a control server of the computing system, the plurality of available workload container modules includes a custom workload container module stored on a memory remote from the computing system, and the custom workload container module includes a plurality of user-defined instructions for coordinating execution of the workload. 19. The method of claim 17, further comprising providing the user interface, the user interface comprising selectable workload container data, wherein the selecting the workload container module is based on a user selection of the selectable workload container data. 20. The method of claim 17, further comprising modifying an operational parameter of the selected workload container module for each node based on user input received via the user interface, the operational parameter being associated with at least one of a read/write operation, a file system operation, a network socket operation, and a sorting operation. 21. The method of claim 17, wherein the configuring comprises installing the selected workload container module onto each node of the cluster of nodes and initiating the execution of the workload with the selected workload container module on the cluster of nodes such that each node of the cluster of nodes processes at least one processing thread of the workload. 22. A computing configuration system comprising: a workload container configurator operative to receive user input and to select a workload container module based on the user input from a plurality of available workload container modules, the selected workload container module comprising a selectable code module that when executed by a cluster of nodes of a computing system is operative to coordinate execution of a workload; and a node configurator operative to configure each node of the cluster of nodes of the computing system with the selected workload container module for executing the workload such that processing of the workload is distributed across the cluster of nodes. 23. The system of claim 22, wherein the plurality of available workload container modules comprises a custom workload container module stored on memory remote from the computing configuration system, and the custom workload container module comprises a configuration file including a plurality of user-defined instructions operative to coordinate execution of the workload. 24. The system of claim 22, further comprising at least one processor; and memory containing executable instructions that when executed by the at least one processor cause the at least one processor to provide a graphical user interface on a display, the graphical user interface comprising data representing the workload container configurator and the node configurator. 
25. The system of claim 22, wherein the workload container configurator is further operative to modify an operational parameter of the selected workload container module based on user input received via a user interface, the operational parameter being associated with at least one of a read/write operation, a file system operation, a network socket operation, and a sorting operation. 26. The system of claim 22, wherein the node configurator is further operative to install the selected workload container module onto each node of the cluster of nodes and to initiate the execution of the workload with the selected workload container module on the cluster of nodes such that each node of the cluster of nodes processes at least one processing thread of the workload. 27. A method of configuring a computing system carried out by one or more computing devices, the method comprising: selecting a cluster of nodes from a plurality of available nodes for the computing system that are operative to share processing of a workload; and modifying an operational parameter of a same workload container module of each node of the cluster of nodes based on user input received via a user interface, the workload container module comprising a code module that when executed by each node of the cluster of nodes is operative to coordinate execution of the workload with the cluster of nodes based on the operational parameter, the operational parameter being associated with at least one of a read/write operation, a file system operation, a network socket operation, and a sorting operation. 28. The method of claim 27, further comprising providing the user interface, the user interface comprising selectable workload container data, wherein the modifying the operational parameter of the workload container module is based on a user selection of the selectable workload container data. 29. The method of claim 27, wherein the operational parameter associated with the read/write operation comprises at least one of a memory buffer size for the read/write operation and a size of a data block transferred during the read/write operation. 30. The method of claim 27, wherein the operational parameter associated with the file system operation comprises at least one of a number of file system records stored in memory of each node and a number of processing threads of each node allocated for processing requests for the file system. 31. The method of claim 27, wherein the operational parameter associated with the sorting operation comprises a number of data streams to merge when performing the sorting operation. 32. The method of claim 27, wherein the cluster of nodes are connected to a communication network, and wherein the method further comprises adjusting, based on a user input received via the user interface, at least one communication network parameter to limit the performance of the communication network during execution of the workload. 33. 
A computing configuration system comprising: a node configurator operative to select a cluster of nodes from a plurality of available nodes for a computing system, the cluster of nodes being operative to share processing of a workload; and a workload container configurator operative to modify an operational parameter of a same workload container module of each node of the cluster of nodes based on user input received via a user interface, the workload container module comprising a code module that when executed by each node of the cluster of nodes is operative to coordinate execution of the workload with the cluster of nodes based on the operational parameter, the operational parameter being associated with at least one of a read/write operation, a file system operation, a network socket operation, and a sorting operation. 34. The system of claim 33, wherein the operational parameter associated with the read/write operation comprises at least one of a memory buffer size for the read/write operation and a size of a data block transferred during the read/write operation. 35. The system of claim 33, wherein the operational parameter associated with the file system operation comprises at least one of a number of file system records stored in memory of each node and a number of processing threads of each node allocated for processing requests for the file system. 36. The system of claim 33, wherein the operational parameter associated with the sorting operation comprises a number of data streams to merge when performing the sorting operation. 37. The system of claim 33, wherein the cluster of nodes are connected to a communication network, and wherein the system further comprises a network configurator operative to adjust, based on a user input received via the user interface, at least one communication network parameter to limit the performance of the communication network during execution of the workload. 38. The system of claim 33, wherein the user interface comprises a graphical user interface, the system further comprising at least one processor and memory containing executable instructions that when executed by the at least one processor cause the at least one processor to provide the graphical user interface on a display, the graphical user interface comprising data representing the node configurator and the workload container configurator. |
SYSTEM AND METHOD FOR CONFIGURING CLOUD COMPUTING SYSTEMS FIELD OF THE DISCLOSURE [0001] The present disclosure is generally related to the field of computing systems, and more particularly to methods and systems for configuring a cloud computing system and for analyzing the performance of the cloud computing system. BACKGROUND [0002] Cloud computing involves the delivery of hosted services over a network, such as the Internet, for example. Cloud computing systems provide for the delivery of computing capacity and storage capacity as a service to end users. Cloud computing systems include multiple servers, or "nodes", operating on a distributed communication network, and each node includes local processing capability and memory. For example, each node of the cloud computing system includes at least one processing device for providing computing capability and a memory for providing storage capacity. Rather than running an application locally or storing data locally, a user may run the application or store data remotely on the cloud or "cluster" of nodes. End users may access cloud-based applications through a web browser or some other software application on a local computer, for example, while the software application and/or data related to the software application are stored and/or executed on the cloud nodes at a remote location. Cloud computing resources are typically allocated to the end user on demand, with the cloud computing system cost corresponding to the actual amount of resources utilized by the end user. [0003] Computing tasks are distributed across multiple nodes of the cloud computing system in the form of a workload. The nodes operate to share processing of the workload. A workload (also referred to as a "kernel") includes a computing job or task that is performed and executed on the cloud of nodes. A workload, which comprises a collection of software or firmware code and any necessary data, includes any application or program or a portion of an application or program that is executed on the cluster of nodes. For example, one exemplary workload is an application that implements one or more algorithms. Exemplary algorithms include, for example, clustering, sorting, classifying, or filtering a dataset. Other exemplary workloads include service-oriented applications that are executed to provide a computing service to an end user. In some embodiments, a workload includes a single application that is cloned and executed on multiple nodes simultaneously. A load balancer distributes requests to be executed with the workload across the cluster of nodes such that the nodes share the processing load associated with the workload. The cluster of nodes combines the results of an execution of the workload to produce a final result. [0004] A workload container, which comprises one or more processors of a node executing a workload container module (e.g., software or firmware code), operates on each node. The workload container is an execution framework for workloads, providing a software environment that initiates and orchestrates the execution of workloads on a cluster of nodes. Workload containers typically provide an execution framework for a particular class of workloads on the cluster of nodes. The workload container configures the associated node to operate as a node of the cloud such that the node executes the workload, shares the results of the workload execution with other nodes of the cloud, and collaborates and communicates with other nodes of the cloud. In one embodiment, the workload container includes application program interfaces (APIs) or XML-based interfaces for interfacing with other nodes as well as with other applications and hardware of the associated node.
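To make the role of the workload container module concrete, the following sketch expresses such an execution framework as a minimal class interface. The class and method names are illustrative assumptions; the disclosure describes the workload container functionally and does not define this API.

```python
# Minimal sketch of a workload container module's role: a selectable code
# module that, when executed on each node, coordinates workload execution.
# All names here are illustrative; the disclosure does not define this API.

from abc import ABC, abstractmethod

class WorkloadContainer(ABC):
    """Execution framework that initiates and orchestrates a workload on a node."""

    @abstractmethod
    def configure_node(self, node_id: str, cluster: list[str]) -> None:
        """Make this node cloud-aware: record its peers and its role."""

    @abstractmethod
    def run(self, workload, input_partition) -> object:
        """Execute this node's share of the workload and return partial results."""

    @abstractmethod
    def exchange(self, partial_result) -> object:
        """Share results with other nodes and combine them into a final result."""
```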
[0005] One exemplary workload container is Apache Hadoop, a Java-based framework that provides a map-reduce framework and a distributed file system (HDFS) for map-reduce workloads. A cluster of nodes operating with the Hadoop workload container typically includes a master node as well as multiple worker nodes. The Hadoop workload container coordinates the assignment of the master or worker status to each node and informs each node that it is operating in a cloud. The master node tracks job (i.e., workload) initiation and completion as well as file system metadata. In the "map" phase of the map-reduce framework, a task or workload is partitioned into multiple portions (i.e., multiple groups of one or more processing threads), and the portions of the workload are distributed to the worker nodes that process the threads and the associated input data. In the "reduce" phase, the output from each worker node is collected and combined to produce a final result or answer. The distributed file system (HDFS) of Hadoop is utilized to store data and to communicate data between the worker nodes. The HDFS file system supports data replication to increase the likelihood of data reliability by storing multiple copies of the data and files.
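The map and reduce phases described above can be illustrated with a toy, single-process model of the partition/map/reduce flow (a word count). This stands in for the distributed behavior only and is not Hadoop's actual API.

```python
# Toy model of the map-reduce flow described above: partition the input,
# "map" each partition as a worker node would, then "reduce" the collected
# outputs into a final answer. Single-process stand-in, not Hadoop's API.

from collections import Counter
from itertools import chain

def partition(records, n_workers):
    """Split the workload's input data into one portion per worker node."""
    return [records[i::n_workers] for i in range(n_workers)]

def map_phase(portion):
    """Worker-side processing: emit (word, 1) pairs for this portion."""
    return [(word, 1) for line in portion for word in line.split()]

def reduce_phase(mapped_outputs):
    """Collect and combine the output from each worker into a final result."""
    totals = Counter()
    for word, count in chain.from_iterable(mapped_outputs):
        totals[word] += count
    return dict(totals)

lines = ["the cloud runs the workload", "the nodes share the workload"]
final = reduce_phase(map_phase(p) for p in partition(lines, n_workers=2))
# final["the"] == 4, final["workload"] == 2
```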
[0006] Setting up or configuring a cluster of nodes in prior art cloud computing platforms is a complex process that involves a steep learning curve. The cloud software and workloads must be individually deployed to each node, and any configuration changes must also be deployed to each node individually. Analyzing the performance of the cluster of nodes and optimizing the cloud set-up involves multiple independent variables and is often time-consuming, requiring ad-hoc interfaces adapted for monitoring and analyzing particular applications. In particular, the cloud operator or engineer must create commands to obtain data about how the workload is running as well as to obtain the actual results of the workload. Additionally, such data is in a format that is specific to the system configuration at hand, and the data must be integrated by the cloud operator or engineer in a form that is suitable for performance analysis. The cloud operator or engineer is required to learn specific details of the cloud mechanism, any networking issues, system administration-related tasks, as well as deployment and data formats of the available performance analysis tools. Further, monitoring and analyzing performance of workloads on the cluster of nodes is complex, time consuming, and dependent on the particular cloud configuration. The cloud operator or engineer is not always privy to all of the configuration and hardware information for the particular cloud system, making accurate performance analysis difficult. [0007] Several cloud computing platforms are available today, including Amazon Web Services (AWS) and OpenStack, for example. Amazon's AWS, which includes Elastic Compute Cloud (EC2), rents a cluster of nodes (servers) to an end user for use as a cloud computing system. AWS allows the user to allocate a cluster of nodes and to execute a workload on the cluster of nodes. AWS limits the user to execute workloads only on Amazon-provided server hardware with various restrictions, such as requiring specific hardware configurations and software configurations. OpenStack allows a user to build and manage a cluster of nodes on user-provided hardware. AWS and OpenStack lack a mechanism for quickly configuring and deploying workload and workload container software to each node, for modifying network parameters, and for aggregating performance data from all nodes of the cluster. [0008] A known method of testing the performance of a particular local processor includes creating a synthetic, binary code based on user-specified parameters that can be executed by the local processor. However, generation of the binary synthetic code requires the user to hard-code the user-specified parameters, requiring significant development time and prior knowledge of the architecture of the target processor. Such hard-coded synthetic code must be written to target a particular instruction set architecture (ISA) (e.g., x86) and a particular microarchitecture of the targeted processor. Instruction set architecture refers to the component of computer architecture that identifies data types/formats, instructions, data block size, processing registers, memory addressing modes, memory architecture, interrupt and exception handling, I/O, etc. Microarchitecture refers to the component of computer architecture that identifies the data paths, data processing elements (e.g., logic gates, arithmetic logic units (ALUs), etc.), data storage elements (e.g., registers, cache, etc.), etc., and how the processor implements the instruction set architecture. As such, the synthetic code must be re-engineered with modified or new hard-coded parameters and instructions to execute variations of an instruction set architecture and different microarchitectures of other processor(s). Accordingly, such hard-coded synthetic code is not suitable for testing multiple nodes of a cloud computing system. [0009] Another method of testing the performance of a local processor is to execute an industry-standard workload or trace, such as a workload provided by the Standard Performance Evaluation Corporation (SPEC), to compare the processor's performance with a performance benchmark. However, executing the entire industry-standard workload often requires large amounts of simulation time. Extracting relevant, smaller traces from the workload for execution by the processor may reduce simulation time but also requires extra engineering effort to identify and extract the relevant traces. Further, the selection of an industry-standard workload, or the extraction of smaller traces from a workload, must be repeated for distinct architectural configurations of the processor(s). [0010] Current cloud systems that deliver computing capacity and storage capacity as a service to end users lack a mechanism to change the boot-time configuration of each node of the cluster of nodes of the cloud system. For example, boot-time configuration changes must be hard-coded onto each node of the cloud by an engineer or programmer in order to modify boot-time parameters of the nodes, which requires considerable time and is cumbersome. Further, the engineer must have detailed knowledge of the hardware and computer architecture of the cluster of nodes prior to writing the configuration code. [0011] Typical cloud systems that deliver computing capacity and storage capacity as a service to end users lack a mechanism to allow a user to specify and to modify a network configuration of the allocated cluster of nodes.
In many cloud systems, users can only request a general type of nodes and have little or no direct control over the network topology, i.e., the physical and logical network connectivity of the nodes, and the network performance characteristics of the requested nodes. Amazon AWS, for example, allows users to select nodes that are physically located in a same general region of the country or world (e.g., Eastern or Western United States, Europe, etc.), but the network connectivity of the nodes and the network performance characteristics of the nodes are not selectable or modifiable. Further, some of the selected nodes may be physically located far away from other selected nodes, despite being in the same general region of the country or even in the same data center. For example, the nodes allocated by the cloud system may be located on separate racks in a distributed data center that are physically far apart, resulting in decreased or inconsistent network performance between nodes. [0012] Similarly, in typical cloud systems, the end user has limited or no control over the actual hardware resources of the node cluster. For example, when allocating nodes, the user can only request nodes of a general type. Each available type of node may be classified by the number of the CPU(s) of the node, the available memory, available disk space, and general region of the country or world where the node is located. However, the allocated node may not have the exact hardware characteristics as the selected node type. Selectable node types are coarse classifications. For example, the node types may include small, medium, large, and extra large corresponding to the amount of system memory and disk space as well as the number of processing cores of the node. However, even with nodes selected having a same general type, the actual computing capacity and storage capacity of the nodes allocated by the system may vary. For example, the available memory and disk space as well as operating frequency and other characteristics may vary or fall within a range of values. For example, a "medium" node may include any node having a system memory of 1500 MB to 5000 MB and storage capacity of 200 GB to 400 GB. As such, the user is not always privy to the actual hardware configuration of the allocated nodes. Further, even among nodes having the same number of processors and memory/disk space, other hardware characteristics of these nodes may vary. For example, similar nodes vary based on the operating frequency of the nodes, the size of the cache, a 32-bit architecture versus a 64-bit architecture, the manufacturer of the nodes, the instruction set architecture, etc., and the user has no control over these characteristics of the selected nodes. [0013] Often the user does not have a clear understanding of the specific hardware resources required by his application or workload. The difficulty in setting up the node cluster to execute the workload results in the user having limited opportunity to try different hardware configurations. Combined with the user's lack of knowledge of the actual hardware resources of the allocated nodes, this often results in unnecessary user costs for under-utilized hardware resources. Various monitoring tools are available that can measure the CPU, memory, and disk and network utilization of a single physical processing machine. However, current cloud systems do not provide a mechanism to allow a user to deploy these monitoring tools to the nodes of the cluster to monitor hardware usage.
As such, actual hardware utilization during workload execution is unknown to the user. Most public cloud services offer an accounting mechanism that can provide basic information about the cost of the requested hardware resources used by the user while running a workload. However, such mechanisms only provide basic information about the costs of the requested hardware resources, and do not identify the actual hardware resources used during workload execution. [0014] In many cloud systems, a limited number of configuration parameters are available to the user for adjusting and improving a configuration of the node cluster. For example, a user may only be able to select different nodes having different general node types to alter the cloud configuration. Further, each configuration change must be implemented manually by the user by selecting different nodes for the node cluster and starting the workload with the different nodes. Such manual effort to apply configuration changes and to test the results is costly and time consuming. Further, the various performance monitoring tools that are available for testing node performance are typically adapted for a single physical processing machine, and current cloud systems lack a mechanism to allow a user to deploy these monitoring tools to the nodes of the cluster to test performance of the node cluster with the different configurations. [0015] Therefore, a need exists for methods and systems for automating the creation, deployment, provision, execution, and data aggregation of workloads on a node cluster of arbitrary size. A need further exists for methods and systems to quickly configure and deploy workload and workload container software to each node and to aggregate and analyze workload performance data from all nodes of the cluster. A need further exists for methods and systems to test the performance of multiple nodes of a cloud computing system and to provide automated configuration tuning of the cloud computing system based on the monitored performance. A need further exists for methods and systems to generate retargetable synthetic test workloads for execution on the cloud computing system for testing node processors having various computer architectures. A need further exists for methods and systems that provide for the modification of a boot-time configuration of nodes of a cloud computing system. A need further exists for methods and systems that facilitate the modification of a network configuration of the cluster of nodes of the cloud system. A need further exists for methods and systems that allow for the automated selection of suitable nodes for the cluster of nodes based on a desired network topology, a desired network performance, and/or a desired hardware performance of the cloud system. A need further exists for methods and systems to measure the usage of hardware resources of the node cluster during workload execution and to provide hardware usage feedback to a user and/or automatically modify the node cluster configuration based on the monitored usage of the hardware resources. SUMMARY OF EMBODIMENTS OF THE DISCLOSURE [0016] In an exemplary embodiment of the present disclosure, a computing configuration method carried out by one or more computing devices is provided. The method includes configuring, based on a plurality of user selections received via a user interface, a cluster of nodes for a computing system such that processing of a workload is distributed across the cluster of nodes. 
The configuring includes: selecting the cluster of nodes for the computing system from a plurality of available nodes; selecting a workload container module for operation on each node of the cluster of nodes, the workload container module comprising a selectable code module that when executed by each node is operative to coordinate execution of a workload; and selecting a workload for execution with the workload container module on the cluster of nodes. [0017] Among other advantages, some embodiments may allow for the selection, configuration, and deployment of a cluster of nodes, a workload, a workload container, and a network configuration via a user interface. In addition, some embodiments may allow for the control and adjustment of configuration parameters, thereby enabling performance analysis of the computing system under varying characteristics of the nodes, network, workload container, and/or workload. Other advantages will be recognized by those of ordinary skill in the art. [0018] In another exemplary embodiment of the present disclosure, a computing configuration system is provided including a node configurator operative to select a cluster of nodes from a plurality of available nodes for a computing system. The cluster of nodes is operative to share processing of a workload. The system further includes a workload container configurator operative to select a workload container module for operation on each node of the cluster of nodes. The workload container module includes a selectable code module that when executed by each node is operative to coordinate execution of the workload. The system further includes a workload configurator operative to select a workload for execution with the selected workload container module on the cluster of nodes. [0019] In yet another exemplary embodiment of the present disclosure, a method of configuring a computing system carried out by one or more computing devices is provided. The method includes selecting, based on a user selection received via a user interface, a workload container module from a plurality of available workload container modules for operation on each node of a cluster of nodes of the computing system. The selected workload container module includes a selectable code module that when executed by each node is operative to coordinate execution of a workload on the cluster of nodes. The method further includes configuring each node of the cluster of nodes with the selected workload container module for executing the workload such that processing of the workload is distributed across the cluster of nodes. [0020] In still another exemplary embodiment of the present disclosure, a computing configuration system is provided including a workload container configurator operative to receive user input and to select a workload container module based on the user input from a plurality of available workload container modules. The selected workload container module includes a selectable code module that when executed by a cluster of nodes of a computing system is operative to coordinate execution of a workload. The system further includes a node configurator operative to configure each node of the cluster of nodes of the computing system with the selected workload container module for executing the workload such that processing of the workload is distributed across the cluster of nodes.
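By way of illustration only, the select-and-deploy flow recited in the embodiments above can be pictured as a small orchestration layer. The following Python sketch is not part of the disclosure; every class and method name in it (Configurator, select_cluster, deploy, and so on) is invented for illustration:

```python
# Minimal, hypothetical sketch of the configure-then-deploy flow described
# above: select a cluster of nodes, select a workload container module, and
# select a workload. Names and structure are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    name: str
    node_type: str   # e.g., "small", "medium", "large"
    memory_mb: int
    cores: int


@dataclass
class Configurator:
    available_nodes: List[Node]
    cluster: List[Node] = field(default_factory=list)
    workload_container: str = ""
    workload: str = ""

    def select_cluster(self, count: int, node_type: str) -> None:
        # Choose a subset of the available nodes matching the requested type.
        matching = [n for n in self.available_nodes if n.node_type == node_type]
        self.cluster = matching[:count]

    def select_workload_container(self, container: str) -> None:
        # The container module (e.g., a Hadoop-style framework) coordinates
        # execution of the workload on every node of the cluster.
        self.workload_container = container

    def select_workload(self, workload: str) -> None:
        self.workload = workload

    def deploy(self) -> None:
        # Stand-in for installing the container and workload on each node.
        for node in self.cluster:
            print(f"installing {self.workload_container} and "
                  f"{self.workload} on {node.name}")
```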
[0021] In another exemplary embodiment of the present disclosure, a method of configuring a computing system carried out by one or more computing devices is provided. The method includes selecting a cluster of nodes from a plurality of available nodes for the computing system that are operative to share processing of a workload. The method further includes modifying an operational parameter of a same workload container module of each node of the cluster of nodes based on user input received via a user interface. The workload container module includes a code module that when executed by each node of the cluster of nodes is operative to coordinate execution of the workload with the cluster of nodes based on the operational parameter. The operational parameter is associated with at least one of a read/write operation, a file system operation, a network socket operation, and a sorting operation. [0022] In yet another exemplary embodiment of the present disclosure, a computing configuration system is provided including a node configurator operative to select a cluster of nodes from a plurality of available nodes for a computing system. The cluster of nodes is operative to share processing of the workload. The system further includes a workload container configurator operative to modify an operational parameter of a same workload container module of each node of the cluster of nodes based on user input received via a user interface. The workload container module includes a code module that when executed by each node of the cluster of nodes is operative to coordinate execution of the workload with the cluster of nodes based on the operational parameter. The operational parameter is associated with at least one of a read/write operation, a file system operation, a network socket operation, and a sorting operation.

BRIEF DESCRIPTION OF THE DRAWINGS

[0023] The invention will be more readily understood in view of the following description when accompanied by the figures below, wherein like reference numerals represent like elements: [0024] FIG. 1 is a block diagram of a cloud computing system in accordance with an embodiment including a cluster of nodes operating on a communication network, a control server in communication with the cluster of nodes, and a configurator of the control server; [0025] FIG. 2 is a block diagram of an exemplary node of the cluster of nodes of FIG. 1 including at least one processor and a memory; [0026] FIG. 3 is a block diagram of an exemplary control server of the cloud computing system of FIG. 1 including a configurator operative to configure the cloud computing system of FIG. 1; [0027] FIG. 4 is a flow chart of an exemplary method of operation of the configurator of FIG. 3 for configuring a cloud computing system; [0028] FIG. 5 is a flow chart of another exemplary method of operation of the configurator of FIG. 3 for configuring a cloud computing system; [0029] FIG. 6 is a flow chart of another exemplary method of operation of the configurator of FIG. 3 for configuring a cloud computing system; [0030] FIG. 7 illustrates an exemplary user interface provided by the configurator of FIG. 3 including an Authentication and Settings Library module for facilitating user access authentication; [0031] FIG. 8 illustrates an Instances module of the exemplary user interface of FIG. 7 including an Instances tab for facilitating the selection of the cluster of nodes of FIG. 1; [0032] FIG. 9 illustrates an Instance Types tab of the Instances module of FIG.
8 for facilitating the selection of a node type for nodes of the cluster of nodes of FIG. 1; [0033] FIG. 10 illustrates an Other Instance Settings tab of the Instances module of FIG. 8 for facilitating the configuration of boot-time parameters of one or more nodes of the cluster of nodes of FIG. 1; [0034] FIG. 11 illustrates a Network Settings Wizard of a Network Configuration module of the exemplary user interface of FIG. 7 including a Delay tab for facilitating the implementation of a network delay on the communication network of FIG. 1; [0035] FIG. 12 illustrates a Packet Loss tab of the Network Configuration module of FIG. 11 for facilitating the adjustment of a packet loss rate on the communication network of FIG. 1; [0036] FIG. 13 illustrates a Packet Duplication tab of the Network Configuration module of FIG. 11 for facilitating the adjustment of a packet duplication rate on the communication network of FIG. 1; [0037] FIG. 14 illustrates a Packet Corruption tab of the Network Configuration module of FIG. 11 for facilitating the adjustment of a packet corruption rate on the communication network of FIG. 1; [0038] FIG. 15 illustrates a Packet Reordering tab of the Network Configuration module of FIG. 11 for facilitating the adjustment of a packet reordering rate on the communication network of FIG. 1; [0039] FIG. 16 illustrates a Rate Control tab of the Network Configuration module of FIG. 11 for facilitating the adjustment of a communication rate on the communication network of FIG. 1; [0040] FIG. 17 illustrates a Custom Commands tab of the Network Configuration module of FIG. 11 for facilitating the adjustment of network parameters on the communication network of FIG. 1 based on custom command strings; [0041] FIG. 18 illustrates a Workload Container Configuration module of the exemplary user interface of FIG. 7 including a Hadoop tab for facilitating the selection of a Hadoop workload container; [0042] FIG. 19 illustrates the Hadoop tab of the Workload Container Configuration module of FIG. 18 including an Extended tab for facilitating the configuration of operational parameters of the Hadoop workload container; [0043] FIG. 20 illustrates the Hadoop tab of the Workload Container Configuration module of FIG. 18 including a Custom tab for facilitating the configuration of operational parameters of the Hadoop workload container based on custom command strings; [0044] FIG. 21 illustrates a Custom tab of the Workload Container Configuration module of FIG. 18 for facilitating the selection of a custom workload container; [0045] FIG. 22 illustrates a Workload Configuration module of the exemplary user interface of FIG. 7 including a Workload tab for facilitating the selection of a workload for execution on the cluster of nodes of FIG. 1; [0046] FIG. 23 illustrates a Synthetic Kernel tab of the Workload Configuration module of FIG. 22 for facilitating the configuration of a synthetic test workload; [0047] FIG. 24 illustrates an MC-Blaster tab of the Workload Configuration module of FIG. 22 for facilitating the configuration of a Memcached workload; [0048] FIG. 25 illustrates a Batch Processing module of the exemplary user interface of FIG. 7 for facilitating the selection and configuration of a batch sequence for execution on the cluster of nodes of FIG. 1; [0049] FIG. 26 illustrates a Monitoring module of the exemplary user interface of FIG. 7 including a Hadoop tab for facilitating the configuration of a Hadoop data monitoring tool; [0050] FIG.
27 illustrates a Ganglia tab of the Monitoring module of FIG. 26 for facilitating the configuration of a Ganglia data monitoring tool; [0051] FIG. 28 illustrates a SystemTap tab of the Monitoring module of FIG. 26 for facilitating the configuration of a SystemTap data monitoring tool; [0052] FIG. 29 illustrates an I/O Time tab of the Monitoring module of FIG. 26 for facilitating the configuration of virtual memory statistics (VMStat) and input/output statistics (IOStat) data monitoring tools; [0053] FIG. 30 illustrates a Control and Status module of the exemplary user interface of FIG. 7 for facilitating the deployment of the system configuration to the cluster of nodes of FIG. 1 and for facilitating the aggregation of data monitored by the monitoring tools of FIGS. 26-29; [0054] FIG. 31 is another block diagram of the cloud computing system of FIG. 1 illustrating a web-based data aggregator of the configurator of FIG. 1; [0055] FIG. 32 illustrates an exemplary table listing a plurality of user-defined workload parameters for generating a synthetic test workload; [0056] FIG. 33 is a block diagram of an exemplary synthetic test workload system including a synthesizer operative to generate the synthetic test workload and a synthetic workload engine of a node operative to activate and execute at least a portion of the synthetic test workload; [0057] FIG. 34 is a flow chart of an exemplary method of operation of the configurator of FIG. 3 for configuring a cloud computing system with at least one of an actual workload and a synthetic test workload; [0058] FIG. 35 is a flow chart of an exemplary method of operation of the configurator of FIG. 3 for configuring a cloud computing system with a synthetic test workload; [0059] FIG. 36 is a flow chart of an exemplary method of operation of the configurator of FIG. 3 for selecting a boot-time configuration of at least one node of the cluster of nodes of FIG. 1; [0060] FIG. 37 is a flow chart of an exemplary method of operation of a node of the cluster of nodes of FIG. 1 for modifying at least one boot-time parameter of the node; [0061] FIG. 38 is a flow chart of an exemplary method of operation of the cloud computing system of FIG. 1 for modifying a boot-time configuration of one or more nodes of the cluster of nodes of FIG. 1; [0062] FIG. 39 is a flow chart of an exemplary method of operation of the configurator of FIG. 3 for modifying a communication network configuration of at least one node of the cluster of nodes of FIG. 1; [0063] FIG. 40 is a flow chart of an exemplary method of operation of the configurator of FIG. 3 for selecting a cluster of nodes for a cloud computing system based on a network configuration of an emulated node cluster; [0064] FIG. 41 is a flow chart of another exemplary method of operation of the configurator of FIG. 3 for selecting and configuring a cluster of nodes for a cloud computing system based on a network configuration of an emulated node cluster; [0065] FIG. 42 illustrates an exemplary data file that identifies a plurality of communication network characteristics of a node cluster; [0066] FIG. 43 is a flow chart of an exemplary method of operation of the configurator of FIG. 3 for selecting the cluster of nodes of FIG. 1; [0067] FIG. 44 is a flow chart of another exemplary method of operation of the configurator of FIG. 3 for selecting the cluster of nodes of FIG. 1; [0068] FIG. 45 is a flow chart of an exemplary method of operation of the configurator of FIG.
3 for selecting a hardware configuration of the cluster of nodes of FIG. 1; [0069] FIG. 46 is a flow chart of another exemplary method of operation of the configurator of FIG. 3 for selecting a hardware configuration of the cluster of nodes of FIG. 1; [0070] FIG. 47 is a flow chart of an exemplary method of operation of the configurator of FIG. 3 for selecting configuration parameters for the cluster of nodes of FIG. 1 based on monitored performance characteristics of the cluster of nodes; and [0071] FIG. 48 is a flow chart of another exemplary method of operation of the configurator of FIG. 3 for selecting configuration parameters for the cluster of nodes of FIG. 1 based on monitored performance characteristics of the cluster of nodes.

DETAILED DESCRIPTION

[0072] While the embodiments disclosed herein are described with respect to a cloud computing system, the methods and systems of the present disclosure may be implemented with any suitable computing system that includes multiple nodes cooperating to execute a workload. [0073] As referenced herein, a node of a computing system includes at least one processing device and a memory accessible by the at least one processing device. A node may also be referred to as a server, a virtual server, a virtual machine, an instance, or a processing node, for example. [0074] FIG. 1 illustrates an exemplary cloud computing system 10 according to various embodiments that is configured to deliver computing capacity and storage capacity as a service to end users. Cloud computing system 10 includes a control server 12 operatively coupled to a cluster of nodes 14. The cluster of nodes 14 is connected to a distributed communication network 18, and each node 16 includes local processing capability and memory. In particular, each node 16 includes at least one processor 40 (FIG. 2) and at least one memory 42 (FIG. 2) that is accessible by the processor 40. Communication network 18 may include any suitable network type and networking protocol, such as an internet protocol (IP) format including Transmission Control Protocol/Internet Protocol (TCP/IP) or User Datagram Protocol (UDP), an Ethernet network, a serial network, or another local or wide area network (LAN or WAN), for example. [0075] As described herein, nodes 16 are selected by control server 12 from a cloud of multiple available nodes 16 connected on communication network 18 to designate the cluster of nodes 14. The available nodes 16 are provided on one or more server storage racks in a data center, for example, and include a variety of hardware configurations. In one embodiment, available nodes 16 from multiple data centers and/or other hardware providers are accessible by control server 12 for selection and configuration as a cluster of nodes 14 for a cloud computing system 10. For example, one or more third-party data centers (e.g., Amazon Web Services, etc.) and/or user-provided hardware may be configured for cloud computing by control server 12. In one example, thousands of nodes 16 may be available for selection and configuration by control server 12, although any number of nodes 16 may be available. While five nodes 16 are illustrated in FIG. 1, any suitable number of nodes 16 may be selected for cloud computing system 10. Control server 12 includes one or more computing devices, illustratively server computers, each including one or more processors. In the illustrated embodiment, control server 12 is a dedicated server computer 12 physically separate from node cluster 14.
In one embodiment, control server 12 is physically remote from the data center housing the available nodes 16. Control server 12 alternatively may be one or more nodes 16 of the selected cluster of nodes 14. Control server 12 serves as a cloud computing configuration system operative to allocate and configure nodes 16, to start a workload on nodes 16, to collect and report performance data, etc., as described herein. [0076] Control server 12 illustratively includes a configurator 22, a load generator 24, and a load balancer 26. As referenced herein, configurator 22, load generator 24, and load balancer 26 comprise one or more processors that execute software or firmware code stored in an internal or external memory accessible by the one or more processors. The software/firmware code contains instructions corresponding to the functions of configurator 22, load generator 24, and load balancer 26 that, when executed by the one or more processors, cause the one or more processors to perform the functions described herein. Configurator 22, load generator 24, and/or load balancer 26 may alternatively include application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), hardwired logic, or combinations thereof. Configurator 22 is operative to select and configure one or more nodes 16 for inclusion in the cluster of nodes 14, to configure parameters of communication network 18, to select, configure, and deploy a workload container module and a workload for execution on the cluster of nodes 14, and to gather and analyze performance data associated with the execution of the workload, as described herein. Configurator 22 is operative to generate configuration files 28 that are provided to and processed at nodes 16 for configuring software on nodes 16 and at least one configuration file 30 provided to load generator 24 for providing workload request parameters to load generator 24. [0077] Load generator 24 is operative to generate requests that serve as input used by node cluster 14 for workload execution. In other words, node cluster 14 executes the workload based on the requests and the input parameters and data provided with the requests. In one embodiment, the requests from load generator 24 are initiated by a user. For example, a user or customer may request (e.g., via user interface 200) a search or a sort operation for a specified search term or dataset, respectively, and load generator 24 generates a corresponding search or sort request. In one embodiment, configurator 22 generates a configuration file 30 that describes the user requests received via user interface 200. Nodes 16 execute the workload using the identified terms to be searched or the dataset to be sorted. Load generator 24 may generate other suitable requests depending on the type of workload to be executed. Load balancer 26 is operative to distribute the requests provided by load generator 24 among nodes 16 to direct which nodes 16 execute which requests. Load balancer 26 is also operative to divide a request from load generator 24 into parts and to distribute the parts to nodes 16 such that multiple nodes 16 operate in parallel to execute the request. [0078] Configurator 22 is illustratively web-based such that a user may access configurator 22 over the Internet, although configurator 22 may be accessed over any suitable network or communication link. An exemplary user's computer 20 is illustrated in FIG.
1 including a display 21, a processor 32 (e.g., central processing unit (CPU)), and a memory 34 accessible by processor 32. Computer 20 may include any suitable computing device such as a desktop computer, a laptop, a mobile device, a smartphone, etc. A web browser 36, which includes software or firmware code, is run on computer 20 and is used to access a graphical user interface provided by configurator 22 and to display the graphical user interface on display 21. See, for example, graphical user interface 200 illustrated in FIGS. 7-30. [0079] Various other arrangements of components and corresponding connectivity of cloud computing system 10 that are alternatives to what is illustrated in the figures may be utilized, and such arrangements would remain in accordance with the embodiments disclosed herein. [0080] Referring to FIG. 2, an exemplary node 16 of node cluster 14 of FIG. 1 that is configured by configurator 22 is illustrated according to one embodiment. Node 16 includes at least one processor 40 that is operative to execute software or firmware stored in memory 42. Memory 42 includes one or more physical memory locations and may be internal or external to processor 40. [0081] FIG. 2 illustrates the software (or firmware) code that is loaded onto each node 16, including an operating system 44, a kernel-mode measurement agent 46, a network topology driver 48, a user-mode measurement agent 50, a web application server 52, a workload container module 54, a service oriented architecture runtime agent 56, and a synthetic workload engine 58. In the illustrated embodiment, kernel-mode measurement agent 46 and network topology driver 48 require privilege from operating system 44 to access certain data, such as data from input/output (I/O) devices of node 16, for example. In contrast, user-mode measurement agent 50, web application server 52, workload container module 54, service oriented architecture runtime agent 56, and synthetic workload engine 58 illustratively do not require privilege from operating system 44 to access data or to perform their respective functions. [0082] Operating system 44 manages the overall operation of node 16, including, for example, managing applications, privileges, and hardware resources and allocating processor time and memory usage. Network topology driver 48 is operative to control the network characteristics and parameters of node 16 on communication network 18 (FIG. 1). In one embodiment, network topology driver 48 is operative to change network characteristics associated with node 16 based on a configuration file 28 (FIG. 1) received from configurator 22 (FIG. 1). [0083] A network software stack (not shown) is also stored and executed at each node 16 and includes a network socket for facilitating communication on network 18 of FIG. 1. In the embodiment described herein, the network socket includes a TCP socket that is assigned an address and port number(s) for network communication. In one embodiment, the network software stack utilizes a network driver of the operating system 44. [0084] Kernel-mode measurement agent 46 and user-mode measurement agent 50 are each operative to collect and analyze data for monitoring operations and workload performance at node 16. Kernel-mode measurement agent 46 monitors, for example, the number of processor instructions, processor utilization, the number of bytes sent and received for each I/O operation, as well as other suitable data or combinations thereof.
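Purely as an illustration of the kind of node-level data such measurement agents collect, the following sketch samples utilization counters with the third-party psutil library; the sampling loop, record fields, and log file name are hypothetical assumptions and are not the agents 46, 50 of the disclosure:

```python
# Hypothetical sketch of a user-mode measurement agent: sample utilization
# counters that do not require kernel privileges and append them to a local
# log that a control server could later collect and aggregate.
import json
import time

import psutil


def sample():
    # Collect CPU, memory, disk, and network counters for this node.
    disk = psutil.disk_io_counters()
    net = psutil.net_io_counters()
    return {
        "timestamp": time.time(),
        "cpu_percent": psutil.cpu_percent(interval=1.0),
        "memory_percent": psutil.virtual_memory().percent,
        "disk_read_bytes": disk.read_bytes,
        "disk_write_bytes": disk.write_bytes,
        "net_bytes_sent": net.bytes_sent,
        "net_bytes_recv": net.bytes_recv,
    }


if __name__ == "__main__":
    # One JSON record per sampling interval; ten samples for the sketch.
    with open("node_metrics.log", "a") as log:
        for _ in range(10):
            log.write(json.dumps(sample()) + "\n")
```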
An exemplary kernel-mode measurement agent 46 includes SystemTap software. User-mode measurement agent 50 collects performance data that does not require system privileges from the operating system 44 for access. Examples of this performance data include application-specific logs indicating the start time and completion time of individual sub-tasks, the rate at which such tasks are executed, the amount of virtual memory utilized by the system, the number of input records processed for a task, etc. In one embodiment, agents 46, 50 and/or other monitoring tools are pre-installed on each node 16 and are configured by configurator 22 at each node 16 based on configuration files 28 (FIG. 1). Alternatively, configurator 22 loads configured agents 46, 50 and/or other monitoring tools onto nodes 16 during workload deployment. [0085] Web application server 52 is an application that controls communication between the node 16 and both control server 12 of FIG. 1 and other nodes 16 of node cluster 14. Web application server 52 effects file transfer between nodes 16 and between control server 12 and nodes 16. An exemplary web application server 52 is Apache Tomcat. [0086] Workload container module 54 is also stored in memory 42 of each node 16. As described herein, control server 12 provides workload container module 54 to node 16 based on a user's selection and configuration of the workload container module 54. An exemplary workload container module 54 includes Apache Hadoop, Memcached, Apache Cassandra, or a custom workload container module provided by a user that is not commercially available. In one embodiment, workload container module 54 includes a file system 55 comprising a code module that when executed by a processor manages data storage in memory 42 and the communication of data between nodes 16. An exemplary file system 55 is the Hadoop Distributed File System (HDFS) of the Apache Hadoop workload container. File system 55 supports data replication by storing multiple copies of the data and files in node memory 42. [0087] Other suitable workload container modules may be provided, such as the optional service-oriented architecture (SOA) runtime agent 56 and the optional synthetic workload engine 58. SOA runtime agent 56 is another type of workload container module that when executed by a processor is operative to coordinate execution of a workload. SOA runtime agent 56 performs, for example, service functions such as caching and serving frequently used files (e.g., images, etc.) to accelerate workload operation. An exemplary SOA runtime agent 56 includes Google Protocol Buffers. Synthetic workload engine 58 includes a workload container module that when executed by a processor is operative to activate and execute a synthetic test workload received via configurator 22 (FIG. 1), as described herein. In the illustrated embodiment, synthetic workload engine 58 is tailored for execution with a synthetic test workload rather than for an actual, non-test workload. [0088] Referring to FIG. 3, configurator 22 of control server 12 is illustrated according to one embodiment.
Configurator 22 illustratively includes an authenticator 70, a node configurator 72, a network configurator 74, a workload container configurator 76, a workload configurator 78, a batch processor 80, a data monitor configurator 82, and a data aggregator 84, each comprising the one or more processors 22 of control server 12 executing respective software or firmware code modules stored in memory (e.g., memory 90) accessible by the processor(s) 22 of control server 12 to perform the functions described herein. Authenticator 70 includes processor(s) 22 executing an authentication code module and is operative to authenticate user access to configurator 22, as described herein with respect to FIG. 7. Node configurator 72 includes processor(s) 22 executing a node configuration code module and is operative to select and configure nodes 16 to identify a cluster of nodes 14 having a specified hardware and operational configuration, as described herein with respect to FIGS. 8-10. Network configurator 74 includes processor(s) 22 executing a network configuration code module and is operative to adjust network parameters of communication network 18 of FIG. 1, such as for testing and performance analysis and/or for adjusting system power consumption, as described herein with respect to FIGS. 11-17. Workload container configurator 76 includes processor(s) 22 executing a workload container configuration code module and is operative to select and to configure a workload container module for operation on nodes 16, as described herein with respect to FIGS. 18-21. Workload configurator 78 includes processor(s) 22 executing a workload configuration code module and is operative to select and configure a workload for execution with the selected workload container by nodes 16. Workload configurator 78 illustratively includes a code synthesizer 79 that includes processor(s) 22 executing a synthetic test workload generation code module, and the code synthesizer 79 is operative to generate a synthetic test workload based on user-defined workload parameters, as described herein with respect to FIGS. 23 and 32-35. Batch processor 80 includes processor(s) 22 executing a batch processor code module and is operative to initiate batch processing of multiple workloads wherein multiple workloads are executed in a sequence on node cluster 14, as described herein with respect to FIG. 25. Data monitor configurator 82 includes processor(s) 22 executing a data monitoring configuration code module and is operative to configure monitoring tools that monitor performance data in real time during execution of the workload and collect data, as described herein with respect to FIGS. 26-29. Data aggregator 84 includes processor(s) 22 executing a data aggregation code module and is operative to collect and aggregate the performance data from each node 16 and to generate logs, statistics, graphs, and other representations of the data, as described herein with respect to FIGS. 30 and 31. [0089] Output from configurator 22 is illustratively stored in memory 90 of control server 12. Memory 90, which may be internal or external to the processor(s) of control server 12, includes one or more physical memory locations. Memory 90 illustratively stores the configuration files 28, 30 of FIG. 1 that are generated by configurator 22. Memory 90 also stores log files 98 that are generated by nodes 16 and are communicated to control server 12 following execution of a workload.
As illustrated, an image file 92 of the operating system, an image file 94 of the workload container selected with workload container configurator 76, and an image file 96 of the workload selected or generated with workload configurator 78 are stored in memory 90. In one embodiment, multiple operating system image files 92 are stored in memory 90 such that a user may select an operating system via configurator 22 for installation on each node 16. In one embodiment, a user may upload an operating system image file 92 from a remote memory (e.g., memory 34 of computer 20 of FIG. 1) onto control server 12 for installation on nodes 16. The workload container image file 94 is generated with workload container configurator 76 based on a user's selection and configuration of the workload container module from multiple available workload container modules. In the embodiment described herein, workload container configurator 76 configures the corresponding workload container image file 94 based on user input received via user interface 200 of FIGS. 7-30. Similarly, workload configurator 78 generates and configures workload image file 96 based on a user's selection of a workload from one or more available workloads via user interface 200 of control server 12. Workload image file 96 includes a predefined, actual workload selected by workload configurator 78 based on user input or a synthetic test workload generated by workload configurator 78 based on user input. [0090] In one embodiment, memory 90 is accessible by each node 16 of the node cluster 14, and control server 12 sends a pointer or other identifier to each node 16 of node cluster 14 that identifies the location in memory 90 of each image file 92, 94, 96. Nodes 16 retrieve the respective image files 92, 94, 96 from memory 90 based on the pointers. Alternatively, control server 12 loads image files 92, 94, 96 and the appropriate configuration files 28 onto each node 16 or provides the image files 92, 94, 96 and configuration files 28 to nodes 16 by any other suitable mechanism. [0091] As described herein, configurator 22 is operative to automatically perform the following actions based on user selections and input: allocate the desired resources (e.g., nodes 16); pre-configure the nodes 16 (e.g., network topology, memory characteristics); install the workload container software in each node 16; deploy user-provided workload software and data to the nodes 16; initiate monitoring tools (e.g., Ganglia, SystemTap) and cause performance data to be gathered from each node 16; provide live status updates to the user during workload execution; collect all data requested by the user, including the results of the workload and information gathered by monitoring tools; process, summarize, and display performance data requested by the user; and perform other suitable functions. Further, a user may use configurator 22 to create and deploy sequences of workloads running sequentially or in parallel, as described herein. A user may execute any or all of the workloads repeatedly, while making optional adjustments to the configuration or input parameters during or between the executions. Configurator 22 is also operative to store data on designated database nodes 16 of node cluster 14 based on requests by a user. [0092] FIG. 4 illustrates a flow diagram 100 of an exemplary operation performed by configurator 22 of FIGS. 1 and 3 for configuring a cloud computing system. Reference is made to FIGS. 1 and 3 throughout the description of FIG. 4.
In the illustrated embodiment, configurator 22 configures node cluster 14 of FIG. 1 according to the flow diagram 100 of FIG. 4 based on a plurality of user selections received via a user interface, such as user interface 200 illustrated in FIGS. 7-30. At block 102, node configurator 72 of configurator 22 selects a cluster of nodes 14 from a plurality of available nodes 16. Each node 16 of the cluster of nodes 14 includes at least one processing device 40 and memory 42 (FIG. 2) and is operative to share processing of a workload with other nodes 16 of the cluster 14, as described herein. In the illustrated embodiment, multiple nodes 16 are available for selection by configurator 22, and configurator 22 selects a subset of the available nodes 16 as the node cluster 14. In one embodiment, configurator 22 selects at least one type of data to be collected from each node 16 of the cluster of nodes 14 based on a user selection received via the user interface, and data aggregator 84 of configurator 22 collects and aggregates the at least one type of data from each node 16 of the cluster of nodes 14, as described herein with respect to FIGS. 26-30. [0093] At block 104, workload container configurator 76 of configurator 22 selects a workload container module for operation on each node 16 of the selected cluster of nodes 14. The workload container module includes a selectable code module that when executed by node 16 is operative to initiate and coordinate execution of a workload. In one embodiment, the workload container module is selected from a plurality of available workload container modules, as described herein with respect to FIG. 18. In one embodiment, configurator 22 modifies at least one operational parameter of the workload container module on each node 16 based on user input received via the user interface. The at least one operational parameter is associated with at least one of a read/write operation, a file system operation, a network socket operation, and a sorting operation, as described herein. [0094] In one embodiment, the selected workload container module is a custom workload container module stored in a memory remote from cloud computing system 10 (e.g., memory 34 of FIG. 1), and configurator 22 loads the custom workload container module stored in the remote memory onto each node 16 of the cluster of nodes 14. For example, a custom workload container module includes a workload container module that is provided by a user and is not commercially available. In one embodiment, the custom workload container module includes a configuration file that contains user-defined instructions and parameters for executing the workload. Exemplary instructions include instructions for testing workload parameters that are uncommon in typical workloads and/or are unique to a specific workload. Other exemplary instructions of a custom workload container module include instructions to redirect the output or log files of the execution to a different location for further analysis. Alternatively, the workload container module includes a commercially available, third-party workload container module, such as Apache Hadoop, Memcached, Apache Cassandra, etc., that is stored at computing system 10 (e.g., memory 90 of FIG. 3) and is available for selection and deployment by configurator 22. [0095] At block 106, workload configurator 78 of configurator 22 selects a workload for execution with the workload container module on the cluster of nodes 14.
The processing of the selected workload is distributed across the cluster of nodes 14, as described herein. In one embodiment, the workload is selected from at least one of an actual workload and a synthetic test workload. One or more actual, pre-compiled workloads are stored in a memory (e.g., memory 34 of FIG. 1) accessible by the processor of control server 12, and configurator 22 loads a selected actual workload onto nodes 16. A synthetic test workload is generated by configurator 22 based on user-defined workload parameters received via user interface 200 and is loaded onto nodes 16, as described herein with respect to FIGS. 23 and 32-35. In one embodiment, configurator 22 adjusts, based on a user input received via user interface 200, at least one communication network parameter to modify or limit the performance of communication network 18 during execution of the selected workload, as described herein with respect to FIGS. 11-17. [0096] In the illustrated embodiment, configurator 22 provides the user interface 200 (FIGS. 7-30) that includes selectable node data (e.g., table 258 of FIG. 8), selectable workload container data (e.g., selectable input 352 of FIG. 18), and selectable workload data (e.g., selectable input 418 of FIG. 22). The cluster of nodes 14 is selected based on a user selection of the selectable node data, the workload container module is selected based on a user selection of the selectable workload container data, and the workload is selected based on a user selection of the selectable workload data. [0097] FIG. 5 illustrates a flow diagram 120 of another exemplary operation performed by configurator 22 of FIGS. 1 and 3 for configuring cloud computing system 10. Reference is made to FIGS. 1 and 3 throughout the description of FIG. 5. At block 122, workload container configurator 76 selects, based on a user selection received via a user interface (e.g., user interface 200), a workload container module from a plurality of available workload container modules for operation on each node 16 of a cluster of nodes 14 of the cloud computing system 10. In the illustrated embodiment, the workload container module is selected based on selectable workload container data, such as inputs 352, 360, 362 of FIG. 18 and inputs 352, 401 of FIG. 21, for example. The selected workload container module includes a selectable code module (e.g., selectable with inputs 360, 362 of FIG. 18 and input 401 of FIG. 21) operative to coordinate execution of a workload. In one embodiment, the plurality of available workload container modules includes a custom workload container module, as described herein. At block 124, node configurator 72 configures each node 16 of the cluster of nodes 14 with the selected workload container module for executing the workload such that processing of the workload is distributed across the cluster of nodes. As described herein, each node 16 includes a processing device 40 and memory 42 and is operative to share processing of the workload with other nodes 16 of the cluster of nodes 14. Configurator 22 installs the selected workload container module on each node 16 of the cluster of nodes 14 and initiates the execution of the workload with the selected workload container module on the cluster of nodes 14. [0098] FIG. 6 illustrates a flow diagram 140 of another exemplary operation performed by configurator 22 of FIGS. 1 and 3 for configuring cloud computing system 10. Reference is made to FIGS. 1 and 3 throughout the description of FIG. 6.
At block 142, node configurator 72 of configurator 22 selects a cluster of nodes 14 from a plurality of available nodes 16 for a cloud computing system 10 that are operative to share processing of a workload. In the illustrated embodiment, the cluster of nodes 14 is selected based on selectable node data, as described herein. [0099] At block 144, workload container configurator 76 modifies an operational parameter of a same workload container module of each node 16 based on user input received via a user interface (e.g., selectable inputs 367 and fields 374, 378, 380 of user interface 200 of FIG. 19). The same workload container module includes a selectable code module that when executed by the node 16 is operative to coordinate execution of a workload based on the operational parameter. The operational parameter is associated with at least one of a read/write operation, a file system operation, a network socket operation, and a sorting operation, as described herein with respect to FIGS. 19 and 20. Configurator 22 modifies the operational parameter(s) prior to deploying the workload container module onto each node 16, or after deployment of the workload container module to each node 16 when updating configurations. The workload container module when executed by each node 16 is operative to coordinate execution of the workload on the cluster of nodes 14 based on the modified operational parameter. In one embodiment, the operational parameter includes a memory buffer size for a read/write operation, a size of a data block transferred during a read/write operation, a number of data blocks stored in the memory 42 of each node 16, a number of processing threads of each node 16 allocated for processing requests for the file system 55, and/or a number of data streams to merge when sorting data. Other suitable operational parameters may be modified, as described with respect to FIGS. 19 and 20. [00100] An exemplary user interface 200 is illustrated in FIGS. 7-30 that provides user access to control server 12 of FIG. 3. User interface 200 is illustratively a web-based, graphical user interface 200 that includes multiple selectable screens configured for display on a display, such as on display 21 of computer 20 (FIG. 1). Other suitable user interfaces may be provided, such as a native user interface application, a command line driven interface, a programmable API, or any other type or combination of interfaces. User interface 200 includes selectable data, such as selectable inputs, fields, modules, tabs, drop-down menus, boxes, and other suitable selectable data, that are linked to and provide input to the components 70-84 of configurator 22. In one embodiment, the selectable data of user interface 200 is rendered so as to be individually selectable. For example, the selectable data is selected by a user with a mouse pointer, by touching a touchscreen of user interface 200, by pressing keys of a keyboard, or by any other suitable selection mechanism. Selected data may result in the data being highlighted or checked, for example, and a new screen, menu, or pop-up window may appear based on selection of some selectable data (e.g., modules, drop-down menus, etc.). [00101] Reference is made to FIGS. 1-3 throughout the description of user interface 200. As illustrated in FIG. 7, user interface 200 includes several selectable modules that, when selected, provide access to configurator 22, thereby allowing user selections and other user input to configurator 22.
In particular, the Authentication and Settings Library module 202 comprises data representing and linked to authenticator 70 of configurator 22. Instances module 204 comprises data representing and linked to node configurator 72 of configurator 22. Network Configuration module 206 comprises data representing and linked to network configurator 74 of configurator 22. Workload Container Configuration module 208 comprises data representing and linked to workload container configurator 76 of configurator 22. Workload Configuration module 210 comprises data representing and linked to workload configurator 78 of configurator 22. Batch Processing module 212 comprises data representing and linked to batch processor 80 of configurator 22. Monitoring module 214 comprises data representing and linked to data monitor configurator 82 of configurator 22. Control and Status module 216 comprises data representing and linked to data aggregator 84 of configurator 22. Components 70-84 of configurator 22 implement their respective functions based on the user selections, data, and other user input provided via modules 202-216 of user interface 200. [00102] Referring to FIG. 7, the Authentication and Settings Library module 202 is selected. Based on user input to module 202, authenticator 70 authenticates user access to configurator 22 and loads previously saved system configurations. Authenticator 70 grants a user access to configurator 22 by confirming credential data entered in the form of an access key, a secret key, and/or an EC2 key pair in respective fields 220, 222, 224. In the illustrated embodiment, the EC2 key pair of field 224 provides root or initial access to newly selected nodes 16 when using module 202 to access the Amazon Web Services cloud platform. Authenticator 70 loads a previously saved system configuration from a system configuration file (e.g., stored on the user's computer 20 or control server 12 of FIG. 1) based on user selection of input 238. The system configuration file includes workload and workload container configurations, node 16 and network settings information, data monitoring/collection settings for cloud computing system 10, and all other configuration information associated with a system configuration previously saved with configurator 22. Loading a previously saved system configuration file updates configurator 22 with the configuration information from the system configuration file. The system configuration file illustratively has a JSON file format, although other suitable formats may be provided. After loading the system configuration file, the loaded system configuration may be modified via the modules of user interface 200. Selection of input 240 causes authenticator 70 to save a current system configuration of configurator 22 to a file. The authentication data may be included in the saved system configuration file based on selection of selection box 242. [00103] While the system configuration file is identified and loaded onto control server 12 via a web-based user interface 200, other suitable remote method invocation (RMI) mechanisms may be used to obtain the system configuration file. For example, the system configuration file may be passed directly to control server 12 by an Apache Hypertext Transfer Protocol (HTTP) server, an Apache Tomcat server, a Tomcat servlet using the RMI mechanism, or a custom application (e.g., a command line utility) that uses the RMI mechanism.
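As an illustration of the JSON format noted in paragraph [00102], a system configuration file might resemble the sketch below; every key name and value here is invented for illustration and is not specified by the disclosure:

```python
# Hypothetical example of a JSON-format system configuration file and of
# the save/load round trip the settings library performs conceptually.
import json

example_config = {
    "nodes": {"count": 5, "instance_type": "large"},
    "network": {"delay_ms": 10, "packet_loss_percent": 0.1},
    "workload_container": {"name": "hadoop"},
    "workload": {"name": "sort", "input_records": 1_000_000},
    "monitoring": {"tools": ["ganglia", "systemtap"]},
}

# Save the current configuration to a file...
with open("system_config.json", "w") as f:
    json.dump(example_config, f, indent=2)

# ...and load it back, as when a previously saved configuration is selected.
with open("system_config.json") as f:
    loaded = json.load(f)
assert loaded == example_config
```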
[00104] A settings library 226 provides a table or list of previously created system configuration files that are available for selection and execution via selectable inputs 227. The selection of input 228 causes authenticator 70 to update the modules 202-216 with configuration information from the system configuration file selected in library 226. A current system configuration (e.g., configured via modules 202-216) is saved to a file and added to library 226 based on selection of input 230, and a system configuration file is deleted from library 226 based on selection of input 234. Selection of inputs 232 and 236 causes authenticator 70 to upload a system configuration file from a local computer (e.g., computer 20 of FIG. 1) to library 226 or to download a system configuration file from a remote computer (e.g., via the Internet) to library 226, respectively. Library 226 allows one or more previously used system configurations to be loaded and executed quickly. The system configuration files of library 226 may be selected and executed separately, in parallel, or in a sequence on cloud computing system 10. For example, multiple system configuration files may be provided in library 226 for execution in a batch sequence, wherein configurator 22 automatically deploys each selected system configuration in sequence to execute the workload(s) with each system configuration. In the illustrated embodiment, the system configuration is deployed to nodes 16 via the Control and Status module 216 of FIG. 30, as described herein. The deployment of the system configuration involves configurator 22 configuring the cloud computing system 10 with the settings, software, and workload information associated with the system configuration file, as described herein with reference to FIG. 30. As described herein, configurator 22 illustratively generates one or more configuration files 28 that are routed to each node 16 for configuring the respective nodes 16. The configuration files 28 deployed to nodes 16 include all configuration information contained in the system configuration file loaded via module 202 plus any additional configuration changes made via modules 202-216 after loading the system configuration file. [00105] Referring to FIG. 8, the Instances module 204 is selected for configuring the number and characteristics of nodes 16. Based on user input to module 204, node configurator 72 identifies and selects a cluster of nodes 14 having a specified hardware and operational configuration. Instances module 204 includes an Instances tab 250, an Instance Types tab 252, and an Other Instance Settings tab 254. Under the Instances tab 250 selected in FIG. 8, the number of desired nodes 16 for inclusion in node cluster 14 is entered in field 256. Node configurator 72 generates a default list of nodes 16, each having a specific hardware configuration, in table 258 upon user selection of the desired number of nodes 16 with field 256. Table 258 provides a list and a configuration description of the cluster of nodes 14 of FIG. 1. Table 258 includes several descriptive fields for each node 16, including the node number and name, the instance (node) type, the memory capacity, the number of core processors (e.g., CPUs), the storage capacity, the quota, the receive/transmit quota, and the receive/transmit cap. The instance type generally describes the relative size and compute power of the node, illustratively selected from micro, small, medium, large, x-large, 2x-large, 4x-large, etc. (see FIG. 9).
In the exemplary table 258 of FIG. 8, each node 16 is a large type with a memory capacity of 7680 megabytes (MB), a storage capacity of 850 gigabytes (GB), and four core processors. Node configurator 72 selects nodes 16 based on the user selection of selectable node data, illustratively selection boxes 259 and selectable inputs 262. The type of each node 16 is changeable based on selection of a node 16 of table 258 (e.g., using inputs 262 or by checking the corresponding selection boxes 259) and selecting the edit instance type input 260, which causes Instance Types tab 252 to be displayed for the selected node 16. Referring to FIG. 9, table 264 comprises a list of the types of nodes 16 that are available for selection (i.e., the available server hardware) for use in the node cluster 14. One or more nodes 16 of table 264 are selected with selectable inputs 265 for replacing the node 16 selected in table 258 of FIG. 8. In one embodiment, the fields of table 264 (e.g., Memory, VCPUs, Storage, etc.) are modifiable by a user to further identify desired hardware performance characteristics of the selected nodes 16. Fewer or additional types of nodes 16 may be available for selection in table 264, depending on available server hardware. In the illustrated embodiment, multiple nodes 16 are available for each node type listed in table 264 for adding to node cluster 14. [00106] Referring to FIG. 10, node configurator 72 adjusts the boot-time configuration of each node 16 based on user input provided in the Other Instance Settings tab 254 of user interface 200. The boot-time configuration includes one or more boot-time parameters that are applied to individual nodes 16 or groups of nodes 16, or to the entire node cluster 14. Boot-time parameters such as the computing capacity, system memory capacity, and/or storage capacity of each node 16 are limited or constrained by node configurator 72 based on user inputs to fields 268, 270, 272, 274 such that the respective node 16 operates at less than a maximum capacity. The default boot-time parameters are selected based on user selection of inputs 269, and customized boot-time parameters are selected based on user selection of inputs 271. In the illustrated embodiment, the maximum setting of each adjustable parameter is the default, but a user may adjust each parameter upon selecting the "Custom" option with input 271 and entering a configuration setting into the respective field 268, 270, 272, 274. [00107] In the illustrated embodiment, the number of processing cores of a node 16 is adjustable with field 268. For example, if the node 16 selected in table 258 of Instances tab 250 (FIG. 8) has four processing cores, the number of processing cores that are enabled during workload execution may be reduced to one, two, or three cores via field 268, thereby "hiding" one or more processing cores of the selected node 16 from the operating system 44 (FIG. 2) during workload execution. The visible system memory size is adjustable based on inputs to fields 270, 272, i.e., the system memory that is accessible by operating system 44 (FIG. 2). For example, if the node 16 selected in table 258 of Instances tab 250 (FIG. 8) has a memory capacity of 2048 MB, the "visible" memory (e.g., random access memory) enabled during workload execution may be reduced to less than 2048 MB, thereby "hiding" a portion of the memory from the operating system 44 (FIG. 2) during workload execution.
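On a Linux-based node, one plausible realization of such boot-time limits is the kernel's maxcpus= and mem= boot parameters; the disclosure does not prescribe a particular mechanism, and the helper function below is a hypothetical sketch under that assumption:

```python
# Hypothetical sketch: compose a Linux kernel command line that "hides"
# processing cores and memory from the operating system at boot, in the
# spirit of the boot-time limits described in paragraph [00107].
def build_kernel_cmdline(visible_cores=None, visible_mem_mb=None, extra=""):
    params = []
    if visible_cores is not None:
        # maxcpus= limits how many cores the kernel brings online at boot.
        params.append(f"maxcpus={visible_cores}")
    if visible_mem_mb is not None:
        # mem= caps the amount of physical memory the kernel will use.
        params.append(f"mem={visible_mem_mb}M")
    if extra:
        # Additional user-supplied boot arguments, akin to field 276.
        params.append(extra)
    return " ".join(params)


# A four-core, 2048 MB node constrained to two cores and 1024 MB:
print(build_kernel_cmdline(visible_cores=2, visible_mem_mb=1024))
# -> "maxcpus=2 mem=1024M"
```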
Additional workload arguments or instructions are applied with field 274 to adjust additional boot-time parameters. The number of arguments of the workload may be increased or decreased based on a number entered into field 274. For example, a subset of the instructions of the workload is selectable for execution with field 274, thereby hiding the remaining instructions from operating system 44 (FIG. 2). Further, a node 16 having a 64-bit architecture is configurable based on input to field 274 such that it operates in a 32-bit mode wherein only 32 bits are visible to operating system 44. Additional boot-time parameters may be entered in field 276. In one embodiment, instructions or code are manually entered in field 276 by a user to provide additional cloud configuration settings. For example, the master node 16 for a map-reduce workload may be specified via field 276 such that a specific node 16 operates as master upon booting. In one embodiment, limiting the operation of one or more nodes 16 with node configurator 72 is used to test performance of cloud computing system 10, as described herein. In the illustrated embodiment, the boot-time configuration settings specified in FIG. 10 are provided in a boot-time configuration file 28 (FIG. 3) that is provided by node configurator 72 to each node 16 for adjusting the boot-time configuration of the respective nodes 16, as described herein with respect to FIGS. 36-38. [00108] Configurator 22 generates the exemplary Network Settings Wizard window 280 illustrated in FIGS. 11-17 based on the user selection of the Network Configuration module 206 of FIG. 7. Referring to FIG. 11, Network Settings Wizard 280 provides multiple global network settings tabs each including selectable data for adjusting network parameters of one or more nodes 16. The adjustable network parameters include network delay via tab 282, packet loss via tab 284, packet duplication via tab 286, packet corruption via tab 288, packet reordering via tab 290, packet rate control via tab 292, and other custom commands via tab 294. Based on user selections and input via Network Settings Wizard 280 of user interface 200, network configurator 74 of FIG. 3 is operative to adjust network parameters of nodes 16 of communication network 18 of FIG. 1, as described herein. In one embodiment, the modification of network parameters is used for network testing and performance analysis and/or for adjusting system power consumption. In the illustrated embodiment, network configurator 74 artificially shapes network traffic and behavior based on user input to Network Settings Wizard 280, thereby modeling various types of network topologies. For example, different communication networks have different latencies, bandwidths, performance, etc., depending on network configuration. As such, network configurator 74 allows networks with different configurations to be implemented during workload execution so that the performance of the different networks with the selected workload can be tested and analyzed. In one embodiment, the testing and analysis are performed in conjunction with batch processor 80 initiating workload executions with differing network configurations. For example, an optimal network topology may be determined for execution of a particular workload with the selected hardware (node 16) configuration. In one embodiment, network configurator 74 is operative to apply network settings to certain groups or subsets of nodes 16 of node cluster 14. [00109] Referring still to FIG.
11, selectable data associated with implementing a communication network delay is illustrated in tab 282. Network configurator 74 selects and modifies a network delay based on the user selection of inputs (illustratively boxes) 298-301 and fields 302, 304, 306, 308, 310, 312. A communication delay for each packet communication (i.e., packets carrying data or information between nodes 16 or between a node 16 and control server 12) over communication network 18 (FIG. 1) is implemented based on the selection of input 298 and the delay value entered via field 302. A variation of the specified communication delay is implemented based on selection of input 299 and a variation value entered via field 304 (illustratively a variation of plus or minus 10 milliseconds). Fields 310, 312 include drop-down menus for selecting a unit of time (e.g., milliseconds, microseconds, etc.) associated with the respective value of fields 302, 304. A correlation between specified communication delays is implemented based on selection of input 300 and a correlation value entered via field 306, illustratively a percentage correlation value. A distribution of the specified communication delay is implemented based on selection of drop-down menu 301. The distribution includes a normal distribution or other suitable distribution type. [00110] Referring to FIG. 12, selectable data associated with implementing a network packet loss rate is illustrated in tab 284. Network configurator 74 selects and modifies a packet loss rate (i.e., the rate at which packets are artificially lost) based on the user selection of inputs (illustratively boxes) 313, 314 and fields 315, 316. A packet loss rate is implemented for packet communication over network 18 based on selection of input 313 and a rate value entered via field 315. The packet loss rate is illustratively entered as a percentage, e.g., 0.1% results in one packet lost every 1000 packets sent by the node 16. A correlation for the packet loss rate is implemented based on selection of input 314 and a correlation value entered via field 316 (illustratively a percentage value). [00111] Referring to FIG. 13, selectable data associated with implementing a network packet duplication rate is illustrated in tab 286. Network configurator 74 selects and modifies a packet duplication rate (i.e., the rate at which packets are artificially duplicated) based on the user selection of inputs (illustratively boxes) 317, 318 and fields 319, 320. A packet duplication rate is implemented for packet communication over network 18 based on selection of input 317 and a rate value entered via field 319. The packet duplication rate is illustratively entered as a percentage, e.g., 0.1% results in one packet duplicated for every 1000 packets sent by the node 16. A correlation for the packet duplication rate is implemented based on selection of input 318 and a correlation value entered via field 320 (illustratively a percentage value). [00112] Referring to FIG. 14, selectable data associated with implementing a network packet corruption rate is illustrated in tab 288. Network configurator 74 selects and modifies a packet corruption rate (i.e., the rate at which packets are artificially corrupted) based on the user selection of input (illustratively box) 321 and field 322. A packet corruption rate is implemented for packet communication over network 18 based on selection of input 321 and a rate value entered via field 322.
The packet corruption rate is illustratively entered as a percentage, e.g., 0.1% results in one packet corrupted for every 1000 packets sent by the node 16. In one embodiment, a correlation for the packet corruption rate may also be selected and implemented. [00113] Referring to FIG. 15, selectable data associated with implementing a network packet reordering rate is illustrated in tab 290. Network configurator 74 selects and modifies a packet reordering rate (i.e., the rate at which packets are placed out of order during packet communication) based on the user selection of inputs (illustratively boxes) 323, 324 and fields 325, 326. A packet reordering rate is implemented for packet communication over network 18 based on selection of input 323 and a rate value entered via field 325. The packet reordering rate is illustratively entered as a percentage, e.g., 0.1% results in one packet reordered for every 1000 packets sent by the node 16. A correlation for the packet reordering rate is implemented based on selection of input 324 and a correlation value entered via field 326 (illustratively a percentage value). [00114] Referring to FIG. 16, selectable data associated with implementing a network communication rate is illustrated in tab 292. Network configurator 74 selects and modifies a packet communication rate (i.e., the rate at which packets are communicated between nodes 16) based on the user selection of inputs (illustratively boxes) 327-330 and fields 331-338. A packet communication rate is implemented for communication network 18 based on selection of input 327 and a rate value entered via field 331, and a ceiling (maximum) for the packet communication rate is implemented based on selection of input 328 and a ceiling value entered via field 332. A packet burst is implemented based on selection of input 329 and a packet burst value entered via field 333, and a ceiling (maximum) for the packet burst is implemented based on selection of input 330 and a ceiling value entered via field 334. Fields 335 and 336 provide drop-down menus for selecting rate units (illustratively kilobytes per second), and fields 337 and 338 provide drop-down menus for selecting burst units (illustratively in bytes). [00115] Referring to FIG. 17, selectable data associated with implementing custom network commands is illustrated in tab 294. Network configurator 74 provides custom commands for modifying network parameters associated with one or more nodes 16 on communication network 18 based on the user selection of input (illustratively box) 340 and custom commands entered via field 342. [00116] Referring to FIG. 18, the Workload Container Configuration module 208 is selected. Based on user input to module 208 (e.g., the user selection of selectable workload container data, such as inputs 352, 360, 362), workload container configurator 76 is operative to select and to configure a workload container module for operation on the node cluster 14. Module 208 includes multiple selectable tabs 350 corresponding to various available workload container modules. Each available workload container module includes a selectable code module that when executed is operative to initiate and control execution of the workload on node cluster 14. The workload container modules available via module 208 in the illustrative embodiment include several third party, commercially available workload container modules such as Apache Hadoop, Memcached, Cassandra, and Darwin Streaming.
Cassandra is an open-source distributed database management system that provides a key-value store for basic database operations. Darwin Streaming is an open-source implementation of a media streaming application, such as QuickTime provided by Apple, Inc., that is utilized to stream a variety of movie media types. While open-source workload container software is illustratively provided via module 208, closed-source workload container software may also be provided for selection. For example, license information associated with the closed-source workload container software may be input or purchased via user interface 200. One or more custom workload container modules may also be loaded and selected via the "Custom" tab of module 208. Other workload container modules may be provided. A "Library" tab is also provided that gives access to a library of additional workload container modules available for selection, such as previously-used custom workload container modules, for example. [00117] Under the "Hadoop" tab of FIG. 18, workload container configurator 76 selects the Apache Hadoop workload container module based on user selection of input 352. The version and build variant of Apache Hadoop are selectable via drop-down menus 360, 362, respectively, under the General tab 354. Operational parameters of the selected workload container module are adjustable by workload container configurator 76 based on user input provided via the Extended tab 356 and the Custom tab 358. The operational parameters available for adjustment illustratively depend on the selected workload container module. For example, with Apache Hadoop selected as the workload container module, Extended tab 356 illustrated in FIG. 19 displays a table 366 of exemplary selectable operational parameters of the Apache Hadoop workload container module that are configurable by workload container configurator 76. Workload container configurator 76 selects the operational parameters for configuration based on user selection of corresponding selection boxes 367. Table 366 provides several fields for workload container configurator 76 to receive configuration data, including an override field 374, a master value field 378, and a slave value field 380. Based on user selections in the override field 374, the nodes 16 are selected whose workload containers are to be adjusted with the corresponding operational parameter. Nodes 16 are selected in the override field 374 based on user selections in the corresponding drop-down menus or based on user selections of inputs 384. Illustratively, the selection of "never" results in the default configuration of the corresponding operational parameter being implemented at all nodes 16, the selection of "master" or "slaves" results in the implementation of the parameter adjustment at the master node 16 or at the slave nodes 16, respectively, and the selection of "always" results in the implementation of the parameter adjustment at all nodes 16 of the node cluster 14. Alternatively, individual nodes 16 of node cluster 14 may be selected for implementation of the adjusted operational parameter. [00118] In the master value field 378 and slave value field 380, a constraint, data value, or other user selection provides the adjustment value for the corresponding operational parameter of the workload container in the respective master node 16 or slave nodes 16.
A property name field 376 illustratively lists the name of the associated operational parameter as referenced in the code module of the selected workload container module. A description field 382 illustratively displays a general description to the user of the associated operational parameter. Inputs 386 allow a user to select or to deselect all operational parameters listed in table 366. Input 388 allows a user to reverse or "undo" a previous selection or parameter change, and input 390 allows a user to reset the values provided in fields 374, 378, and 380 to the default settings. [00119] Exemplary operational parameters adjustable with workload container configurator 76 based on user selections in table 366 include operational parameters associated with read/write (I/O) operations of the node 16, sorting operations, the configuration of the network socket operation (e.g., TCP socket connection) of the node 16, and the file system 55 (e.g., HDFS for Apache Hadoop) of the workload container. Operational parameters associated with read/write operations include, for example, a memory buffer size of the node 16 and a size of a data block transferred during the read/write operation. The memory buffer size, illustratively shown in row 368 of table 366, corresponds to how much data is buffered (temporarily stored in cache) during read/write (I/O) operations of the node 16. In the illustrated embodiment, the memory buffer size is a multiple of a memory page or data block size of the node hardware. A memory page or data block, as described herein, refers to a fixed-length block of virtual memory of a node 16 that is the smallest unit of data used for memory allocation and memory transfer. In row 368 of FIG. 19, the master and slave node values are illustratively set to 4096 bytes, but these values may be adjusted to 8192 bytes or another suitable multiple of the data block size of the node processor 40 (FIG. 2). Similarly, the size of the data block transferred during read/write operations is also adjustable based on user input to table 366. [00120] Operational parameters associated with sorting operations include, for example, the number of data streams to merge simultaneously when sorting data. Operational parameters associated with the file system (e.g., file system 55 of FIG. 2) of the workload container include the number of file system records or files stored in memory 42 of each node 16 (see row 370, for example) and the number of processing threads of each node 16 allocated for processing requests for the file system 55. In the exemplary row 370 of table 366, the number of records stored in memory 42 for file system 55 of FIG. 2 is 100000 records for both the master and slave nodes 16, although other suitable record limits may be entered. In one embodiment, limiting the number of file system records serves to limit the replication of files by file system 55. [00121] Operational parameters associated with the configuration and operation of the network socket, such as the TCP network socket described herein, involve the interaction of the workload container with the network socket. For example, the communication delay or latency of the network socket and the number of packets sent over network 18 (FIG. 1) may be adjusted. For example, row 372 of table 366 allows for the activation/deactivation via fields 378, 380 of an algorithm, illustratively "Nagle's algorithm" known in the art, to adjust the latency and number of data packets sent via the TCP socket connection of the node 16.
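The override semantics of table 366 ("never", "master", "slaves", "always") lend themselves to a simple per-role resolution step before a node's configuration is written out. The following is a hedged sketch of such a step; the helper names are invented, and io.file.buffer.size is a standard Hadoop property used here only as an illustration of a row such as row 368.

```python
# Hypothetical sketch: resolve the per-role value of an operational
# parameter from table 366 and emit it as a Hadoop-style XML property.
# Override semantics follow the description of field 374 above.

def resolve(override: str, role: str, master_val, slave_val, default):
    """Return the value a node of the given role should receive."""
    if override == "never":
        return default
    if override == "always":
        return master_val if role == "master" else slave_val
    if override == "master":
        return master_val if role == "master" else default
    if override == "slaves":
        return slave_val if role == "slave" else default
    raise ValueError(f"unknown override: {override}")

def to_property_xml(name: str, value) -> str:
    return (f"<property><name>{name}</name>"
            f"<value>{value}</value></property>")

# Example: apply an 8192-byte buffer to slave nodes only (cf. row 368).
val = resolve("slaves", "slave", 8192, 8192, 4096)
print(to_property_xml("io.file.buffer.size", val))
```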
Other suitable operational parameters associated with the operation of the network socket may be adjusted. [00122] Another exemplary operational parameter adjustable by workload container configurator 76 includes the number of software tasks executed concurrently by processor 40 of node 16. For example, a user may specify a number of tasks (e.g., Java tasks) to run concurrently during workload execution via input to table 366, and workload container configurator 76 adjusts the number of tasks accordingly. Other suitable operational parameters associated with the workload container may be adjustable. [00123] Referring to Custom tab 358 of FIG. 20, additional configuration adjustments may be implemented for the selected workload container module, illustratively the Hadoop workload container module, to allow for further customization of the selected workload container module. Workload container configurator 76 further adjusts the configuration of the selected workload container module based on command strings input into fields 392, 394, and 396 and user selection of corresponding selectable boxes 398. In the illustrated embodiment, each of these fields 392, 394, 396 specifies configurations that apply, respectively, to the Hadoop master node, the Hadoop file system, and the parameters related to map-reduce execution, such as the number of tasks in a task tracker, the local directory where temporary data is placed, and other suitable parameters. [00124] Operational parameters associated with the other available workload container modules (e.g., Memcached, Cassandra, Darwin Streaming, etc.) are adjusted similarly as described with the Hadoop workload container module. Based on the workload container module selected based on input 352 and the configuration information provided via tabs 354, 356, 358 of module 208, workload container configurator 76 generates a workload container image file 94 (FIG. 3) for loading onto nodes 16 of node cluster 14. In one embodiment, a workload container image file 94 is saved in memory 90 of control server 12 or in memory 42 of nodes 16, and workload container configurator 76 updates the image file 94 with the configuration information. In one embodiment, multiple configurations of the workload container module may be saved and then run in a sequence, such as for exploring the impact of workload container configuration changes on workload and system performance, for example. [00125] Referring to FIG. 21, workload container configurator 76 selects a user-defined custom workload container module for execution on nodes 16 based on user selection of inputs 353, 401 of the "Custom" tab of module 208. In the illustrated embodiment, a custom workload container module includes a workload container module that is provided by a user and that may not be commercially available, as described herein. Workload container configurator 76 illustratively loads a compressed zip file that includes a workload container code module. In particular, the zip file includes a configuration file or script that contains user-defined parameters for coordinating the execution of a workload on node cluster 14. As illustrated in FIG. 21, table 400 provides a list of loaded custom workload container modules that are stored at control server 12 (or at computer 20) and are available for user selection via selectable input(s) 401.
Additional custom workload container modules are uploaded or downloaded and displayed in table 400 based on user selection of inputs 402, 404, respectively, and a custom workload container module is deleted from table 400 based on user selection of input 403. A user may enter the zip folder path and/or configuration script path via respective fields 406, 408. In one embodiment, the custom workload container module is stored remote from cloud computing system 10, such as on memory 34 of computer 20 (FIG. 1), and is uploaded onto memory 90 (FIG. 3) of control server 12 based on user selection of input 402. [00126] Referring to FIG. 22, the Workload Configuration module 210 is selected. Based on user input to module 210, workload configurator 78 (FIG. 3) is operative to select and configure a workload for execution with the selected workload container module by node cluster 14. Workload configurator 78 is also operative to generate a synthetic test workload based on user-defined workload parameters that is executed on nodes 16 with the selected workload container module. Module 210 includes several selectable tabs including a workload tab 410, a synthetic kernel tab 412, an MC-Blaster tab 414, a settings library tab 416, and a CloudSuite tab 417. Under the workload tab 410 of FIG. 22, the workload to be executed is selected by workload configurator 78 based on user selection of selectable workload data, illustratively including selectable inputs 418, 424, and 428. The available workloads illustratively include a workload adapted for execution on a Hadoop workload container (inputs 418), a workload adapted for execution on a Memcached workload container (input 424), or any another suitable workload configured for a selected workload container, such as a custom workload (input 428), for example. [00127] Referring to FIG. 22, a Hadoop workload is selected from an actual workload and a synthetic test workload based on user selection of one of corresponding inputs 418. The actual workload, which includes a pre-defined code module adapted for the map-reduce functionality of the Hadoop workload container, is loaded onto control server 12 based on an identification of the storage location of the actual workload in field 422. In one embodiment, the actual workload is stored on a memory remote from cloud computing system 10, such as memory 34 of FIG. 1, and is uploaded to memory 90 of control server 12 via field 422. In another embodiment, the actual workload is a sample Hadoop workload that is provided with the Hadoop workload container module or is another workload pre-loaded onto control server 12. A synthetic test workload is also selectable based on user selection of corresponding input 418 for execution on a Hadoop workload container. The number of input records or instructions to be generated with the synthetic test workload and to be processed in the "map" phase of the synthetic test workload may be entered via field 420 and provided as input to synthesizer 79 of workload configurator 78 (FIG. 3), as described herein. Other input parameters for the generation of the synthetic test workload by synthesizer 79 are configured via the synthetic kernel tab 412, as described herein. While the synthetic test workload is illustratively adapted for execution with a Hadoop workload container, synthetic test workloads may also be selected and generated for other available workload containers. 
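The number of input records requested via field 420 could be realized by a simple record generator staged ahead of the "map" phase. The sketch below is illustrative only; the record layout and function name are assumptions, not the format actually produced by synthesizer 79.

```python
# Hedged sketch: generate the number of input records requested via
# field 420 for the "map" phase of a synthetic Hadoop test workload.
# The (key, random payload) record layout is an assumption.
import random
import string

def generate_input_records(count: int, payload_len: int = 64):
    """Yield (key, value) records for the synthetic map phase."""
    for i in range(count):
        payload = "".join(
            random.choices(string.ascii_lowercase, k=payload_len))
        yield (f"record-{i:08d}", payload)

# Example: write 1000 synthetic records to a local staging file.
with open("synthetic_input.txt", "w") as f:
    for key, value in generate_input_records(1000):
        f.write(f"{key}\t{value}\n")
```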
[00128] A custom script is loaded as a pre-defined, actual workload for execution with a selected workload container module via field 430 and upon user selection of input 428. The custom script comprises user-provided code that includes one or more execution commands that are executed with the selected workload container module by node cluster 14. In the illustrated embodiment, the custom script is used as the workload executed during system testing with batch processor 80, wherein various network, workload container, and/or other system configuration changes are made during sequential workload executions to monitor the effects on system performance, as described herein. [00129] A pre-defined workload may also be loaded for execution with a Memcached workload container based on user selection of input 424. In one embodiment, the Memcached workload includes an in-memory acceleration structure that stores key-value pairs via "set" commands and retrieves key-value pairs via "get" commands. A key-value pair is a set of two linked data items including a key, which is an identifier for an item of data, and a value, which is either the data identified with the key or a pointer to the location of that data. The Memcached workload illustratively operates with a selectable MC-Blaster tool whose run time is selected based on an input value to field 426. MC-Blaster is a tool to stimulate the system under test by generating requests to read/write records from Memcached on a number of network (e.g., TCP) socket connections. Each request specifies a key and a value. The MC-Blaster tool is configured via MC-Blaster tab 414 of FIG. 24. Referring to FIG. 24, input to field 460 specifies the number of TCP connections to utilize per processing thread, input to field 462 specifies the number of keys to operate on, and inputs to fields 464 and 466 specify the number of 'get' and 'set' commands requested to be sent per second, respectively. A user-specified (custom) buffer size may be implemented by workload configurator 78 based on selection of corresponding input 469 and a value entered into field 468, and a TCP request may be delayed based on selection of "on" input 470. A number of processing threads to start may be customized by workload configurator 78 based on user selection of corresponding input 473 and a value entered in field 472. The default number of processing threads is equal to the number of active processing cores of the node 16. The number of UDP replay ports is selected based on input to field 474, and the size (in bytes) of the value stored (or returned) resulting from workload execution is selected based on input to field 476. [00130] Referring to FIG. 23, a synthetic test workload is generated by synthesizer 79 based on user input provided via synthetic kernel tab 412. In particular, synthesizer 79 of workload configurator 78 (FIG. 3) generates a synthetic test workload based on user-defined parameters provided in a code module, illustratively a trace file (e.g., configuration file), that is loaded onto memory 90 of control server 12. The trace file includes data that describe desired computational characteristics of the synthetic test workload, as described herein. Upon user selection of the "synthesize" input 434 of FIG. 23, the location of the stored trace file may be identified based on user input to field 436 or field 438. Field 436 illustratively identifies a hard disk location (e.g., memory 34 of computer 20 of FIG.
1) containing the trace file, and field 438 illustratively identifies the web address or URL for retrieving the trace file. Table 440 displays the trace files and previously generated synthetic test workloads that are loaded and available for selection. A trace file is loaded and displayed in table 440 with user selection of input 442, deleted from table 440 with user selection of input 444, and downloaded (i.e., from the URL identified in field 438) based on user selection of input 446. The trace file is illustratively in a JSON file format, although other suitable file types may be provided. A maximum number of instructions to be generated in the synthetic test workload is identified in field 448, and a maximum number of iterations of the generated synthetic test workload is identified in field 450. Alternatively, a previously generated synthetic test workload is loaded by workload configurator 78 based on user selection of Library input 432, the identification of the stored location (local hard drive, website, etc.) of the synthetic test workload with field 436 or 438, and the user selection of the input 441 corresponding to the desired pre-generated synthetic test workload displayed in table 440. The maximum number of instructions and iterations of the previously generated synthetic test workload are adjustable with fields 448, 450. [00131] The trace file includes a modifiable data structure, illustratively a table having modifiable fields, that identifies the workload characteristics and user-defined parameters used as input by synthesizer 79 for generating the synthetic test workload. The table is displayed on a user interface, such as with user interface 200 or a user interface of user computer 20, such that the fields of the table may be modified based on user input and selections to the table. See, for example, table 150 of FIG. 32 described herein. The trace file further identifies at least a portion of a target instruction set architecture (ISA) used as input by synthesizer 79. The trace file further identifies other characteristics associated with instructions of the synthetic workload, including: inter-instruction dependencies (e.g., a first instruction depends on the completion of a second instruction before executing the first instruction), memory register allocation constraints (e.g., constrain an instruction to take a value from a particular register), and architectural execution constraints (e.g., a limited number of logic units being available for executing a particular type of instruction). As such, configurator 22 is operative to predict how long workload instructions should take to execute based on the execution characteristics specified in the trace file.
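The exact schema of the JSON trace file is not set forth here; the following sketch shows what such a file might contain, using values consistent with the example described below (a 1000-instruction loop executed 100 times with a 30/10/40 integer/branch/floating-point mix). All key names are invented for illustration, while the parameter kinds correspond to those enumerated in the next paragraph.

```python
# Hedged sketch of a JSON trace file for synthesizer 79. The schema
# (key names, units) is assumed; only the kinds of parameters carried
# are taken from the description: instruction counts, type mix, branch
# behavior, dependencies, basic block sizes, and latencies.
import json

trace = {
    "total_instructions": 1000,
    "iterations": 100,
    "instruction_mix": {              # statistical distribution by type
        "integer": 0.30,
        "floating_point": 0.40,
        "branch": 0.10,
        "other": 0.20,
    },
    "branch_taken_probability": 0.5,  # likelihood a branch is taken
    "avg_basic_block_size": 12,       # instructions per basic block
    "dependency_distance_mean": 4,    # spread of data dependencies
    "latency_cycles": {"IntShortLatencyArith": 1, "FpArith": 4},
}

with open("trace.json", "w") as f:
    json.dump(trace, f, indent=2)
```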
[00132] Exemplary user-defined workload parameters set forth in the trace file include the following: the total number of instructions to be generated; the types of instructions to be generated including, for example, a floating point instruction, an integer instruction, and a branch instruction; the behavior (e.g., execution flow) of instruction execution, such as, for example, the probabilities of the execution flow branching off (i.e., whether branches are likely to be taken during instruction execution or whether execution will continue along the execution flow path and not jump to a branch); the distribution of data dependencies among instructions; the average size of basic blocks that are executed and/or transferred; and the latencies associated with instruction execution (i.e., length of time required to execute an instruction or instruction type, such as how many cycles a particular instruction or instruction type requires for execution). In one embodiment, the user-defined workload parameters specify which specific instructions to use as integer instructions or floating point instructions. In one embodiment, the user-defined workload parameters specify the average number and statistical distribution of each instruction type (e.g., integer, floating point, branch). In one embodiment, each instruction includes one or more input and output arguments. [00133] In the illustrated embodiment, the workload parameters and instruction set architecture data set forth in the trace file are provided in a table-driven, retargetable manner. Based on changes to the contents of the table, configurator 22 is operative to target different microarchitectures and systems as well as different instruction set architectures of nodes 16. An exemplary table 150 is illustrated in FIG. 32 that includes data representing a set of user-defined workload parameters to be input to code synthesizer 79. Referring to FIG. 32, table 150 includes an instruction portion 152 that describes a collection of instructions for the generated synthetic test workload and an addressing mode portion 154 that describes addressing modes to be used with the synthetic test workload. Instructions and addressing modes in addition to those illustrated may be provided in the table 150. Instruction portion 152 of table 150 includes several modifiable fields 158, 160, 162, 164. Field 158 includes data identifying the instruction to be generated, field 160 includes data identifying the computation type associated with the instruction, and field 162 includes data identifying a mnemonic assigned to assist code generation by the synthesizer 79. Field 164 includes data identifying the different addressing modes (i.e., the way in which the instructions' arguments are obtained from memory). [00134] In the illustrated embodiment, the input command 156 ("gen_ops.initialize()") indicates the start of instruction portion 152 of table 150, which sets forth the instructions to be generated. Line 166 illustrates one example of user-defined workload parameters for generating one or more instructions. Referring to line 166, "D(IntShortLatencyArith)" entered into field 158 specifies an integer arithmetic instruction with short latency, and "op add" and "addq" entered into fields 160, 162 indicate the instruction is an addition or "add" instruction. In one embodiment, short latency indicates that the processor (e.g., node processor 40) takes one cycle or a few cycles to execute the instruction.
The "addr regOrw reglr" of field 164 indicates that a first, register 0 argument is "rw" (read and write) and the second, register 1 argument is "r" (read). Similarly, the "addr regOrw imm" of field 164 describes another variant of the instruction in which the first argument (register 0 argument) is "rw" (read and write), and the second argument is an "imm" ("immediate") value (e.g., a numeral like 123). [00135] Referring to the addressing mode portion 154 of table 150, exemplary line 170 includes "addr regOw reglr" of field 172 that identifies a class of instructions that operate only on registers. The first register argument (i.e., register 0) is a destination "w" (write) and the second register argument (i.e., register 1) is an input "r" (read). The entries in fields 174 and 176 identify the arguments and indicate "src" for a read argument, "dst" for a write argument, or "rmw" for a read-modify-write argument. In x86 architecture, for example, the first register argument may be "rmw" (the argument is read, operated upon, and then written with the result) or another suitable argument. Additional or different user-defined workload parameters may be specified via table 150. [00136] In one embodiment, the table 150 (e.g., trace file) is generated offline, such as with user computer 20, for example, and loaded onto configurator 22. In one embodiment, the table 150 is stored on or loaded onto control server 12 and is displayed with user interface 200 to allow a user to modify the user-defined workload parameters via selectable and modifiable data displayed by the user interface 200. [00137] Referring to FIG. 33, an exemplary process flow for generating and executing a synthetic workload is illustrated. Code synthesizer 79 is illustrated that generates the synthetic test workload and outputs a configuration file 28 and a synthetic workload image 96 to each node 16, and synthetic workload engine 58 of each node 16 executes the synthetic test workloads, as described herein. Blocks 60, 62, 64 of FIG. 32 provide an abstract representation of the contents provided in the trace file that is input into synthesizer 79. Block 60 is a general task graph that represents the execution flow of an instruction set. Block 62 represents the task functions that are executed including input, output, begin, and end instructions. Block 64 represents workload behavior parameters including the data block size, execution duration and latencies, message propagation, and other user-defined parameters described herein. [00138] Synthesizer 79 illustratively includes a code generator 66 and a code emitter 68, each comprising the one or more processors 22 of control server 12 executing software or firmware code stored on memory (e.g., memory 90) accessible by processor(s) 22 to perform the functions described herein. Code generator 66 operates on the data structure (e.g., table) of the trace file describing the user-defined workload parameters and target instruction set architecture and generates an abstracted synthetic code that has the specified execution properties. Code emitter 68 creates an executable synthetic code (i.e., the synthetic test workload) from the abstracted synthetic code in a format suitable for the execution environment (e.g., assembly code to be linked in an execution harness, binary code, or position-independent code to be linked with the simulation infrastructure, etc.). In one embodiment, the desired format of the executable code is hard-coded in synthesizer 79. 
In another embodiment, the desired format of the executable code is selectable via selectable data of user interface 200. In one embodiment, the executable code is compact in size such that the code may be executed via cycle-accurate simulators that are not adapted to execute full-size workloads. Other suitable configurations of synthesizer 79 may be provided. In one embodiment, synthesizer 79 has access to the computer architecture data of the nodes 16 of node cluster 14. As such, synthesizer 79 generates a synthetic test workload targeting a specific microarchitecture and instruction set architecture based on the known computer architecture data of the node cluster 14. The synthetic test workload may thus be targeted to exercise a desired set of architectural characteristics, for example. [00139] The synthetic test workload generated by synthesizer 79 includes a code module executable with the selected workload container module on nodes 16. When a synthetic test workload is generated and selected for execution, the synthetic test workload is stored as workload image file 96 of FIG. 3 in memory 90 of control server 12. Configurator 22 then loads the workload image file 96 onto each node 16 for execution, or nodes 16 retrieve the workload image file 96. In one embodiment, with the Hadoop workload container module selected, the synthetic test workload is run as the "map" phase of the map-reduce operation. [00140] In the illustrated embodiment, the synthetic test workload is executed to exercise the hardware of computing system 10 for testing and performance analysis, as described herein. Synthesizer 79 receives desired workload behavior as input via the trace file and produces a synthetic test workload that behaves according to the input. In particular, statistical properties of the desired workload behavior are the input to synthesizer 79, such as the number of instructions to be executed and a statistical distribution of the type of instructions, as described herein. For example, a loaded trace file may include user-defined parameters that request a program loop that contains 1000 instructions, and the trace file may specify that 30% of the instructions are integer instructions, 10% are branch instructions having a particular branch structure, 40% are floating-point instructions, etc. The trace file (or field 450 of FIG. 23) may specify that the loop is to be executed 100 times. Synthesizer 79 then produces the program loop containing the requested parameters as the synthetic test workload. [00141] In one embodiment, the generated synthetic test workload serves to emulate the behavior of an actual workload, such as a specific proprietary code or complex code of a known application or program. For example, some proprietary code contains instructions that are not accessible or available to a user. Similarly, some complex code contains instructions that are complicated and numerous. In some instances, creating a workload based on such proprietary or complex code may be undesirable or difficult. As such, rather than creating a workload code module that contains all the instructions of the proprietary or complex code, monitoring tools (offline from configurator 22, for example) are used to monitor how the proprietary or complex code exercises server hardware (nodes 16 or other server hardware) during execution of the proprietary or complex code.
The statistical data gathered by the monitoring tools during the execution of the proprietary code are used to identify parameters that represent the desired execution characteristics of the proprietary or complex code. The collection of parameters is provided in the trace file. The trace file is then loaded as the input to synthesizer 79, and synthesizer 79 generates synthetic code that behaves similarly to the proprietary code based on the statistical input and other desired parameters. As such, the complex or proprietary instructions of a particular code are not required to model behavior of that code on cloud computing system 10. [00142] In one embodiment, synthesizer 79 operates in conjunction with batch processor 80 to execute multiple synthetic test workloads generated by synthesizer 79 from varying trace files. In one embodiment, synthetic test workloads are generated based on modified user-defined workload parameters of a table (e.g., table 150 of FIG. 32) that test different target processors, both CPUs and GPUs, of the nodes 16. [00143] FIG. 34 illustrates a flow diagram 600 of an exemplary operation performed by configurator 22 of control server 12 of FIGS. 1 and 3 for configuring cloud computing system 10 with a selected workload. Reference is made to FIGS. 1 and 3 throughout the description of FIG. 34. In the illustrated embodiment, configurator 22 configures node cluster 14 of FIG. 1 according to the flow diagram 600 of FIG. 34 based on a plurality of user selections received via user interface 200. At block 602, workload configurator 78 selects, based on a user selection (e.g., selections of inputs 418) received via user interface 200, a workload for execution on cluster of nodes 14 of the cloud computing system 10. The workload is selected at block 602 from a plurality of available workloads including an actual workload and a synthetic test workload. The actual workload comprises a code module stored in a memory (e.g., memory 90 or memory 34) accessible by the control server 12, as described herein. At block 604, configurator 22 configures cluster of nodes 14 of cloud computing system 10 to execute the selected workload such that processing of the selected workload is distributed across cluster of nodes 14, as described herein. [00144] In one embodiment, configurator 22 provides the user interface 200 comprising selectable actual workload data and selectable synthetic test workload data, and the selection of the workload is based on the user selection of at least one of the selectable actual workload data and the selectable synthetic test workload data. Exemplary selectable actual workload data includes selectable input 418 of FIG. 22 corresponding to the "actual workload" and selectable inputs 424, 428 of FIG. 22, and exemplary selectable synthetic test workload data includes selectable input 418 of FIG. 22 corresponding to the "synthetic workload" and selectable inputs 434, 436, 441 of FIG. 23. In one embodiment, workload configurator 78 selects at least one of a pre-generated synthetic test workload and a set of user-defined workload parameters based on the user selection of the selectable synthetic test workload data. The pre-generated synthetic test workload comprises a code module (e.g., loaded via library input 432) stored in a memory (e.g., memory 90 or memory 34) accessible by the control server 12.
The synthesizer 79 is operative to generate a synthetic test workload based on the selection of the set of user-defined workload parameters, illustratively provided via the trace file described herein. The user-defined workload parameters of the trace file identify execution characteristics of the synthetic test workload, as described herein. [00145] As described herein, exemplary user-defined workload parameters include at least one of: a number of instructions of the synthetic test workload, a type of instruction of the synthetic test workload, a latency associated with an execution of at least one instruction of the synthetic test workload, and a maximum number of execution iterations of the synthetic test workload; the type of instruction includes at least one of an integer instruction, a floating point instruction, and a branch instruction. In one embodiment, an execution of the synthetic test workload by the cluster of nodes 14 is operative to simulate execution characteristics associated with an execution of an actual workload by the cluster of nodes 14, such as a complex workload or proprietary workload, as described herein. [00146] FIG. 35 illustrates a flow diagram 610 of an exemplary operation performed by configurator 22 of control server 12 of FIGS. 1 and 3 for configuring cloud computing system 10 with a synthetic test workload. Reference is made to FIGS. 1 and 3 throughout the description of FIG. 35. In the illustrated embodiment, configurator 22 configures node cluster 14 of FIG. 1 according to the flow diagram 610 of FIG. 35 based on a plurality of user selections received via user interface 200. At block 612, code synthesizer 79 of workload configurator 78 generates a synthetic test workload for execution on cluster of nodes 14 based on a set of user-defined workload parameters provided via user interface 200. The set of user-defined workload parameters (e.g., provided with the trace file) identifies execution characteristics of the synthetic test workload, as described herein. At block 614, configurator 22 configures cluster of nodes 14 with the synthetic test workload to execute the synthetic test workload such that processing of the synthetic test workload is distributed across the cluster of nodes, as described herein. [00147] In one embodiment, the generation of the synthetic test workload is further based on computer architecture data that identifies at least one of an instruction set architecture and a microarchitecture associated with cluster of nodes 14. As described herein, in one embodiment configurator 22 stores the computer architecture data in memory (e.g., memory 90) such that configurator 22 can identify the instruction set architecture and microarchitecture of each node 16 of cluster of nodes 14. As such, configurator 22 generates the synthetic test workload such that it is configured for execution with the specific computer architecture of the nodes 16 of node cluster 14 based on the computer architecture data stored in memory. In one embodiment, code synthesizer 79 generates a plurality of synthetic test workloads each based on a different computer architecture associated with nodes 16 of cluster of nodes 14, and each computer architecture includes at least one of an instruction set architecture and a microarchitecture.
In one embodiment, configurator 22 provides the user interface 200 comprising selectable synthetic test workload data, and workload configurator 78 selects the set of user-defined workload parameters for generation of the synthetic test workload based on user selection of the selectable synthetic test workload data. Exemplary selectable synthetic test workload data includes selectable input 418 of FIG. 22 corresponding to the "synthetic workload" and selectable inputs 434, 436, 441 of FIG. 23. In one embodiment, the set of user-defined workload parameters are identified in a data structure (e.g., table 150 of FIG. 32) displayed on a user interface (e.g., user interface 200 or a user interface displayed on display 21 of computer 20), and the data structure includes a plurality of modifiable input fields each identifying at least one user-defined workload parameter, as described herein with respect to table 150 of FIG. 32. In one embodiment, configurator 22 selects a modified hardware configuration of at least one node 16 of node cluster 14 based on a user selection received via user interface 200 (e.g., selection of boot-time parameters with inputs 269-276). In this embodiment, configurator 22 configures cluster of nodes 14 with the synthetic test workload to execute the synthetic test workload on the cluster of nodes 14 having the modified hardware configuration, and the modified hardware configuration results in at least one of a reduced computing capacity and a reduced memory capacity of the at least one node 16, as described herein. [00148] Referring again to FIG. 23, a previously saved workload may be loaded from a local memory (e.g., memory 90 of FIG. 3) via Settings Library tab 416. The workload loaded via the Settings Library tab 416 may include an actual workload, a synthetic test workload, a custom script, or any other workload suitable for execution with a selected workload container module. The loaded workload configuration may be modified based on user inputs to module 210 of user interface 200. A current workload configuration may also be saved to memory 90 via the Settings Library tab 416. [00149] In the illustrated embodiment, a CloudSuite workload collection may also be loaded and configured via tab 417. CloudSuite is a collection of typical cloud workloads that are utilized to characterize cloud systems. [00150] Referring to FIG. 25, the Batch Processing module 212 is selected. Based on user input to module 212, batch processor 80 (FIG. 3) is operative to initiate batch processing of multiple workloads. Batch processor 80 is also operative to initiate execution of one or more workloads having a plurality of different configurations, such as different network configurations, different workload container configurations, different synthetic workload configurations, and/or different node configurations (e.g., boot-time configurations, etc.) described herein. Based on user input, batch processor 80 initiates the execution of each workload and/or configuration in a sequence on node cluster 14 such that manual intervention is not required for all workloads to be run to completion. Further, batch processor 80 may configure one or more workloads to run multiple times based on user settings received via module 212 of user interface 200. Batch processor 80 is operative to execute actual workloads and/or synthetic test workloads as a batch.
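As noted in the following paragraph, a batch job is illustratively stored in a JSON file format. A minimal sketch of what such a batch descriptor might contain is shown below; every field name is an assumption for illustration only, not the format actually used by batch processor 80.

```python
# Hedged sketch of a JSON batch-job descriptor for batch processor 80:
# the same workload executed under two differing network configurations,
# repeated a user-specified number of times. All field names are assumed.
import json

batch_job = {
    "repeat_count": 3,  # cf. repeat count field 480
    "jobs": [
        {
            "workload": "hadoop-sort",
            "workload_container": "hadoop",
            "network_profile": {"delay_ms": 10, "loss_pct": 0.1},
            "node_overrides": {"visible_cores": 2},
        },
        {
            "workload": "hadoop-sort",
            "workload_container": "hadoop",
            "network_profile": {"delay_ms": 50, "loss_pct": 0.1},
            "node_overrides": {"visible_cores": 2},
        },
    ],
}

with open("batch_sequence.json", "w") as f:
    json.dump(batch_job, f, indent=2)
```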
In the illustrated embodiment, performance data is monitored and aggregated from the batch processing of workloads to enable automatic system tuning, as described herein with respect to FIGS. 47 and 48, for example. [00151] The number of executions for a batch of workloads and/or configurations is specified via repeat count field 480. Based on user input to field 480, batch processor 80 executes one or more workloads for the specified number of iterations. A batch sequence table 482 comprises display data listing the batch jobs to be executed by the node cluster 14. A batch job includes one or more workloads that are adapted for execution a specified number of times (e.g., as specified based on input to field 480). In one embodiment, a batch job includes one or more cloud system configurations that are adapted for execution with one or more workloads a specified number of times. While only one batch job is listed in table 482, multiple batch jobs may be added to the table 482. Batch processor 80 selects the listed batch job(s) for execution based on user selection of input(s) 483 corresponding to the listed batch job(s). In one embodiment, the selected batch jobs are executed in a sequence in the order they are listed in table 482. The batch job is illustratively in a JSON file format, although other suitable formats may be used. The batch jobs listed in table 482 are edited, added, and deleted based on user selection of inputs 484, 486, 488, respectively. The order of the batch sequence is adjustable based on user selection of inputs 490, 492 to move a selected batch job to a different position in the sequence displayed in table 482. A batch sequence and other settings associated with the execution of the batch job may be loaded from memory (e.g., memory 34 or memory 90) via selectable input 494, and a currently configured batch sequence is saved to memory (e.g., memory 34 or memory 90) via selectable input 496. Inputs 484-496 are illustratively selectable buttons. [00152] Referring to FIG. 26, the Monitoring module 214 is selected. Based on user input to module 214, data monitor configurator 82 (FIG. 3) is operative to configure one or more data monitoring tools used for monitoring and collecting performance data during execution of a workload on the node cluster 14. Data monitor configurator 82 is operative to configure monitoring tools that monitor data related to the performance of node 16, the workload, the workload container, and/or network 18. In one embodiment, the monitoring tools configured by data monitor configurator 82 include both commercially available monitoring tools and custom monitoring tools provided by a user. The monitoring tools collect data from multiple sources within cloud computing system 10 and other available nodes 16. For example, the monitoring tools include kernel-mode measurement agent 46 and user-mode measurement agent 50 that collect data at each node 16 (FIG. 2). Control server 12 also includes one or more monitoring tools operative to monitor network and computing performance on node cluster 14. In one embodiment, based on user input (e.g., input to fields 530, 532 of FIG. 27), data monitor configurator 82 specifies a sampling rate at which the monitoring tool(s) monitors data from nodes 16.
Data monitor configurator 82 is operative to configure and initiate the operation of multiple data monitoring tools, including an Apache Hadoop monitoring tool provided on each node 16 (tab 500), a Ganglia tool provided on control server 12 (tab 502), a SystemTap tool provided on each node 16 (tab 504), and virtual memory statistics and I/O statistics monitoring tools provided on one or more nodes 16 (tab 506). [00153] The Hadoop monitoring tool monitors the performance of nodes 16 at the workload container level when the Hadoop workload container module is selected for execution on nodes 16. The Hadoop monitoring tool is loaded by configurator 22 onto each node 16 with the Hadoop workload container module to monitor data associated with the performance of the Hadoop workload container module based on the monitoring configuration identified in FIG. 26. As illustrated in FIG. 26, various monitoring parameters associated with the Hadoop monitoring tool are configured by data monitor configurator 82 based on user input to several modifiable fields and drop-down menus. The modifiable monitoring parameters include a default log level (selected based on input to drop-down menu 508), a maximum file size of collected data (selected based on input to field 510), a total size of all files of collected data (selected based on input to field 512), a log level of the JobTracker tool of the Hadoop workload container (selected based on input to drop-down menu 514), a log level of the TaskTracker tool of the Hadoop workload container (selected based on input to drop-down menu 516), and a log level of the FSNamesystem tool of the Hadoop workload container (selected based on input to drop-down menu 518). A log level identifies the type of data to collect via the Hadoop monitoring tool, such as information ("INFO"), warnings, errors, etc. The JobTracker, TaskTracker, and FSNamesystem tools of the Hadoop workload container include various processes and data tracked by data monitor configurator 82, including the initiation and completion of a workload at the master node 16, metadata associated with file system 55 (FIG. 2), and the initiation of the map and reduce tasks at worker nodes 16, for example. Other suitable data may be collected with the Hadoop monitoring tool. [00154] Referring to FIG. 27, the Ganglia monitoring tool is also operative to monitor and collect performance data of cloud computing system 10 based on the monitoring configuration implemented by data monitor configurator 82. Ganglia is a known system monitoring tool that provides remote live viewing (e.g., via control server 12) of system performance as well as graphs and charts showing historical statistics. In the illustrated embodiment, the Ganglia monitoring tool is executed on control server 12 based on the configuration data provided with data monitor configurator 82. Exemplary data monitored with Ganglia includes processing load averages of node processor 40 (CPUs) during workload execution, the utilization (e.g., stall or inactive time, percentage of time spent processing, percentage of time spent waiting, etc.) of node processors 40 and network 18 during workload execution, and other suitable data. The Ganglia monitoring tool is enabled and disabled by data monitor configurator 82 based on user selection of selectable inputs 520, and a unicast or a multicast communication mode is selected by data monitor configurator 82 based on user selection of selectable inputs 522.
Other configurable monitoring parameters associated with Ganglia include a data refresh interval of a generated graph of collected data (selected based on input to field 524), a cleanup threshold (selected based on input to field 526), and an interval for sending metadata (selected based on input to field 528). The values input into fields 524, 526, and 528 are illustratively in seconds. Data monitor configurator 82 is operative to adjust the collection (i.e., sampling) interval and sending intervals based on values (illustratively in seconds) entered into respective fields 530, 532 for collecting data during workload execution associated with the node processor 40 (CPU), the processing load on nodes 16 (e.g., associated with the workload being executed), the usage of node memory 42, the network performance of the nodes 16 on the communication network 18, and the hard disk usage of each node 16. [00155] The SystemTap tool is a kernel-mode measurement agent 46 (FIG. 2) that includes SystemTap monitoring software operative to extract, filter, and summarize data associated with nodes 16 of cloud computing system 10. In one embodiment, the SystemTap tool is executed on each node 16. SystemTap is implemented with Linux-based operating systems. SystemTap allows a customized monitoring script to be loaded onto each node 16 with customized monitoring configurations, including, for example, the sampling rate and the generation and display of histograms. As illustrated in FIG. 28, with the "Script" tab selected, SystemTap is enabled or disabled by data monitor configurator 82 based on user selection of inputs 536. A SystemTap script file is downloaded to control server 12, added for display in table 538, or removed/deleted from display in table 538 by data monitor configurator 82 based on user selection of respective inputs (buttons) 540. Table 538 comprises display data representing the script files that are available for selection based on user selection of corresponding input(s) 539. Data monitor configurator 82 loads the selected script file of table 538 onto each node 16 upon deployment of the cloud configuration by configurator 22. Other suitable configuration options are available based on user input and selections via tabs 534 for the SystemTap monitoring tool, including configuration of disk I/O, network I/O, and diagnostics, for example. [00156] Referring to FIG. 29, the I/O Time tab 506 provides user access to configure additional monitoring tools, including virtual memory statistics (VMStat) and input/output statistics (IOStat) that are loaded on one or more nodes 16. VMStat collects data associated with availability and utilization of system memory and block I/O controlled with the operating system, the performance of processes, interrupts, paging, etc., for example. For example, VMStat collects data associated with a utilization of system memory such as the amount or percent of time that system memory and/or the memory controller is busy performing read/write operations or is waiting. IOStat collects data associated with statistics (e.g., utilization, availability, etc.) of storage I/O controlled with the operating system, for example. For example, IOStat collects data associated with the percentage of time that processing cores of the processor 40 of the corresponding node 16 are busy executing instructions or waiting to execute instructions.
VMStat and IOStat are enabled/disabled by data monitor configurator 82 based on corresponding user selection of respective inputs 546, 548, and the sampling rates (i.e., refresh intervals) are selected by data monitor configurator 82 based on values (illustratively in seconds) entered into fields 550, 552. Based on user selection of the corresponding "enabled" inputs 546, 548 and the values input into fields 550, 552 of tab 506, data monitor configurator 82 configures the VMStat and IOStat monitoring tools, and configurator 22 loads the tools onto each node 16. [00157] The monitoring tools configured with data monitor configurator 82 cooperate to provide dynamic instrumentation for cloud computing system 10 for monitoring system performance. Based on the data collected via the configured monitoring tools, configurator 22 is operative to diagnose system bottlenecks and to determine optimal system configurations (e.g., hardware and network configurations), for example, as described herein. Further, data monitor configurator 82 provides a common user interface by displaying Monitoring module 214 on user interface 200 for receiving user input used to configure each monitoring tool and for displaying monitored data from each tool. [00158] Referring to FIG. 30, the Control and Status module 216, which comprises selectable data, is selected. Based on user input to module 216, configurator 22 is operative to launch (i.e., deploy) the system configuration to node cluster 14 by generating multiple configuration files 28 that are loaded onto each node 16. Configurator 22 initiates deployment of the current system configuration (i.e., the system configuration currently identified with modules 202-216) based on user selection of selectable input 560. Batch processor 80 of configurator 22 initiates the batch processing of one or more workloads and/or configurations, i.e., the batch sequence identified in table 482 of FIG. 25, based on user selection of selectable input 562. Workload configurator 78 of configurator 22 initiates the execution of custom workloads, such as custom workloads identified in field 430 of FIG. 22, based on user selection of selectable input 564. Upon deployment of the system configuration based on user selection of input 560, 562, or 564, configurator 22 automatically configures each selected node 16 with the selected node and network settings, workload, workload container module, data monitoring tools, etc., and instructs node cluster 14 to start executing the selected workload and/or batch jobs based on the system configuration information. Configurator 22 terminates or pauses a workload execution before completion based on user selection of respective selectable inputs 566, 568. Configurator 22 restarts a workload currently executing on node cluster 14 based on user selection of selectable input 570. Configurator 22 skips a workload currently executing on node cluster 14 based on user selection of selectable input 572 such that, for example, nodes 16 proceed to execute a next workload of a batch. Based on selection of selectable input 576, data monitor configurator 82 of configurator 22 implements the data monitoring tools, settings, and configuration identified via module 214. In one embodiment, implementing data monitoring settings on nodes 16 comprises generating a corresponding configuration file 28 (FIG. 3) that is provided to each node 16.
Based on user selection of input 574, configurator 22 terminates or shuts down the node cluster 14 following completion of the workload execution(s), i.e., following the receipt of a result of the workload execution from node cluster 14 and the collection of all requested data. Inputs 560-572 as well as inputs 582-595 are illustratively selectable buttons. [00159] System status is provided during workload execution via displays 578, 580. Displays 578, 580 show the progress of the workload execution as well as status information associated with each active node 16 of node cluster 14. The display of the system status is enabled or disabled based on user selection of button 595. [00160] In the illustrated embodiment, node configurator 72, network configurator 74, workload container configurator 76, workload configurator 78, batch processor 80, and data monitor configurator 82 (FIG. 3) each automatically generate at least one corresponding configuration file 28 following deployment initiation via input 560, 562, or 564 to implement their respective configuration functions. The configuration files 28 contain the corresponding configuration data and instructions for configuring each node 16 of the node cluster 14, as described herein. In one embodiment, configurator 22 automatically loads each configuration file 28 onto each node 16 of node cluster 14 following the generation of the files 28. Alternatively, a single configuration file 28 is generated that contains the configuration data and instructions from each component 70-84 of configurator 22, and configurator 22 automatically loads the single configuration file 28 onto each node 16 of node cluster 14 following generation of the configuration file 28. Each image file 92, 94, 96 corresponding to the respective operating system, workload container module, and workload is also loaded onto each node upon launching the configuration deployment with input 560, 562, or 564. Alternatively, nodes 16 may retrieve or request the configuration file(s) 28 and/or image files 92, 94, 96 following the generation of files 28 and image files 92, 94, 96 by configurator 22. [00161] The configuration files 28 deployed to nodes 16, as well as system configuration files saved via input 240 of FIG. 7, include all configuration data and information selected and loaded based on user input to and default settings of modules 202-216. For example, the configuration file 28 generated by node configurator 72 includes the number of nodes 16 to allocate and/or use for node cluster 14 and the hardware requirements and boot-time configuration of each node 16, as described herein. The hardware requirements include RAM size, number of CPU cores, and available disk space, for example. The configuration file 28 generated by network configurator 74 includes, for example, global default settings that apply to all nodes 16; group settings including which nodes 16 belong to a given group of node cluster 14, settings for network traffic within the node group, and settings for network traffic to other node groups of node cluster 14; node-specific settings including custom settings for network traffic between arbitrary nodes 16; network parameters including latency, bandwidth, corrupted and dropped packet rates, corrupted and dropped packet correlation and distribution, and rate of reordered packets, as described herein with respect to FIGS. 11-17; and other suitable network parameters and network topology configuration data.
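A minimal sketch of what the node and network portions of such a configuration file 28 might contain follows; the layout and all key names and values are assumptions for illustration only.

```python
# Hypothetical sketch of configuration-file content produced by the node and
# network configurators; every key and value here is illustrative.
node_config = {
    "cluster_size": 8,
    "hardware": {"ram_gb": 16, "cpu_cores": 4, "disk_gb": 200},
    "boot_time": {"enabled_cores": 2, "visible_memory_gb": 8},
}
network_config = {
    "global": {"delay_ms": 10, "bandwidth_mbps": 1000},
    "groups": {"rack1": ["node1", "node2"], "rack2": ["node3", "node4"]},
    "node_specific": {("node1", "node3"): {"delay_ms": 25}},
    "parameters": {"packet_loss_pct": 0.1, "reorder_pct": 0.5,
                   "corrupt_pct": 0.01, "duplicate_pct": 0.0},
}
```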
The configuration file 28 generated by workload container configurator 76 includes, for example, configuration settings for the primary workload container software used to run the workload. The configuration file 28 generated by workload configurator 78 includes, for example, configuration settings for the selected predefined or synthetic workload to be run on nodes 16. The configuration settings may include synthetic test workload configuration data including a synthetic test workload image file, a maximum instructions count, a maximum iterations count, and a ratio of I/O operations, for example. [00162] Upon initiation of deployment via input 560 (or inputs 562, 564), configurator 22 automatically performs several operations. According to one illustrative embodiment, configurator 22 allocates and starts the desired nodes 16 to select the cluster of nodes 14. Configurator 22 then passes the address (e.g., IP address) of the control server 12 to each node 16 and assigns and passes an identifier and/or address to each node 16. In one embodiment, each node 16 is configured to automatically contact control server 12 and to request the one or more configuration files 28 that describe the job and other configuration information following receipt of the control server 12 address. Each node 16 communicates with control server 12 using any suitable mechanism, including, for example, a specified RMI mechanism (e.g., web-based interface) to communicate directly with control server 12, HTTP requests to interact with control server 12 via Apache HTTP or Tomcat servers, or a remote shell mechanism. [00163] In one embodiment, configurator 22 waits until a request is received from each node 16 of node cluster 14. In one embodiment, if a node 16 fails to start, i.e., based on a lack of request or acknowledgement from the node 16, configurator 22 attempts to restart that node 16. If the node 16 continues to fail to start, configurator 22 identifies and requests another available node 16 not originally included in node cluster 14 to take the place of the failed node 16. The replacement node 16 includes hardware specifications and processing capabilities that are the same as or similar to those of the failed node 16. In one embodiment, configurator 22 continues to monitor nodes 16 throughout workload execution, and restarts nodes 16 (and the workload) that stop responding. Configurator 22 may detect nodes 16 not responding during workload execution based on failed data monitoring or other failed communications. [00164] Upon configurator 22 receiving a request from each node 16 of the node cluster 14, configurator 22 determines that each node 16 is ready to proceed. In one embodiment, configurator 22 then provides each node 16 with the required data, including configuration file(s) 28, the addresses and IDs of other nodes 16 in node cluster 14, and image files 92, 94, 96. Upon receipt of the required data from control server 12, the role of each node 16 in node cluster 14 is determined. In one embodiment, the role determination is made by control server 12 (e.g., automatically or based on user input) and communicated to nodes 16. Alternatively, the role determination is made by node cluster 14 using a distributed arbitration mechanism. In one embodiment, the role determination is dependent on the workload.
For example, for a node cluster 14 operating with the Hadoop workload container, a first node 16 may be designated as the master node 16 ("namenode") and the remaining nodes 16 may be designated as the slave/worker nodes 16 ("datanodes"). In one embodiment, the role determination of a node 16 further depends on the hardware properties of the node 16. For example, a group of nodes 16 with slower node processors 40 may be designated as database servers for storing data, and another group of nodes 16 with faster node processors 40 may be designated as compute nodes for processing the workload. In one embodiment, the role determination is based on user input provided via configuration file 28. For example, a user may assign a first node(s) 16 to perform a first task, a second node(s) 16 to perform a second task, a third node(s) 16 to perform a third task, and so on. [00165] Each node 16 proceeds to configure its virtual network settings based on the network configuration data received via configuration file(s) 28. This may include, for example, using a network delay and/or a packet loss emulator, as described herein. Each node 16 further proceeds to install and/or configure the user-requested software applications, including the workload container code module received via workload container image file 94. In one embodiment, multiple workload container modules (e.g., multiple versions/builds) are pre-installed at each node 16, and a soft link to the location of the selected workload container module is created based on configuration file 28. If a synthetic test workload is generated and selected at control server 12, each node 16 proceeds to activate the synthetic test workload based on workload image file 96. Each node 16 further proceeds to run the diagnostic and monitoring tools (e.g., Ganglia, SystemTap, VMStat, IOStat, etc.) based on the configuration information. Finally, each node 16 proceeds to start execution of the selected workload. [00166] In the illustrated embodiment, each step performed by configurator 22 and nodes 16 following deployment launch is synchronized across nodes 16 of node cluster 14. In one embodiment, configurator 22 of control server 12 coordinates nodes 16, although one or more nodes 16 of node cluster 14 may alternatively manage synchronization. In one embodiment, the synchronization mechanism used for coordinating node operation causes each node 16 to provide status feedback to control server 12 on a regular basis. As such, nodes 16 failing to report within a specified time are assumed to have crashed and are restarted by configurator 22. Configurator 22 may also provide a status to the user to indicate progress of the job, such as via displays 578, 580 of FIG. 30. [00167] Upon completion of the job, data aggregator 84 (FIG. 3) is operative to collect data from each node 16. In particular, the data collected by the monitoring tools of each node 16 (e.g., job output, performance statistics, application logs, etc.; see module 214) are accessed by control server 12 (e.g., memory 90 of FIG. 3). In one embodiment, data aggregator 84 retrieves the data from each node 16. In another embodiment, each node 16 pushes the data to data aggregator 84. In the illustrated embodiment, the data is communicated to control server 12 in the form of log files 98 from each node 16, as illustrated in FIG. 31 (see also FIG. 3). Each log file 98 includes the data collected by one or more of the various monitoring tools of each node 16.
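A minimal sketch of one way the log files 98 might be gathered from the nodes follows; it assumes each node exposes its logs over a simple HTTP endpoint, and the URL layout and file names are hypothetical.

```python
# Hypothetical sketch: pulling per-tool log files 98 from each node over HTTP.
import urllib.request

def collect_logs(nodes, tools=("hadoop", "systemtap", "vmstat", "iostat")):
    logs = {}
    for node in nodes:
        for tool in tools:
            url = f"http://{node}/logs/{tool}.log"  # assumed endpoint
            with urllib.request.urlopen(url) as resp:
                logs[(node, tool)] = resp.read().decode()
    return logs  # keyed by (node, tool) for later analysis and display
```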
As described herein, data aggregator 84 is operative to manipulate and analyze the collected data from the log files 98 and to display (e.g., via display 21 of FIG. 1) the aggregated data to the user in the form of graphs, histograms, charts, etc. Data aggregator 84 also aggregates data from monitoring tools provided on control server 12, such as the Ganglia monitoring tool described with respect to FIG. 27. [00168] Referring again to FIG. 30, data aggregator 84 is operative to collect and aggregate the performance data from each node 16 and to generate logs, statistics, graphs, and other representations of the data based on user selection of corresponding inputs 582-594 of module 216. Data aggregator 84 gathers raw statistical data provided in the log files 98 and provided with other monitoring tools based on user selection of input 586. Data aggregator 84 downloads, based on user selection of input 588, all log files 98 from nodes 16 to the local file system, where they may be further analyzed or stored for historical trend analysis. Data aggregator 84 retrieves only the log files associated with the SystemTap monitoring tool based on user selection of input 590. Data aggregator 84 displays one or more of the log files 98 provided by nodes 16 on user interface 200 based on user selection of input 582. Data aggregator 84 displays statistical data on user interface 200 in the form of graphs and charts based on user selection of input 584. The statistical data include performance data associated with, for example, the performance of network 18 and network communication by nodes 16, the performance of various hardware components of node 16, the workload execution, and the performance of the overall node cluster 14. Data aggregator 84 generates one or more graphs for display on user interface 200 illustrating various data collected from nodes 16 and from other monitoring tools based on user selection of input 592. [00169] In one embodiment, data aggregator 84 selects the data to display based on the data selected for monitoring with the monitoring tools configured in Monitoring module 214. In another embodiment, data aggregator 84 selects the data aggregated and displayed based on user inputs to Control and Status module 216. For example, a user selects which log files 98, statistical data, and graphs to display upon selecting respective inputs 582, 584, and 592. In one embodiment, data aggregator 84 selects which data to display in graphs and selects how to display the data (e.g., line graph, bar graph, histogram, etc.) based on user inputs to user interface 200. Exemplary graphical data displayed based on selection of input 592 include processor speed versus added network delay, workload execution speed versus number of processor cores, workload execution speed versus number of processing threads per core, the number of data packets transmitted or received by a particular node 16 over time, the number of data packets of a certain size communicated over time, the time spent by data packets in a network stack, etc. Configuring Boot-Time Parameters of Nodes of the Cloud Computing System [00170] FIG. 36 illustrates a flow diagram 620 of an exemplary operation performed by configurator 22 of FIGS. 1 and 3 for configuring a boot-time configuration of cloud computing system 10. Reference is made to FIGS. 1 and 3 throughout the description of FIG. 36. In the illustrated embodiment, configurator 22 configures node cluster 14 of FIG. 1 according to the flow diagram 620 of FIG.
36 based on a plurality of user selections received via user interface 200. At block 622, configurator 22 provides user interface 200 comprising selectable boot-time configuration data. Exemplary selectable boot-time configuration data includes selectable inputs 269, 271 and fields 268, 270, 272, 274, 276 of the displayed screen of FIG. 10. At block 624, node configurator 72 of configurator 22 selects, based on at least one user selection of the selectable boot-time configuration data, a boot-time configuration for at least one node 16 of a cluster of nodes 14 of the cloud computing system 10. [00171] At block 626, configurator 22 configures the at least one node 16 of the cluster of nodes 14 with the selected boot-time configuration to modify at least one boot-time parameter of the at least one node 16. For example, the at least one boot-time parameter includes a number of processing cores (based on input to field 268) of the at least one node 16 that are enabled during an execution of the workload and/or an amount of system memory (based on input to fields 270, 272) that is accessible by the operating system 44 (FIG. 2) of the at least one node 16. Further, a modified boot-time parameter may identify a subset of the plurality of instructions of the workload to be executed by the at least one node 16 based on the number of instructions input to field 274 and selection of the corresponding custom input 271. As such, the workload is executed with the cluster of nodes 14 based on the modification of the at least one boot-time parameter of the at least one node 16. In one embodiment, configurator 22 initiates the execution of the workload, and the cluster of nodes 14 executes the workload with at least one of a reduced computing capacity and a reduced memory capacity based on the modification of the at least one boot-time parameter. In particular, a modification to the number of processing cores with field 268 and selection of corresponding input 271 serves to reduce the computing capacity, and a modification to the amount of system memory with fields 270, 272 and selection of corresponding input 271 serves to reduce the memory capacity. [00172] In one embodiment, node configurator 72 selects, based on at least one user selection of the selectable boot-time configuration data, a first boot-time configuration for a first node 16 of the cluster of nodes 14 and a second boot-time configuration for a second node 16 of the cluster of nodes 14. In this embodiment, the first boot-time configuration includes a first modification of at least one boot-time parameter of the first node 16 and the second boot-time configuration includes a second modification of at least one boot-time parameter of the second node 16, and the first modification is different from the second modification. In one example, the first boot-time configuration includes enabling two processing cores of the first node 16, and the second boot-time configuration includes enabling three processing cores of the second node 16. Other suitable modifications of boot-time parameters of each node 16 may be provided as described above. [00173] FIG. 37 illustrates a flow diagram 630 of an exemplary operation performed by a node 16 of the cluster of nodes 14 of FIG. 1 for configuring a boot-time configuration of the node 16. Reference is made to FIGS. 1 and 3 throughout the description of FIG. 37.
At block 632, a node 16 of cluster of nodes 14 modifies at least one boot-time parameter of the node 16 based on a boot-time configuration adjustment request provided by cloud configuration server 12. In the illustrated embodiment, the boot-time configuration adjustment request is provided in a configuration file 28 (FIG. 3) and identifies a requested modification to one or more boot-time parameters of node 16 based on user selections made via inputs 269, 271 and fields 268, 270, 272, 274, 276 of FIG. 10, described herein. In the illustrated embodiment, the node 16 has an initial boot-time configuration prior to the modifying of the at least one boot-time parameter and a modified boot-time configuration following the modifying of the at least one boot-time parameter. The modified boot-time configuration provides at least one of a reduced computing capacity and a reduced memory capacity of the node 16, as described herein. [00174] At block 634, node 16 executes, following a reboot of the node 16 by the node 16, at least a portion of a workload upon a determination by the node 16 following the reboot of the node 16 that the at least one boot-time parameter has been modified according to the boot-time configuration adjustment request. In one embodiment, node 16 obtains the at least a portion of the workload from cloud configuration server 12 and executes the workload based on the modification to the at least one boot-time parameter. In one embodiment, the determination by node 16 is based on a flag (e.g., one or more bits) set by the node 16 following the modification to the at least one boot-time parameter and prior to the reboot of the node 16. A set flag indicates to the node 16 following a restart of the node 16 that the at least one boot-time parameter has already been modified, and thus node 16 does not attempt to modify the at least one boot-time parameter and reboot again. In one embodiment, the determination is based on a comparison of a boot-time configuration of the node 16 and a requested boot-time configuration identified with the boot-time configuration adjustment request. For example, node 16 compares the current boot-time parameters of the node 16 with the requested boot-time parameters identified with the boot-time configuration adjustment request and, if the parameters are the same, does not attempt to modify the at least one boot-time parameter and reboot again. In one embodiment, when node 16 receives a new configuration file containing a new boot-time configuration adjustment request, node 16 clears the flag before implementing the modification to the boot-time parameters according to the new boot-time configuration adjustment request. [00175] FIG. 38 illustrates a flow diagram 650 of an exemplary detailed operation performed by cloud computing system 10 for configuring a boot-time configuration of one or more nodes 16 of node cluster 14. Reference is made to FIGS. 1 and 3 throughout the description of FIG. 38. In the illustrated embodiment, configurator 22 performs blocks 652-656 of FIG. 38, and each configured node 16 performs blocks 658-664 of FIG. 38. At block 652, configurator 22 creates one or more boot-time configuration files 28 (FIG. 3) for corresponding nodes 16 based on user-defined boot-time parameters entered via user interface 200 (FIG. 10), as described herein. In one embodiment, the boot-time configuration file 28 is a patch for one or more configuration files of the node 16 or is in a task-specific file/data format.
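A minimal sketch of the apply-once flag logic described in connection with FIG. 37 follows; the flag location and the apply_params/reboot callables are hypothetical stand-ins for the node's actual mechanisms.

```python
# Hypothetical sketch: a node applies boot-time changes at most once, using a
# status flag set prior to the reboot (cf. blocks 632-634).
import os

FLAG = "/var/run/boot_config_applied"  # assumed flag location

def ensure_boot_config(requested, current, apply_params, reboot):
    # Flag already set, or parameters already match the request: do not
    # modify and reboot again; proceed to execute the workload portion.
    if os.path.exists(FLAG) or requested == current:
        return "proceed"
    apply_params(requested)   # e.g., patch the node's boot files
    open(FLAG, "w").close()   # set the flag before forcing the reboot
    reboot()                  # after restart, the flag short-circuits this path
```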
At block 654, configurator 22 starts the cluster of nodes 14 (e.g., upon user selection of input 560, or inputs 562, 564, of FIG. 30, as described herein). At block 656, configurator 22 distributes the boot-time configuration file(s) to the appropriate nodes 16 of the cluster of nodes 14. In one embodiment, each node 16 receives a boot-time configuration file, and each file may identify unique boot-time parameters for the respective node 16. In one embodiment, the configuration files 28 are pushed to the nodes, such as via a secure shell (SSH) file transfer, via an ftp client, via a user data string in Amazon AWS, or via another suitable file transfer mechanism. In another embodiment, the nodes 16 each query (e.g., via an HTTP request) control server 12 or master node 16 for the boot-time configuration information. At block 658, the node 16 applies the desired boot-time parameter changes specified in the received boot-time configuration file 28. In one example, the node 16 applies a patch to the boot files of the node 16, or the node 16 uses a utility to generate a new set of boot files for the node 16 based on the boot-time parameters specified in the received boot-time configuration file 28. In one embodiment, during or upon applying the desired boot-time changes at block 658, node 16 sets a status flag that indicates that the boot-time configuration has been updated, as described herein. At block 660, the node 16 forces a reboot following the application of the boot-time configuration changes. Upon rebooting, node 16 determines at block 662 that the boot-time configuration of the node 16 has already been updated with the boot-time parameter changes specified in the received boot-time configuration file 28. In one embodiment, node 16 determines the boot-time configuration is updated at block 662 based on the status flag set at block 658 or based on a comparison of the current boot-time configuration of the node 16 to the boot-time configuration file 28, as described herein. As such, node 16 reduces the likelihood of applying the boot-time configuration changes more than once. At block 664, node 16 proceeds with the execution of other tasks, including execution of the workload or the portion of the workload received from control server 12. Modifying and/or Emulating a Network Configuration [00176] FIG. 39 illustrates a flow diagram 700 of an exemplary operation performed by configurator 22 of FIGS. 1 and 3 for modifying a network configuration of the allocated cluster of nodes 14 of cloud computing system 10. Reference is made to FIGS. 1 and 3 as well as FIGS. 11-17 throughout the description of FIG. 39. At block 702, network configurator 74 modifies, based on a user selection received via user interface 200, a network configuration of at least one node 16 of cluster of nodes 14 of the cloud computing system 10. Modifying the network configuration of the at least one node 16 at block 702 comprises modifying the network performance of the at least one node 16 on communication network 18 (FIG. 1). The network performance is modified by modifying network parameters such as the packet communication rate, dropped or corrupted packets, reordered packets, etc., as described herein. In the illustrated embodiment, network configurator 74 modifies the network configuration of a node 16 by generating a network configuration file 28 (FIG. 3) based on the user selections and input provided via module 280 of user interface 200, described herein with respect to FIGS.
11-17, and by providing the network configuration file 28 to the node 16 (or the node 16 fetching the file 28). Nodes 16 then implement the changes to the network configuration of the node 16 specified in the accessed network configuration file 28. In the illustrated embodiment, the at least one node 16 has an initial network configuration prior to the modifying and a modified network configuration following the modifying. In one embodiment, the modified network configuration reduces network performance of the at least one node 16 on the communication network 18 during an execution of the selected workload. Alternatively, the modified network configuration increases network performance of the at least one node 16, such as, for example, by decreasing the communication delay value specified via field 302 of FIG. 11. [00177] In one embodiment, network configurator 74 modifies the network configuration of the at least one node 16 by changing at least one network parameter of the at least one node 16 to limit the network performance of the at least one node 16 on communication network 18 during an execution of the workload. In one embodiment, the at least one network parameter that is changed comprises at least one of a packet communication delay, a packet loss rate, a packet duplication rate, a packet corruption rate, a packet reordering rate, and a packet communication rate, which are selectable by a user via tabs 282-294, as described herein. As such, network configurator 74 limits the network performance of the at least one node 16 by generating and providing the node 16 access to a configuration file 28 that identifies a modification to a network parameter (e.g., an increased communication delay between nodes 16, an increased packet loss rate or corruption rate, etc.). [00178] In the illustrated embodiment, configurator 22 provides user interface 200 comprising selectable network configuration data, and network configurator 74 modifies the network configuration of the at least one node 16 based on at least one user selection of the selectable network configuration data, as described herein. Exemplary selectable network configuration data includes inputs 298-301 and corresponding fields 302-312 of FIG. 11, inputs 313, 314 and corresponding fields 315, 316 of FIG. 12, inputs 317, 318 and corresponding fields 319, 320 of FIG. 13, input 321 and corresponding field 322 of FIG. 14, inputs 323, 324 and corresponding fields 325, 326 of FIG. 15, inputs 327-330, 335-338 and corresponding fields 331-334 of FIG. 16, and input 340 and corresponding field 342 of FIG. 17. In one embodiment, network configurator 74 modifies the network performance by changing (i.e., via the network configuration file 28), based on at least one user selection of the selectable network configuration data, a first network parameter of a first node 16 of cluster of nodes 14 to limit the network performance of the first node 16 on the communication network 18 during the execution of the workload and by changing a second network parameter of a second node 16 of cluster of nodes 14 to limit the network performance of the second node 16 on the communication network 18 during the execution of the workload. In one embodiment, the first network parameter is different from the second network parameter. As such, network configurator 74 is operative to modify different network parameters of different nodes 16 of cluster of nodes 14 to achieve desired network characteristics of cluster of nodes 14 during workload execution.
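A minimal sketch of one way such a parameter modification might be applied on a node using the Linux Netem emulator follows; the wrapper function and default values are hypothetical, while the tc/netem command syntax is standard.

```python
# Hypothetical sketch: limiting a node's network performance with Netem,
# per the delay/loss/reorder parameters selectable via tabs 282-294.
import subprocess

def apply_netem(dev="eth0", delay_ms=50, loss_pct=1.0, reorder_pct=0.5):
    subprocess.run(["tc", "qdisc", "add", "dev", dev, "root", "netem",
                    "delay", f"{delay_ms}ms",
                    "loss", f"{loss_pct}%",
                    "reorder", f"{reorder_pct}%"], check=True)
```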
[00179] In the illustrated embodiment, configurator 22 is further operative to select a cluster of nodes 14 for cloud computing system 10 having a network configuration that substantially matches a network configuration of an emulated node cluster, as described herein with respect to FIGS. 40-42. As referenced herein, an emulated node cluster includes any group of networked nodes that has a known network configuration that is to be emulated by the node cluster 14 selected by control server 12. Each node of the emulated node cluster includes one or more processing devices and memory accessible by the processing devices. In one embodiment, the emulated node cluster does not include the available nodes 16 selectable by configurator 22. For example, the emulated node cluster includes nodes that are separate from the available nodes 16 housed in the one or more data center(s) and accessible by configurator 22, such as nodes that are provided by a user. Alternatively, the emulated node cluster may include a group of the available nodes 16. The network topology and network performance characteristics of the emulated node cluster are obtained using one or more network performance tests, as described below. Referring to FIG. 40, a flow diagram 710 of an exemplary operation performed by configurator 22 of FIGS. 1 and 3 is illustrated for selecting a cluster of nodes 14 that have network characteristics substantially matching network characteristics of an emulated node cluster. Reference is made to FIGS. 1 and 3 throughout the description of FIG. 40. In the illustrated embodiment, configurator 22 selects and configures node cluster 14 of FIG. 1 according to the flow diagram 710 of FIG. 40 based on user selections received via user interface 200, as described herein. At block 712, node configurator 72 compares a communication network configuration of an emulated node cluster and an actual communication network configuration of the plurality of available nodes 16. At block 714, node configurator 72 selects a cluster of nodes 14 for cloud computing system 10 from a plurality of available nodes 16 coupled to communication network 18 based on the comparison of block 712. The selected cluster of nodes 14 includes a subset of the plurality of available nodes 16. At block 716, node configurator 72 configures the selected cluster of nodes 14 to execute a workload such that each node 16 of the cluster of nodes 14 is operative to share processing of the workload with other nodes 16 of the cluster of nodes 14, as described herein. In one embodiment, blocks 712-716 are initiated upon deployment of the cloud configuration based on user input to module 216 of FIG. 30, as described herein. [00180] In the illustrated embodiment, the communication network configuration of the emulated node cluster and the actual communication network configuration of the plurality of available nodes 16 each include communication network characteristics associated with the corresponding nodes. Node configurator 72 selects the cluster of nodes 14 based on similarities between the communication network characteristics of the emulated node cluster and the communication network characteristics of the plurality of available nodes 16. Exemplary communication network characteristics include network topology and network parameters. Exemplary network parameters include communication rates and latencies between nodes, network bandwidth between nodes, and packet error rates.
Network topology includes the physical and logical connectivity of the nodes, the identification of which nodes and groups of nodes of the node cluster are physically located near or far from each other, the type of connection between the nodes (e.g., fiber optic link, satellite connection, etc.), and other suitable characteristics. The packet error rate includes dropped or lost packets, corrupted packets, reordered packets, duplicated packets, etc. In one embodiment, node configurator 72 prioritizes the communication network characteristics of the emulated node cluster and selects the cluster of nodes 14 based on the prioritized communication network characteristics, as described herein with respect to FIG. 41. [00181] In the illustrated embodiment, node configurator 72 initiates a network performance test on the available nodes 16 to identify the actual communication network configuration of the available nodes 16. Any suitable network performance test may be used. For example, node configurator 72 may send a request to each available node 16 to execute a computer network administration utility such as Packet Internet Groper ("Ping") to test and collect data regarding the network performance between available nodes 16. Based on the results of the Ping test provided by each node 16, node configurator 72 determines the actual communication network configuration of the available nodes 16. In one embodiment, Ping is used in conjunction with other network performance tests to obtain the actual communication network configuration. Configurator 22 aggregates the network performance test results received from nodes 16 to create a network descriptor data file or object (see data file 750 of FIG. 42, for example) that identifies the actual communication network configuration of the available nodes 16. In one embodiment, configurator 22 initiates the network performance test and aggregates the results based on user input to user interface 200. For example, a user selection of button 586 of FIG. 30 or another suitable input may cause configurator 22 to initiate the test and aggregate the results. [00182] In the illustrated embodiment, node configurator 72 also accesses one or more data files (e.g., data file 750 of FIG. 42) identifying the communication network configuration of the emulated node cluster. In one embodiment, the data file(s) are obtained offline of control server 12 by implementing the one or more network performance tests on the emulated cluster of nodes (e.g., Ping test, etc.). In one embodiment, configurator 22 loads the data file associated with the emulated node cluster into accessible memory (e.g., memory 90 of FIG. 3). For example, configurator 22 may load the data file based on a user identifying the location of the data file via user interface 200, such as via inputs to table 226 of FIG. 7. As such, configurator 22 performs the comparison at block 712 of FIG. 40 by comparing the communication network characteristics identified in the generated data file associated with the available nodes 16 and the accessed data file associated with the emulated node cluster. [00183] An exemplary data file 750 is illustrated in FIG. 42. Data file 750 identifies a network configuration of any suitable networked nodes, such as available nodes 16 accessible by control server 12 or nodes of an emulated node cluster. As shown, data file 750 identifies several groups of nodes illustratively including Groups A, B,...M. 
Each Group A, B, ..., M includes nodes that are physically near each other, such as nodes on the same physical rack of a data center. Lines 6-11 identify network parameters associated with network communication by nodes of Group A, lines 15-22 identify network parameters associated with network communication by nodes of Group B, and lines 27-34 identify network parameters associated with network communication by nodes of Group M. For example, lines 6 and 7 identify a latency, bandwidth, and error rate associated with communication between nodes of Group A. Lines 8 and 9 identify a latency, bandwidth, and error rate associated with communication between Group A nodes and Group B nodes. Similarly, lines 10 and 11 identify a latency, bandwidth, and error rate associated with communication between Group A nodes and Group M nodes. The network parameters associated with communication by nodes of Groups B and M are similarly identified in data file 750. Data file 750 may identify additional network configuration data, such as network topology data and other network parameters, as described herein. [00184] Referring to FIG. 41, a flow diagram 720 is illustrated of an exemplary detailed operation performed by one or more computing devices, including configurator 22 of FIGS. 1 and 3, for selecting a cluster of nodes 14 that have network characteristics substantially matching network characteristics of an emulated node cluster. Reference is made to FIGS. 1 and 3 throughout the description of FIG. 41. At block 722, a network configuration is requested from each node of the emulated node cluster. For example, the network performance test is initiated on each node, and the test results are received by the computing device, as described herein. At block 724, the network configuration data file (e.g., data file 750) is created based on the network configuration data received from the nodes of the emulated node cluster resulting from the performance test. As described herein, blocks 722 and 724 may be performed offline by a computing system separate from cloud computing system 10, such as with computer 20 of FIG. 1, for example.
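A minimal sketch of such a network descriptor, in the spirit of data file 750, follows; the dictionary layout, key names, and values are assumptions, and the accessor anticipates the function P(N, X, Y) discussed below.

```python
# Hypothetical sketch of a network descriptor object: per-group-pair latency,
# bandwidth, and error rate, as in data file 750. All values are illustrative.
network_descriptor = {
    ("A", "A"): {"latency_ms": 0.1, "bandwidth_mbps": 10000, "error_rate": 1e-6},
    ("A", "B"): {"latency_ms": 0.5, "bandwidth_mbps": 1000, "error_rate": 1e-5},
    ("A", "M"): {"latency_ms": 2.0, "bandwidth_mbps": 100, "error_rate": 1e-4},
    # ... entries for Groups B and M follow the same pattern
}

def P(prop, group_x, group_y):
    """Return network property `prop` between two groups (cf. P(N, X, Y))."""
    key = ((group_x, group_y) if (group_x, group_y) in network_descriptor
           else (group_y, group_x))
    return network_descriptor[key][prop]
```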
[00186] At block 732, configurator 22 tunes the selected nodes 16 based on the desired network configuration parameters identified in the data file associated with the emulated node cluster. For example, the network characteristics of the selected nodes 16 may not exactly match the network characteristics of the emulated node cluster, and further network tuning may be required or desired. As such, the operating system 44, network topology driver 48, and/or other network components and network parameters of each node 16 are tuned to further achieve the desired network performance of the emulated node cluster. In one embodiment, configurator 22 tunes the selected nodes 16 automatically based on the network characteristics identified in the data file. In one embodiment, network parameters are tuned further based on user input provided via module 206 of user interface 200, as described herein with respect to FIGS. 11-17, for example. [00187] In one exemplary embodiment, configurator 22 selects the suitable nodes 16 at block 730 using the following "best matching" technique, although other suitable methods and algorithms may be provided. Configurator 22 considers $Z$ network properties (i.e., characteristics) when comparing the network configuration data of the data files (e.g., latency $p_0$, bandwidth $p_1$, ..., error rate $p_Z$), and nodes $X_1, X_2, \ldots, X_Q$ are the nodes on the emulated node cluster. Configurator 22 selects a subset of available nodes 16 (e.g., nodes $Y_1, Y_2, \ldots, Y_Q$) that are most similar to nodes $X_1, X_2, \ldots, X_Q$ with respect to network properties $p_0, p_1, \ldots, p_Z$. Although other algorithms may be used to perform the selection, one exemplary algorithm implemented by configurator 22 for finding a suitable subset of available nodes 16 includes prioritizing the network properties. In an exemplary prioritization, property $p_0$ has higher priority than property $p_1$, and property $p_k$ has higher priority than property $p_{k+1}$. As such, in the illustrated example, latency is given a higher priority than bandwidth during the node selection, and bandwidth is given a higher priority than error rate during the node selection. A function $P(N, X, Y)$ with inputs $N$ (network property), $X$ (node), and $Y$ (node) may be configured to return the value of network property $N$ between network nodes $X$ and $Y$. Such a function may be implemented using the network descriptor data files/objects (e.g., data files 750) created at blocks 724, 728. An initial list of nodes $L = \{Y_1, Y_2, Y_3, \ldots\}$ contains all of the available nodes 16. For each node $Y_g$ in the cloud, where $1 \le g \le R$ ($R$ is the total number of nodes in $L$, $R \ge Q$), an aggregate score over the prioritized properties applies per equation (1): $$f(Y_g) = \sum_{k=0}^{Z} w_k \sum_{\substack{h=1 \\ h \ne g}}^{R} P(p_k, Y_g, Y_h), \tag{1}$$ in which the weights $w_0 > w_1 > \cdots > w_Z$ reflect the property prioritization. [00188] For each node $X_i$ in the emulated node cluster, where $1 \le i \le Q$ ($Q$ is the number of nodes in the emulated node cluster), the corresponding score applies per equation (2): $$f(X_i) = \sum_{k=0}^{Z} w_k \sum_{\substack{j=1 \\ j \ne i}}^{Q} P(p_k, X_i, X_j). \tag{2}$$ [00189] The algorithm proceeds to find an available node $Y_w$ for the cloud computing system 10 such that $\lvert f(Y_w) - f(X_i) \rvert = \min_{Y_g \in L} \lvert f(Y_g) - f(X_i) \rvert$. As such, node $Y_w$ is used to simulate original node $X_i$, and node $Y_w$ is removed from list $L$. The algorithm proceeds until a full set of available nodes 16 is selected. Other suitable methods and algorithms for selecting the nodes 16 at block 730 may be provided. [00190] In one exemplary embodiment, configurator 22 tunes the selected nodes 16 at block 732 using the following method, although other methods and algorithms may be provided.
With this method, configurator 22 runs a configuration application that automatically creates the appropriate network simulation layer on each node 16. If using a Netem network delay and loss emulator, the following algorithm is implemented by configurator 22. For each node in the emulated node cluster, $G_s$ is the node group that the emulated node belongs to (i.e., each node group comprises nodes that are physically near each other, e.g., same rack). For each group $G_i$, where $1 \le i \le E$ and $E$ is the total number of groups defined in the data file associated with the emulated node cluster, the following is performed by configurator 22. Configurator 22 looks up the desired network properties $p_0, \ldots, p_N$ for outgoing traffic from group $G_s$ to group $G_i$. Configurator 22 creates a new class of service, such as by using the command "tc class add dev," for example. Configurator 22 creates a new queuing discipline, such as by using the command "tc qdisc add dev," for example. Configurator 22 sets the desired network properties to the class or queuing discipline "qdisc." The bandwidth and burst network properties are specified at the class, and all other properties (latency, error rate, etc.) are specified at the queuing discipline. For each node $Y_n$, $G_{Y_n}$ is the group that the node $Y_n$ belongs to. Configurator 22 configures a filter based on the destination IP address (address of node $Y_n$) and assigns it to class $G_{Y_n}$. This can be done, for example, using the command "tc filter add dev."
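A minimal sketch of this per-group tuning loop follows; the HTB handles, rates, and helper functions are hypothetical, while the tc class/qdisc/filter commands themselves follow the standard syntax referenced above.

```python
# Hypothetical sketch: one HTB class per destination group (bandwidth/burst),
# a netem qdisc per class (latency, loss), and u32 filters steering traffic
# by destination IP address.
import subprocess

def tc(*args):
    subprocess.run(["tc", *args], check=True)

def setup_root(dev="eth0"):
    tc("qdisc", "add", "dev", dev, "root", "handle", "1:", "htb")

def tune_outgoing(dev, class_id, rate, burst, delay, loss, dst_ips):
    # Bandwidth and burst are specified at the class ("tc class add dev").
    tc("class", "add", "dev", dev, "parent", "1:", "classid", f"1:{class_id}",
       "htb", "rate", rate, "burst", burst)
    # Latency, error rate, etc. are specified at the qdisc ("tc qdisc add dev").
    tc("qdisc", "add", "dev", dev, "parent", f"1:{class_id}", "netem",
       "delay", delay, "loss", loss)
    # Traffic to each node of the destination group is assigned to the class.
    for ip in dst_ips:
        tc("filter", "add", "dev", dev, "protocol", "ip", "parent", "1:",
           "prio", "1", "u32", "match", "ip", "dst", ip,
           "flowid", f"1:{class_id}")

# Example (illustrative values): setup_root("eth0") followed by
# tune_outgoing("eth0", 10, "100mbit", "15k", "2ms", "0.1%", ["10.0.1.2"])
```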
At block 766, node configurator 72 selects a subset of nodes 16 for the cloud computing system 10 from the group of available nodes 16 based on the comparison at block 764. The subset of nodes 16, such as node cluster 14 or a group of nodes 16 of the node cluster 14, are operative to share processing of a workload, as described herein. The number of nodes 16 in the subset of nodes 16 is less than or equal to a number of nodes 16 requested by the user for the node cluster 14, as described herein. [00194] In one embodiment, node configurator 72 receives a user request via user interface 200 requesting a cluster of nodes for the cloud computing system 10 having the desired hardware performance characteristics. The user request identifies the desired hardware performance characteristics based on, for example, user selections of selectable hardware configuration data, such as selection boxes 259, inputs 262, and field 256 of FIG. 8 and selectable inputs 265 of FIG. 9. In one embodiment, the fields of table 264 of FIG. 9 are selectable/modifiable to further identify desired hardware performance characteristics. Node configurator 72 may identify the desired hardware performance characteristics based on other suitable selectable inputs and fields of user interface 200. Node configurator 72 selects the group of available nodes 16 for testing with the hardware performance assessment test based on the user request of the cluster of nodes and the desired hardware performance characteristics identified in the request (e.g., based on hardware similarities between the available nodes 16 and the requested cluster of nodes). In the illustrated embodiment, the number of nodes 16 of the group of available nodes 16 is greater than the number of nodes 16 of the cluster of nodes requested with the user request. [00195] An exemplary hardware performance characteristic includes the computer architecture of a node 16, such as whether the node 16 has a 64-bit processor architecture or a 32-bit processor architecture to support a workload that requires native 32-bit and/or 64-bit operations. Other exemplary hardware performance characteristics include a manufacturer of the processor(s) 40 of the node 16 (e.g., AMD, Intel, Nvidia, etc.), an operating frequency of the processor(s) 40 of the node 16, and a read/write performance of the node 16. Still other exemplary hardware performance characteristics include: a system memory capacity and a disk space (storage capacity); number and size of processors 40 of the node 16; a cache size of the node 16; available instruction sets of the node 16; disk I/O performance, hard drive speed of the node 16; the ability of the node 16 to support emulating software; the chipset; the type of memory of the node 16; the network communication latency/bandwidth between nodes 16; and other suitable hardware performance characteristics. In the illustrated embodiment, each of these hardware performance characteristics may be specified as desired by a user based on the user request provided via user interface 200. Further, one or more hardware performance assessment tests are operative to determine these actual hardware performance characteristics of each selected available node 16. 
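As an illustration of the kind of data such a test might return, the following minimal sketch probes a few of the characteristics listed above using only the Python standard library; the report format and the timing probe are assumptions.

```python
# Hypothetical sketch: a node-side probe reporting a few hardware
# performance characteristics (architecture, core count, disk space, and a
# crude timing measurement in the spirit of the 64/32-bit test above).
import os
import platform
import shutil
import time

def probe_node():
    t0 = time.perf_counter()
    sum(range(10**6))  # simple workload whose duration reflects processor speed
    return {
        "arch": platform.machine(),          # e.g., x86_64 vs. i686
        "cpu_cores": os.cpu_count(),
        "disk_free_bytes": shutil.disk_usage("/").free,
        "probe_seconds": time.perf_counter() - t0,
    }
```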
[00196] In one embodiment, node configurator 72 initiates the hardware performance assessment test at block 762 by deploying one or more hardware performance assessment tools to each node 16 that are operative to identify or determine the hardware performance characteristics of the node 16 and to generate hardware configuration data representative of these characteristics. Data aggregator 84 is then operative to aggregate the hardware performance data provided by the hardware performance assessment tools such that node configurator 72 can determine the actual hardware performance characteristics of each node 16 based on the aggregated data. An exemplary assessment tool includes a CPU identification tool ("CPUID"), which is known in the art, that includes an executable operation code for identifying the type of processor(s) of the node 16 and various characteristics/features of the processor (e.g., manufacturer, processor speed and capacity, available memory and disk space, etc.). Another exemplary monitoring tool includes a software code module that when executed by the node 16 is operative to test for an instruction set extension or instruction type to determine the instruction set compatible with the node 16 and/or the manufacturer of the processor(s). Another exemplary monitoring tool includes software code modules that when executed by the node 16 are operative to test whether a node 16 has 64-bit or 32-bit architecture. For example, such a test may involve issuing a command or processing request and measuring how long the processor takes to complete the request. Other suitable assessment tools may be provided. [00197] In one embodiment, the number of nodes 16 of the subset of nodes 16 selected at block 766 is less than the number of nodes 16 identified in the user request. As such, configurator 22 repeats steps 762-766 to obtain additional subsets of nodes 16 until the number of selected nodes 16 is equal to the number of nodes 16 requested with the user request. In one embodiment, after selecting the first subset of nodes 16 at block 766, node configurator 72 selects a second group of available nodes 16 different from the first group of available nodes 16 initially tested at block 762. Data monitor configurator 82 initiates the hardware performance assessment test on the second group of available nodes 16 to obtain actual hardware performance characteristics of the second group of available nodes 16, and node configurator 72 selects a second subset of nodes 16 for the cloud computing system 10 from the second group of available nodes 16 based on a comparison by the node configurator 72 of the actual hardware performance characteristics of the second group of available nodes and the desired hardware performance characteristics. In one embodiment, upon the combined number of nodes of the selected subsets of nodes 16 being equal to the number of nodes 16 requested with the user request, node configurator 72 configures the selected subsets of nodes 16 as the cluster of nodes 14 of cloud computing system 10 (i.e., configures the node cluster 14 with user-specified configuration parameters and runs workloads on the node cluster 14, etc.). [00198] Referring to FIG. 44, a flow diagram 770 is illustrated of an exemplary detailed operation performed by one or more computing devices, including configurator 22 of FIGS. 1 and 3, for selecting a cluster of nodes 14 that have hardware characteristics substantially matching desired hardware characteristics specified by a user. 
Reference is made to FIGS. 1-3 throughout the description of FIG. 44. At block 772, node configurator 72 receives a user request for N nodes 16 having desired hardware performance characteristics, where N is any suitable number of desired nodes 16. In one embodiment, the user request is based on user selection of selectable hardware configuration data (e.g., FIGS. 8 and 9), as described herein with respect to FIG. 43. At block 774, node configurator 72 requests or reserves N+M nodes 16 from the available nodes 16 of the accessed data center(s) or cloud. M is any suitable number such that the number (N+M) of reserved available nodes 16 exceeds the number N of requested nodes 16. For example, M may equal N or may equal twice N. Alternatively, node configurator 72 may request N available nodes 16 at block 774. In one embodiment, the (N+M) nodes 16 are allocated or reserved using an application specific API (e.g., an Amazon AWS API, an OpenStack API, a custom API, etc.). Node configurator 72 requests the available nodes 16 at block 774 (and block 788) based on the available nodes 16 having similar hardware characteristics as the desired cluster of nodes. For example, node configurator 72 may reserve available nodes 16 that have the same node type (e.g., small, medium, large, x-large, as described herein). [00199] At block 776, data monitor configurator 82 initiates the hardware performance assessment test on each reserved node 16 by deploying one or more hardware performance assessment tools, and data aggregator 84 aggregates (e.g., collects and stores) hardware performance data resulting from the hardware performance assessment tests initiated on each node 16, as described herein with respect to FIG. 43. In one embodiment, the hardware performance assessment tools are software code modules preinstalled at nodes 16 or installed on nodes 16 using SSH, HTTP, or some other suitable protocol/mechanism. [00200] At block 780, node configurator 72 compares the desired hardware performance characteristics of the user request (block 772) with the actual hardware performance characteristics resulting from the hardware performance assessment tests. Based on similarities in the actual and desired hardware performance characteristics, node configurator 72 at block 782 selects X nodes 16 from the (N+M) reserved nodes 16 that best match the desired hardware characteristics, where X is any number that is less than or equal to the number N of requested nodes 16. Any suitable algorithm may be used to compare the hardware characteristics and to select best-matching nodes 16, such as the "best matching" technique described herein with respect to FIG. 41 based on hardware characteristics. At block 784, node configurator 72 releases the remaining unselected available nodes 16 (e.g., (N+M)-X) back to the data center(s) or cloud, such as by using application specific APIs, for example, so that the unselected available nodes 16 are available for use with other cloud computing systems. Upon the selected number X of nodes 16 being less than the requested number N of nodes 16 at block 786, node configurator 72 requests or reserves additional nodes 16 from the data center(s)/cloud at block 788. Configurator 22 then repeats steps 776-786 until the total number of selected nodes 16 (i.e., the combined number of nodes 16 resulting from all iterations of the selection method) is equal to the number N of requested nodes 16. 
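A minimal sketch of this reserve-test-select-release loop follows; the reserve, release, test, and score callables stand in for provider-specific APIs and for the comparison of actual versus desired characteristics, and are assumptions for illustration.

```python
# Hypothetical sketch of the loop of FIG. 44: over-reserve, assess, keep the
# best-matching nodes, release the rest, and repeat until N nodes are selected.
def select_cluster(n_requested, m_extra, reserve, release, test, score):
    selected = []
    while len(selected) < n_requested:
        candidates = reserve(n_requested + m_extra - len(selected))
        # Lower score means a closer match to the desired characteristics.
        ranked = sorted(candidates, key=lambda node: score(test(node)))
        picked = ranked[:n_requested - len(selected)]
        selected.extend(picked)
        release([node for node in candidates if node not in picked])
    return selected
```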
The selected nodes 16 are then configured as the cluster of nodes 14 for performing the cloud computing tasks assigned by the user. [00201] In one embodiment, the method of FIG. 44 operates in conjunction with the method of FIG. 41 to select a cluster of nodes 14 having desired hardware characteristics and network characteristics. In one embodiment, the method of FIG. 44 selects nodes 16 further based on the nodes 16 having close network proximity. In one embodiment, the hardware characteristics identified with the user request at block 772 are prioritized prior to selecting nodes 16 for the node cluster 14. In one embodiment, the method of FIG. 44 (and FIG. 43) is run automatically by configurator 22 to find a suitable match of the actual selected cluster of nodes 14 with the desired cluster of nodes specified by the user. Alternatively, a user may be given the option by configurator 22 to initiate the operations of FIGS. 43 and 44 based on selectable inputs of user interface 200, for example. Selecting and/or Modifying a Hardware Configuration of the Cloud Computing System [00202] FIG. 45 illustrates a flow diagram 800 of an exemplary operation performed by configurator 22 of FIGS. 1 and 3 for selecting a hardware configuration of the cluster of nodes 14 of cloud computing system 10. Reference is made to FIGS. 1 and 3 throughout the description of FIG. 45. At block 802, node configurator 72 determines, based on a shared execution of a workload by cluster of nodes 14 of the cloud computing system 10, that at least one node 16 of the cluster of nodes 14 operated at less than a threshold operating capacity during the shared execution of the workload. The threshold operating capacity is illustratively based on the hardware utilization by the at least one node 16, e.g., the utilization of processor 40 and/or memory 42 during workload execution. The threshold operating capacity may be any suitable threshold, such as, for example, a maximum operating capacity (100%) or a 90% operating capacity. At block 804, node configurator 72 selects a modified hardware configuration of the cluster of nodes 14 based on the determination at block 802 such that the cluster of nodes 14 with the modified hardware configuration has at least one of a reduced computing capacity and a reduced storage capacity. [00203] In one embodiment, node configurator 72 selects the modified hardware configuration by selecting at least one different node 16 from a plurality of available nodes 16 of the data center and replacing the at least one node 16 of the cluster of nodes 14 with the at least one different node 16. The different node 16 has at least one of a reduced computing capacity and a reduced storage capacity compared with the replaced node 16 of the cluster of nodes 14. For example, node configurator 72 selects a different node 16 from the available nodes 16 that has a slower processor 40, fewer processing cores, less memory capacity, or any other suitable reduced hardware characteristic as compared with the replaced node 16. For example, the replaced node 16 has more computing power or memory capacity than is required to process the workload such that portions of the hardware of the replaced node 16 are underutilized during workload execution. In the illustrated embodiment, the different node 16 is selected such that it is operative to process the workload with a similar performance (e.g., similar execution speed, etc.) 
to that of the one or more replaced nodes 16 but more efficiently due to the reduced computing and/or storage capacity of the different node 16. As such, the cluster of nodes 14 modified with the different node 16 executes the workload more efficiently due to the reduced computing and/or storage capacity of the different node 16 while exhibiting little or no overall performance loss. For example, the node cluster 14 executes the workload at a substantially similar speed with the different node 16 as with the replaced node 16. [00204] In one embodiment, node configurator 72 selects and implements the modified hardware configuration of block 804 by selecting and removing one or more nodes 16 from the cluster of nodes 14 without replacing the removed nodes 16 with different nodes 16. For example, node configurator 72 determines that one or more nodes 16 of the node cluster 14 are not needed for the remaining nodes 16 of node cluster 14 to execute the workload with a similar execution performance. Node configurator 72 thus removes these one or more nodes 16 from the node cluster 14 and releases these nodes 16 back to the data center. In one embodiment, node configurator 72 selects and implements the modified hardware configuration of block 804 by reducing at least one of the computing capacity and memory capacity of one or more nodes 16 of the node cluster 14 (e.g., by adjusting the boot-time parameters described herein). [00205] In the illustrated embodiment, configurator 22 has access to hardware usage cost data that identifies the hardware usage cost associated with using various hardware resources (e.g., nodes 16) for the node cluster 14. For example, the cloud computing service (e.g., Amazon, OpenStack, etc.) charges a usage cost based on the hardware, such as the computing capacity and memory capacity, of each selected node 16 of the node cluster 14. As such, in one embodiment, node configurator 72 selects the at least one different node 16 to replace one or more nodes 16 of the node cluster 14 further based on a comparison by the node configurator 72 of usage cost data associated with using the at least one different node 16 in the cluster of nodes 14 and usage cost data associated with using the at least one replaced node 16 in the cluster of nodes 14. In one embodiment, node configurator 72 selects the at least one different node 16 upon the usage cost of the at least one different node 16 being less than the usage cost of the replaced node 16. For example, node configurator 72 calculates the cost of the hardware resources (e.g., nodes 16) used in the cluster of nodes 14 and determines cost benefits associated with potential hardware configuration changes of the cluster of nodes 14. For example, node configurator 72 selects one or more different nodes 16 that will result in a more efficient use of allocated hardware resources of the node cluster 14 at a lower usage cost and with minimum performance loss. In one embodiment, configurator 22 configures the network configuration or other configuration parameters based on a similar cost analysis. [00206] In the illustrated embodiment, configurator 22 monitors the hardware utilization of each node 16 by deploying one or more hardware utilization monitoring tools to each node 16 of the cluster of nodes 14.
Execution of the hardware utilization monitoring tools by each node 16 is operative to cause the at least one processor 40 of each node 16 to monitor a utilization or usage of the computer hardware (e.g., processor 40, memory 42, memory controller, etc.) during the execution of the workload. The monitoring tools then cause the nodes 16 to provide hardware utilization data accessible by configurator 22 that is associated with the hardware utilization of each node 16 during execution of the workload. Data aggregator 84 of configurator 22 is operative to aggregate the hardware utilization data provided by each node 16 such that configurator 22 determines the hardware utilization of each node 16 based on the aggregated hardware utilization data. Exemplary hardware monitoring tools are described herein with respect to Monitoring module 214 of FIGS. 26-29. For example, the IOStat and VMStat tools include code modules executable by the node processor 40 to monitor the percentage of time the processor 40, virtual memory, and/or memory controller is busy executing instructions or performing I/O operations during workload execution, the percentage of time these components are waiting/stalled during workload execution, and other suitable utilization parameters. Based on the determined hardware utilization of a node 16, node configurator 72 may determine that less memory and/or less computing power is needed for that node 16 than was initially requested and allocated and may replace or remove the node 16 from the cluster 14, as described herein. [00207] In one embodiment, node configurator 72 displays selectable hardware configuration data on user interface 200 that represents the modified hardware configuration selected at block 804. Based on user selection of the selectable hardware configuration data, node configurator 72 modifies the hardware configuration of the cluster of nodes 14, e.g., replaces or removes a node 16 of the node cluster 14. Exemplary selectable hardware configuration data is illustrated in table 258 of FIG. 8 with selectable inputs 259, 262. For example, node configurator 72 may display the recommended modified hardware configuration of node cluster 14 in table 258 by listing the recommended nodes 16 of the node cluster 14 including one or more different nodes 16 or removed nodes 16. The user selects the inputs 259 corresponding to the listed nodes 16 to accept the hardware changes, and node configurator 72 configures the modified node cluster 14 based on the accepted changes upon initiation of workload deployment, described herein. In one embodiment, the hardware usage cost is also displayed with user interface 200 for one or more recommended hardware configurations of the node cluster 14 to allow a user to select a configuration for implementation based on the associated usage cost. Other suitable interfaces may be provided for displaying the modified hardware configuration of the cluster of nodes 14. In one embodiment, node configurator 72 automatically configures the cluster of nodes 14 with the modified hardware configuration selected at block 804 without user input or confirmation, and initiates further executions of the workload with the modified node cluster 14. [00208] Referring to FIG. 46, a flow diagram 810 is illustrated of an exemplary detailed operation performed by one or more computing devices, including configurator 22 of FIGS. 1 and 3, for selecting a hardware configuration of the cluster of nodes 14 of cloud computing system 10.
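As a rough analogue of the utilization monitors just described, the Python sketch below samples processor and memory usage while a workload runs elsewhere on the node; the third-party psutil package is used here as a stand-in for IOStat/VMStat-style tools, which is an assumption of this example rather than part of the described system.

import time
import psutil  # third-party package used here as a stand-in monitor

def monitor_utilization(duration_s: float, period_s: float = 1.0) -> dict:
    # Sample CPU and memory utilization over the workload's run time.
    samples = max(1, int(duration_s / period_s))
    cpu, mem = [], []
    for _ in range(samples):
        cpu.append(psutil.cpu_percent(interval=period_s))  # % busy this period
        mem.append(psutil.virtual_memory().percent)        # % of RAM in use
    return {"cpu_avg": sum(cpu) / len(cpu), "mem_avg": sum(mem) / len(mem)}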
Reference is made to FIGS. 1-3 throughout the description of FIG. 46. At block 812, configurator 22 provides user interface 200 including selectable node data to allow for user selection at block 814 of a desired cluster of nodes 14 with a desired hardware configuration, as described herein. At block 816, configurator 22 selects and configures the selected cluster of nodes 14 and deploys the workload to the cluster of nodes 14, as described herein. At block 818, configurator 22 installs and/or configures the hardware utilization monitoring tools on each node 16 of the node cluster 14. In one embodiment, the monitoring tools are selected by a user via Monitoring module 214 of FIGS. 26-29. Alternatively, configurator 22 may automatically deploy one or more monitoring tools, such as the IOStat and VMStat tools, based on initiation of the method of FIG. 46. At block 820, workload configurator 78 initiates execution of a workload on the cluster of nodes 14, and at block 822, following or during the execution, data aggregator 84 collects and stores the hardware utilization data provided by the monitoring tools of each node 16. [00209] Upon completion of the workload execution by the node cluster 14, node configurator 72 determines the hardware utilization of each node 16 based on the hardware utilization data, as represented with block 824. At block 826, node configurator 72 determines whether the hardware utilization of each node 16 met or exceeded a utilization threshold (e.g., 100% utilization, 90% utilization, or any other suitable utilization threshold). In one embodiment, node configurator 72 compares multiple utilization measurements to one or more utilization thresholds at block 826, such as processor utilization, memory utilization, memory controller utilization, etc. If yes at block 826, the node cluster 14 is determined to be suitable for further workload executions, i.e., no adjustments to the hardware configuration of the node cluster 14 are made by configurator 22. For each node 16 that does not meet or exceed the utilization threshold at block 826, node configurator 72 identifies a different, replacement node 16 from the available nodes 16 of the data center that has hardware that is suitable for execution of the workload (i.e., similar performance to the replaced node(s) 16) while having less computing or memory capacity than the replaced node 16, as described herein with respect to FIG. 45. At block 830, node configurator 72 provides feedback to a user of any recommended hardware configuration changes identified at block 828 by displaying the recommended hardware configuration of the cluster of nodes 14 on user interface 200, as described with respect to FIG. 45. At block 832, node configurator 72 applies the recommended hardware configuration changes for future executions of the workload by removing and/or replacing nodes 16 of the original node cluster 14 with the different nodes 16 identified at block 828. [00210] In one embodiment, a selection by the user of a selectable input of user interface 200 causes node configurator 72 to run the hardware configuration method described with FIGS. 45 and 46 to find a suitable configuration of node cluster 14 for executing the workload. Alternatively, configurator 22 may automatically implement the methods of FIGS. 45 and 46, such as upon initiation of a batch processing job, for example, to find a suitable alternative configuration of the cluster of nodes 14 that does not significantly limit workload performance.
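A condensed Python sketch of the utilization check and replacement recommendation of blocks 824-832 follows; the 90% threshold and the find_smaller_node helper, which would search the data center's available nodes, are illustrative assumptions rather than elements defined by this disclosure.

def recommend_downsizing(node_utilization, available_nodes,
                         find_smaller_node, threshold=90.0):
    # node_utilization maps each node to its measured utilization in percent.
    # Returns node -> recommended smaller replacement (or None to remove it).
    recommendations = {}
    for node, utilization in node_utilization.items():
        if utilization >= threshold:
            continue  # Block 826: node is well utilized; keep it as-is.
        # Block 828: find a node with less compute/memory capacity that should
        # still execute the workload with similar performance.
        recommendations[node] = find_smaller_node(node, available_nodes)
    return recommendations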
Tuning the Cloud Computing System [00211] FIG. 47 illustrates a flow diagram 850 of an exemplary operation performed by configurator 22 of FIGS. 1 and 3 for selecting a suitable configuration of the cluster of nodes 14 of cloud computing system 10 from a plurality of available configurations. Reference is made to FIGS. 1 and 3 throughout the description of FIG. 47. At block 852, configurator 22 (e.g., batch processor 80) initiates a plurality of executions of a workload on cluster of nodes 14 based on a plurality of different sets of configuration parameters for the cluster of nodes 14. The configuration parameters, provided as input to nodes 16 by configurator 22 (e.g., via one or more configuration files 28 as described herein), are adjustable by configurator 22 to provide the different sets of configuration parameters, and the workload is executed by the cluster of nodes 14 with each different set of configuration parameters. In one embodiment, configurator 22 adjusts the configuration parameters for each workload execution based on user input provided via user interface 200, as described herein. In one embodiment, the configuration parameters include at least one of the following: an operational parameter of the workload container of at least one node 16, a boot-time parameter of at least one node 16, and a hardware configuration parameter of at least one node 16. [00212] At block 854, node configurator 72 selects a set of configuration parameters for the cluster of nodes 14 from the plurality of different sets of configuration parameters. At block 856, workload configurator 78 provides (e.g., deploys) the workload to the cluster of nodes 14 for execution by the cluster of nodes 14 configured with the selected set of configuration parameters. As such, future executions of the workload are performed by the cluster of nodes 14 having a configuration that is based on the selected set of configuration parameters. [00213] The selection of the set of configuration parameters at block 854 is based on a comparison by the node configurator 72 of at least one performance characteristic of the cluster of nodes 14 monitored (e.g., with monitoring tools) during each execution of the workload and at least one desired performance characteristic of the cluster of nodes 14. For example, in one embodiment node configurator 72 selects the set of configuration parameters that result in performance characteristics of the node cluster 14 during workload execution that best match desired performance characteristics specified by a user. In the illustrated embodiment, the desired performance characteristics are identified by node configurator 72 based on user input provided via user interface 200. For example, user interface 200 includes selectable performance data, such as selectable inputs or fillable fields, that allow a user to select desired performance characteristics of the cluster of nodes 14 when executing a selected workload. See, for example, fillable field 276 of FIG. 10 or any other suitable selectable input or field of user interface 200 configured to receive user input identifying desired performance characteristics. In another example, node configurator 72 may load a user-provided file containing data identifying the desired performance characteristics, such as based on user selection of inputs 238, 228, 230, 232 of FIG. 7 and/or button 494 of the batch processor module 212 of FIG. 25, for example.
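The repeated executions of blocks 852-856 amount to a parameter sweep, which the following Python sketch summarizes; configure_cluster, run_workload, measure, and score_against_desired are hypothetical callables standing in for the configuration files, workload deployment, and monitoring described above.

def sweep_configurations(parameter_sets, configure_cluster, run_workload,
                         measure, score_against_desired):
    # Execute the workload once per configuration set and keep the best set.
    results = []
    for params in parameter_sets:
        configure_cluster(params)   # e.g. deploy a configuration file per node
        run_workload()
        observed = measure()        # aggregated monitoring-tool data
        results.append((score_against_desired(observed), params))
    best_score, best_params = max(results, key=lambda r: r[0])
    return best_params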
[00214] Exemplary performance characteristics specified by the user and monitored during workload execution include a workload execution time, a processor utilization by a node 16, a memory utilization by a node 16, a power consumption by a node 16, a hard disk input/output (I/O) utilization by a node 16, and a network utilization by a node 16. Other suitable performance characteristics may be monitored and/or specified by a user, such as the performance characteristics monitored with the monitoring tools described herein with respect to FIGS. 26-29. [00215] In one embodiment, the selection of the set of configuration parameters at block 854 is further based on a determination by node configurator 72 that a value associated with one or more performance characteristics monitored during an execution of the workload falls within a range of acceptable values associated with one or more corresponding desired performance characteristics. For example, ranges of acceptable values (e.g., input by a user or set by node configurator 72) associated with corresponding desired performance characteristics may include 85% to 100% processor utilization and 85% to 100% memory utilization. Accordingly, node configurator 72 selects a set of configuration parameters that result in 95% processor utilization and 90% memory utilization but rejects a set of configuration parameters resulting in 80% processor utilization and 75% memory utilization. Upon multiple sets of configuration parameters resulting in performance characteristics that meet the acceptable range of values, node configurator 72 selects the set of configuration parameters based on additional factors, such as the best performance values, the lowest usage cost, priorities of the performance characteristics, or other suitable factors. Upon no sets of configuration parameters resulting in performance characteristics that fall within the acceptable ranges, node configurator 72 selects the set that results in the best matching performance characteristics, automatically further adjusts configuration parameters until an appropriate set is found, and/or notifies the user that no sets of configuration parameters were found to be acceptable. [00216] In one embodiment, node configurator 72 assigns a score value to each different set of configuration parameters based on the similarities of the monitored performance characteristics to the desired performance characteristics. As such, the selection of the set of configuration parameters at block 854 is further based on the score value assigned to the selected set of configuration parameters. For example, node configurator 72 selects the set of configuration parameters resulting in the highest score value. The score value ranks the sets of configuration parameters based on how closely the performance characteristics of the node cluster 14 match the desired performance characteristics. [00217] In one embodiment, the selection of the set of configuration parameters at block 854 is further based on a comparison of usage cost data associated with using different available nodes 16 or network configurations with the cluster of nodes 14. For example, node configurator 72 may select a set of configuration parameters that result in a processor and memory utilization greater than a threshold utilization level and a usage cost less than a threshold cost level. Any other suitable considerations of usage cost may be applied to the selection at block 854. 
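The range test and score value described above can be expressed directly; the Python sketch below reuses the 85% to 100% utilization ranges from the example above, while the simple distance-based score is an assumption of this example, since the disclosure leaves the scoring function open.

def within_ranges(observed: dict, acceptable: dict) -> bool:
    # Check each monitored characteristic against its acceptable range.
    return all(lo <= observed[name] <= hi
               for name, (lo, hi) in acceptable.items())

def score(observed: dict, desired: dict) -> float:
    # Higher is better: penalize distance from each desired characteristic.
    return -sum(abs(observed[name] - target)
                for name, target in desired.items())

# Example: require 85-100% processor and memory utilization.
acceptable = {"cpu_util": (85.0, 100.0), "mem_util": (85.0, 100.0)}
print(within_ranges({"cpu_util": 95.0, "mem_util": 90.0}, acceptable))  # True
print(within_ranges({"cpu_util": 80.0, "mem_util": 75.0}, acceptable))  # False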
[00218] In one embodiment, configurator 22 initiates a first execution of the workload on node cluster 14 based on an initial set of configuration parameters provided by a user (e.g., via user interface 200). In this embodiment, to find a set of configuration parameters resulting in the desired performance characteristics, node configurator 72 steps through different sets of configuration parameters by automatically adjusting at least one configuration parameter of the initial set and initiating additional executions of the workload based on the modified initial sets. Any suitable design space exploration method or algorithm may be used to explore different sets of configuration parameters in this fashion. [00219] In one embodiment, data monitor configurator 82 deploys one or more node and network performance monitoring tools (described with FIGS. 26-29, for example) to each node 16 of the cluster of nodes 14. The monitoring tools when executed by each node 16 (or by control server 12) are operative to monitor performance characteristics of each node 16 during each execution of the workload, as described herein. The executed monitoring tools generate performance data representing the performance characteristics of the corresponding node 16 that are accessible by configurator 22. Data aggregator 84 aggregates the performance data provided by the performance monitoring tools of each node 16, and node configurator 72 selects the set of configuration parameters at block 854 based on the aggregated performance data. [00220] As described herein, the different sets of configuration parameters of the cluster of nodes 14 include at least one of an operational parameter of the workload container, a boot-time parameter, and a hardware configuration parameter. Exemplary operational parameters of the workload container are described herein with respect to FIGS. 4-6, 19, and 20 and include, for example, operational parameters associated with at least one of a read/write operation, a file system operation, a network socket operation, and a sorting operation. The operational parameters are selected and modified by workload container configurator 76 based on user selections of the selectable data (e.g., inputs and fields) illustrated in FIGS. 19 and 20 and described herein. Exemplary operational parameters associated with the read/write operation include a memory buffer size for the read/write operation and a size of a data block transferred during the read/write operation. Exemplary operational parameters associated with the file system operation comprise at least one of a number of file system records stored in memory of each node 16 and a number of processing threads of each node 16 allocated for processing requests for the file system. An exemplary operational parameter associated with the sorting operation includes a number of data streams to merge when performing the sorting operation. Other suitable operational parameters of a workload container may be provided. [00221] Exemplary boot-time parameters are described herein with respect to FIGS. 10 and 36-38 and include, for example, a number of processing cores of a node 16 that are enabled during an execution of the workload and an amount of system memory of a node 16 that is accessible by an operating system 44 of the node 16. The boot-time parameters are selected and modified by node configurator 72 based on user selection of the selectable data (e.g., inputs and fields) illustrated in FIG. 10 and described herein.
Other suitable boot-time parameters may be provided. Exemplary hardware configuration parameters are described herein with respect to FIGS. 8, 9, and 43-46 and include, for example, at least one of a number of processors 40 of a node 16, an amount of system memory of a node 16, and an amount of hard disk space of a node 16. The hardware configuration parameters are selected and modified by node configurator 72 based on user selection of the selectable data (e.g., inputs and fields) illustrated in FIGS. 8 and 9 and described herein. Other suitable hardware configuration parameters may be provided. [00222] Referring to FIG. 48, a flow diagram 860 is illustrated of an exemplary detailed operation performed by one or more computing devices, including configurator 22 of FIGS. 1 and 3, for selecting a suitable configuration of the cluster of nodes 14 of cloud computing system 10 from a plurality of available configurations. Reference is made to FIGS. 1-3 throughout the description of FIG. 48. In the illustrated embodiment of FIG. 48, configurator 22 stops searching for a suitable set of configuration parameters upon the actual performance of the node cluster 14 meeting or exceeding the desired performance. In another embodiment, configurator 22 tries each set of identified configuration parameters before selecting a set of configuration parameters that are a best match based on the desired performance characteristics and/or other suitable factors (e.g., usage cost). [00223] At block 862, configurator 22 receives one or more sets of configuration parameters as well as the desired performance characteristics associated with the workload execution based on user input received via user interface 200, as described herein. At block 864, configurator 22 allocates a cluster of nodes 14 and configures the cluster of nodes 14 with a set of configuration parameters received at block 862. In one embodiment, configurator 22 deploys one or more configuration files 28 to nodes 16 identifying the configuration parameters at block 864, as described herein. Configurator 22 installs and/or configures one or more monitoring tools (e.g., selected by a user via module 214) on each node 16 at block 866 and initiates an execution of the workload by the cluster of nodes 14 at block 868. Upon or during execution of the workload, configurator 22 aggregates the performance data generated by the one or more monitoring tools of each node 16 at block 870. Based on the aggregated performance data, at block 872 configurator 22 compares the desired performance characteristics identified at block 862 with the actual performance characteristics of the cluster 14 identified with the aggregated performance data, as described herein. At block 874, configurator 22 determines if the performance characteristics are suitable as compared with the desired performance characteristics (e.g., within an acceptable range, having a suitable score value, etc.), as described herein. If yes at block 874, configurator 22 keeps the current configuration parameters last implemented at block 864 for future executions of the workload. If the performance characteristics are not as desired at block 874 and if the available different sets of configuration parameters are not exhausted at block 876, configurator 22 selects a different set of configuration parameters at block 878, and repeats the functions of blocks 864-876.
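Putting the loop of blocks 864-880 together, a compact Python sketch follows; the callables are hypothetical placeholders for the configuration, execution, and monitoring steps described above.

def tune(config_sets, configure, execute, measure, is_acceptable, score):
    # Try configuration sets until one meets the desired performance; fall
    # back to the best-scoring set if every option is exhausted (block 880).
    tried = []
    for params in config_sets:
        configure(params)            # block 864
        execute()                    # block 868
        observed = measure()         # block 870
        if is_acceptable(observed):  # block 874
            return params            # keep the current configuration
        tried.append((score(observed), params))
    return max(tried, key=lambda t: t[0])[1] if tried else None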
For example, configurator 22 may implement a different set of configuration parameters identified at block 862 or an incrementally adjusted set of parameters provided by configurator 22, as described above. The process repeats until the configurator 22 finds a suitable set of configuration parameters at block 874 or the configuration parameter options are exhausted at block 876. If the configuration options are exhausted at block 876, configurator 22 selects the set of configuration parameters that provided the best performance characteristics and other identified characteristics (e.g., usage cost) at block 880. [00224] Among other advantages, the method and system allow for the selection, configuration, and deployment of a cluster of nodes, a workload, a workload container, and a network configuration via a user interface. In addition, the method and system allow for the control and adjustment of cloud configuration parameters, thereby enabling performance analysis of the cloud computing system under varying characteristics of the nodes, network, workload container, and/or workload. Other advantages will be recognized by those of ordinary skill in the art. [00225] While this invention has been described as having preferred designs, the present invention can be further modified within the spirit and scope of this disclosure. This application is therefore intended to cover any variations, uses, or adaptations of the invention using its general principles. Further, this application is intended to cover such departures from the present disclosure as come within known or customary practice in the art to which this disclosure pertains and which fall within the limits of the appended claims. |
Interleaved cache controllers with shared metadata are disclosed and described. A memory system may comprise a plurality of cache controllers and a metadata store interconnected by a metadata store fabric. The metadata store receives information from at least one of the plurality of cache controllers, a portion of which is stored as shared distributed metadata. The metadata store provides the plurality of cache controllers with shared access to the hosted shared distributed metadata. |
CLAIMS
What is claimed is:
1. A memory system, comprising:
a plurality of cache controllers with circuitry configured to:
access memory controllers which access memory;
a metadata store in communication with the plurality of cache controllers with circuitry configured to:
receive information from at least one of the plurality of cache controllers, a portion of which is stored as shared distributed metadata;
provide shared access of the shared distributed metadata hosted to the plurality of cache controllers; and
a metadata store fabric disposed between the plurality of cache controllers and the metadata store to facilitate the shared access.
2. The memory system of Claim 1, wherein the information is related to a task assigned to one of the plurality of cache controllers.
3. The memory system of Claim 2, wherein the metadata store fabric further comprises a common logic block to manage the task assigned to one of the plurality of cache controllers.
4. The memory system of Claim 2, wherein the metadata store further comprises a logic block to manage the task assigned to one of the plurality of cache controllers.
5. The memory system of Claim 1, wherein the metadata store is one of a plurality of metadata stores.
6. The memory system of Claim 1, wherein the metadata store is one of a plurality of metadata stores and the number of the plurality of metadata stores corresponds to the number of the plurality of cache controllers.
7. The memory system of Claim 1, wherein the metadata store is one of a plurality of metadata stores and the number of the plurality of metadata stores is greater than the number of the plurality of cache controllers.
8. The memory system of Claim 1, wherein the metadata store is a static random-access memory (SRAM) array.
9. The memory system of Claim 1, wherein one of the tasks assigned to the metadata store comprises maintaining least recently used (LRU) indications.
10. The memory system of Claim 1, wherein one of the tasks assigned to the metadata store comprises re-allocating an entry based on the least recently used (LRU) indication when a new system memory address is to be cached.
11. The memory system of Claim 1, wherein the shared distributed metadata hosted by the metadata store comprises valid bits and dirty bits.
12. The memory system of Claim 1, wherein the shared distributed metadata hosted by the metadata store comprises lock bits pertaining to the plurality of cache controllers.
13. The memory system of Claim 12, wherein a lock bit is to assert that the valid bits and dirty bits of a given cache controller are locked and are not changed except by the given cache controller.
14. The memory system of Claim 1, wherein one of the plurality of cache controllers, upon completion of all transactions relating to a metadata entry, is to update the metadata store of appropriate valid bits and dirty bits and cause a lock bit to be cleared.
15. The memory system of Claim 14, wherein a logic block is configured to identify dirty entries for a scrubbing operation wherein the logic block is associated with the metadata store fabric or the metadata store.
16. 
A system, comprising:
one or more processors configured to process data;
an input output subsystem configured to receive input data and to output data;
a plurality of memory controllers to access a plurality of memory;
a plurality of cache controllers with circuitry configured to:
access memory controllers which access memory;
a cache controller fabric disposed between the system fabric and the plurality of cache controllers;
a metadata store in communication with the plurality of cache controllers with circuitry configured to:
receive information from at least one of the plurality of cache controllers, a portion of which is stored as shared distributed metadata;
provide shared access of the shared distributed metadata hosted to the plurality of cache controllers;
a metadata store fabric disposed between the plurality of cache controllers and the metadata store; and
a system fabric configured to connect the one or more processors and the input output subsystem to the plurality of memory controllers and the plurality of cache controllers.
17. The system of Claim 16, wherein the information is related to a task assigned to one of the plurality of cache controllers.
18. The system of Claim 17, wherein the metadata store fabric further comprises a common logic block to manage the task assigned to one of the plurality of cache controllers.
19. The system of Claim 17, wherein the metadata store further comprises a logic block to manage the task assigned to one of the plurality of cache controllers.
20. The system of Claim 16, wherein the metadata store is one of a plurality of metadata stores.
21. The system of Claim 16, wherein the metadata store is one of a plurality of metadata stores and the number of the plurality of metadata stores corresponds to the number of the plurality of cache controllers.
22. The system of Claim 16, wherein the metadata store is one of a plurality of metadata stores and the number of the plurality of metadata stores is greater than the number of the plurality of cache controllers.
23. The system of Claim 16, wherein the metadata store is a static random-access memory (SRAM) array.
24. The system of Claim 16, wherein one of the tasks assigned to the metadata store comprises maintaining least recently used (LRU) indications.
25. The system of Claim 16, wherein one of the tasks assigned to the metadata store comprises re-allocating an entry based on the least recently used (LRU) indication when a new system memory address is to be cached.
26. The system of Claim 16, wherein the shared distributed metadata hosted by the metadata store comprises valid bits and dirty bits.
27. The system of Claim 16, wherein the shared distributed metadata hosted by the metadata store comprises lock bits pertaining to the plurality of cache controllers.
28. The system of Claim 27, wherein a lock bit is to assert that the valid bits and dirty bits of a given cache controller are locked and are not changed except by the given cache controller.
29. The system of Claim 16, wherein one of the plurality of cache controllers, upon completion of all transactions relating to a metadata entry, is to update the metadata store of appropriate valid bits and dirty bits and cause a lock bit to be cleared.
30. The system of Claim 18, wherein a logic block is configured to identify dirty entries for a scrubbing operation wherein the logic block is associated with the metadata store fabric or the metadata store.
31. 
A method, comprising:
connecting a metadata store with a plurality of cache controllers via a metadata store fabric;
receiving information at the metadata store from at least one of the plurality of cache controllers;
storing the information as shared distributed metadata in the metadata store;
providing shared access of the shared distributed metadata to the plurality of cache controllers; and
assigning a task to a logic block wherein the task executed at the logic block operates on the shared distributed metadata.
32. The method of Claim 31, wherein the metadata store is one of a plurality of metadata stores.
33. The method of Claim 31, wherein the plurality of cache controllers and the metadata store are interconnected via a metadata store fabric.
34. The method of Claim 31, wherein the plurality of cache controllers and a plurality of metadata stores are interconnected via a metadata store fabric.
35. The method of Claim 31, wherein a metadata store fabric comprises a common logic block to manage metadata operations.
36. The method of Claim 31, wherein the metadata store further comprises a logic block to manage metadata operations.
37. The method of Claim 31, wherein the metadata store stores metadata in a static random-access memory (SRAM) array.
38. The method of Claim 31, wherein the task assigned to a logic block comprises maintaining least recently used (LRU) indications.
39. The method of Claim 31, wherein the task assigned to a logic block comprises re-allocating a clean entry with a higher least recently used (LRU) indication when a new system memory address is to be cached.
40. The method of Claim 31, wherein the shared distributed metadata hosted by the metadata store comprises tag bits, valid bits, and dirty bits.
41. The method of Claim 40, further comprising:
locking the valid bits and dirty bits of a given cache controller via a lock bit indicating that the valid bits and dirty bits of the given cache controller are not to be changed except by the given cache controller.
42. The method of Claim 31, further comprising:
upon completion of relevant transactions at a given cache controller, updating an appropriate metadata store of appropriate valid bits and dirty bits and causing a lock bit to be cleared. |
INTERLEAVED CACHE CONTROLLERS WITH SHARED METADATA AND RELATED DEVICES AND SYSTEMS
BACKGROUND
Computer and electronic devices have become integral to the lives of many and include a wide range of uses from social media activity to intensive computational data analysis. Such devices can include smart phones, tablets, laptops, desktop computers, network servers, and the like. Memory systems and subsystems play an important role in the implementation of such devices, and are one of the key factors affecting performance. Accordingly, memory systems and subsystems are the subject of continual research and development.
BRIEF DESCRIPTION OF THE DRAWINGS
Features and advantages of the embodiments will be apparent from the detailed description which follows, taken in conjunction with the accompanying drawings, which together illustrate, by way of example, embodiment features; and, wherein:
FIG. 1 is a schematic view of an exemplary memory system;
FIG. 2 is a schematic view of an exemplary memory system;
FIG. 3 is a schematic view of an exemplary memory system;
FIG. 4 is a schematic view of an exemplary memory system;
FIG. 5 is a schematic view of an exemplary memory system;
FIG. 6 is a schematic view of an exemplary memory system;
FIG. 7A is a schematic view of an exemplary memory system;
FIG. 7B is a schematic view of an exemplary memory system;
FIG. 7C is a schematic view of an exemplary memory system;
FIG. 8A is a representation of an exemplary metadata entry;
FIG. 8B is a representation of an exemplary shared metadata entry;
FIG. 9 is a schematic view of an exemplary system; and
FIG. 10 is a representation of steps of an exemplary method of a memory system with shared metadata.
Reference will now be made to the exemplary embodiments illustrated, and specific language will be used herein to describe the same. It will nevertheless be understood that no limitation on invention scope is thereby intended.
DESCRIPTION OF EMBODIMENTS
Although the following detailed description contains many specifics for the purpose of illustration, a person of ordinary skill in the art will appreciate that many variations and alterations to the following details can be made and are considered included herein. Accordingly, the following embodiments are set forth without any loss of generality to, and without imposing limitations upon, any claims set forth. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting. It should also be understood that terminology employed herein is used for describing particular examples or embodiments only and is not intended to be limiting. The same reference numerals in different drawings represent the same element. Numbers provided in flow charts and processes are provided for clarity in illustrating steps and operations and do not necessarily indicate a particular order or sequence. Furthermore, the described features, structures, or characteristics can be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
As used in this written description, the singular forms "a," "an" and "the" include support for plural referents unless the context clearly dictates otherwise.
Thus, for example, reference to "a bit line" includes support for a plurality of such bit lines.
In this application, "comprises," "comprising," "containing" and "having" and the like can have the meaning ascribed to them in U. S. Patent law and can mean "includes," "including," and the like, and are generally interpreted to be open ended terms. The terms "consisting of" or "consists of" are closed terms, and include only the components, structures, steps, or the like specifically listed in conjunction with such terms, as well as that which is in accordance with U. S. Patent law. "Consisting essentially of" or "consists essentially of" have the meaning generally ascribed to them by U.S. Patent law. In particular, such terms are generally closed terms, with the exception of allowing inclusion of additional items, materials, components, steps, or elements, that do not materially affect the basic and novel characteristics or function of the item(s) used in connection therewith. For example, trace elements present in a composition, but not affecting the composition's nature or characteristics would be permissible if present under the "consisting essentially of" language, even though not expressly recited in a list of items following such terminology. When using an open ended term in this written description, like "comprising" or "including," it is understood that direct support should be afforded also to "consisting essentially of" language as well as "consisting of" language as if stated explicitly and vice versa.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Similarly, if a method is described herein as comprising a series of steps, the order of such steps as presented herein is not necessarily the only order in which such steps may be performed, and certain of the stated steps may possibly be omitted and/or certain other steps not described herein may possibly be added to the method.
The terms "left," "right," "front," "back," "top," "bottom," "over," "under," and the like in the description and in the claims, if any, are used for descriptive purposes and not necessarily for describing permanent relative positions. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments described herein are, for example, capable of operation in other orientations than those illustrated or otherwise described herein.
As used herein, "enhanced," "improved," "performance-enhanced," "upgraded," and the like, when used in connection with the description of a device or process, refers to a characteristic of the device or process that provides measurably better form or function as compared to previously known devices or processes. This applies both to the form and function of individual components in a device or process, as well as to such devices or processes as a whole.
As used herein, "coupled" refers to a relationship of electrical or physical connection or attachment between one item and another item, and includes relationships of either direct or indirect connection or attachment.
Any number of items can be coupled, such as materials, components, structures, layers, devices, objects, etc.
As used herein, "directly coupled" refers to a relationship of electrical or physical connection or attachment between one item and another item where the items have at least one point of direct physical contact or otherwise touch one another. For example, when one layer of material is deposited on or against another layer of material, the layers can be said to be directly coupled.
Objects or structures described herein as being "adjacent to" each other may be in physical contact with each other, in close proximity to each other, or in the same general region or area as each other, as appropriate for the context in which the phrase is used.
As used herein, the term "substantially" refers to the complete or nearly complete extent or degree of an action, characteristic, property, state, structure, item, or result. For example, an object that is "substantially" enclosed would mean that the object is either completely enclosed or nearly completely enclosed. The exact allowable degree of deviation from absolute completeness may in some cases depend on the specific context. However, generally speaking, the nearness of completion will be so as to have the same overall result as if absolute and total completion were obtained. The use of "substantially" is equally applicable when used in a negative connotation to refer to the complete or near complete lack of an action, characteristic, property, state, structure, item, or result. For example, a composition that is "substantially free of" particles would either completely lack particles, or so nearly completely lack particles that the effect would be the same as if it completely lacked particles. In other words, a composition that is "substantially free of" an ingredient or element may still actually contain such item as long as there is no measurable effect thereof.
As used herein, the term "about" is used to provide flexibility to a numerical range endpoint by providing that a given value may be "a little above" or "a little below" the endpoint. However, it is to be understood that even when the term "about" is used in the present specification in connection with a specific numerical value, that support for the exact numerical value recited apart from the "about" terminology is also provided.
As used herein, a plurality of items, structural elements, compositional elements, and/or materials may be presented in a common list for convenience. However, these lists should be construed as though each member of the list is individually identified as a separate and unique member. Thus, no individual member of such list should be construed as a de facto equivalent of any other member of the same list solely based on their presentation in a common group without indications to the contrary.
Concentrations, amounts, and other numerical data may be expressed or presented herein in a range format. It is to be understood that such a range format is used merely for convenience and brevity and thus should be interpreted flexibly to include not only the numerical values explicitly recited as the limits of the range, but also to include all the individual numerical values or subranges encompassed within that range as if each numerical value and sub-range is explicitly recited.
As an illustration, a numerical range of "about 1 to about 5" should be interpreted to include not only the explicitly recited values of about 1 to about 5, but also to include individual values and subranges within the indicated range. Thus, included in this numerical range are individual values such as 2, 3, and 4 and sub-ranges such as from 1-3, from 2-4, and from 3-5, etc., as well as 1, 1.5, 2, 2.3, 3, 3.8, 4, 4.6, 5, and 5.1 individually.
This same principle applies to ranges reciting only one numerical value as a minimum or a maximum. Furthermore, such an interpretation should apply regardless of the breadth of the range or the characteristics being described.
Reference throughout this specification to "an example" means that a particular feature, structure, or characteristic described in connection with the example is included in at least one embodiment. Thus, appearances of the phrase "in an example" in various places throughout this specification are not necessarily all referring to the same embodiment.
Example Embodiments
An initial overview of the embodiments is provided below and specific embodiments are then described in further detail. This initial summary is intended to aid readers in understanding the disclosure more quickly, but is not intended to identify key or essential technological features, nor is it intended to limit the scope of the claimed subject matter. In computing, interleaved memory is a design made to compensate for the relatively slow speed of dynamic random-access memory (DRAM), by spreading memory addresses evenly across memory channels. In this way, contiguous memory read and write operations use each memory channel in turn, resulting in higher memory throughputs. This is achieved by allowing memory channels to perform the desired operations in parallel, yet not forcing individual non-contiguous memory transactions into issuing the excessively large transactions that would result if the data bus to memory were to be merely widened. Memory systems, including one level (1LM) memory systems that implement high bandwidth using multiple memory controllers, such as DRAM, can interleave memory transactions between controllers.
An operating system (OS) allocates memory in chunks. For example, a program executing on the OS may request an allocation of memory for its data and the OS will provide this allocation as a non-sequential series of chunks of a specified size. The use of fixed-size chunks when allocating memory allows large allocations of memory to be made even where, as a result of continuous software operations, memory has become highly fragmented. In one embodiment, a typical OS will allocate memory in 4KByte chunks.
A system may implement a plurality of memory controllers to increase efficiency. However, it is not desirable that interleave granularity be 4K between memory controllers, as this may result in a read of an entire 4K chunk being serviced by only a single memory controller, and single memory channel. Therefore, requests can be interleaved at a size smaller than the size allocated by the OS. For example, requests for 256 bytes of data interleaved between controllers at 128 byte granularity can be serviced by more than one memory controller in parallel.
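To make the sub-page interleave concrete, the short Python sketch below maps addresses to controllers at an assumed 128-byte granularity across two controllers; the constants are illustrative choices for this example, not limits of the disclosure.

GRANULE = 128        # assumed interleave granularity in bytes
NUM_CONTROLLERS = 2  # assumed number of memory controllers

def controller_for(addr: int) -> int:
    # Map a physical address to the memory controller that services it.
    return (addr // GRANULE) % NUM_CONTROLLERS

# A 256-byte read starting at address 512 is split across both controllers:
print({controller_for(a) for a in range(512, 768, GRANULE)})  # {0, 1}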
Similarly, a request to read an entire 4Kbyte OS page could be serviced by multiple controllers in parallel.
A memory system with two cache controllers connected to two memory controllers may maintain tags within each cache controller for half-OS-pages rather than OS-pages, causing 100% size/cost impact for the large tag arrays. A different memory system may limit the interleave between cache controllers to OS page size, causing a 50% loss in stream bandwidth. A different memory system may, in addition to limiting the interleave between cache controllers to OS page size, add a memory fabric between cache controllers and memory controllers, causing a multi-cycle latency penalty.
One or more cache controllers may be implemented in memory systems to control local storage of cached data. In adapting such a system to include a memory-side cache, such as in a two level memory (2LM) system, bandwidth requirements typically necessitate the use of multiple cache controllers. The memory may store all the data but may be slow and therefore a portion of the data stored in the memory will be stored locally in the cache and managed by the cache controllers. In one embodiment, the cache controllers are capable of holding entries that relate to 4Kbyte of memory allocations, in line with the allocation granularity of an OS. The cache controllers may store data locally and hold the metadata on-die in a static random-access memory (SRAM) array to allow quick identification of the data stored locally. The cache controllers may store metadata that will typically include cache tags. Each cache controller has an upper limit of how many cache tags or pieces of metadata may be stored. Various embodiments provide a metadata store fabric that provides a plurality of cache controllers with shared access to a plurality of metadata stores. A metadata store fabric may be hardware that is a set of connections between metadata stores and cache controllers that allow an exchange of data between the metadata stores and the cache controllers.
From a metadata storage perspective, efficient implementation of a design with multiple cache controllers requires interleaving between the cache controllers at OS page granularity or greater. In one embodiment, reconciling this with the desire to interleave memory controllers at sub-OS page granularity may involve trade-offs in performance. Embodiments exemplified herein include memory devices, systems and methods that re-distribute storage and handling of memory-side cache metadata utilizing a mesh structure between multiple cache controllers and multiple metadata stores. The mesh structure may be a hardware structure and may also be referred to as a "metadata store fabric" or simply "fabric". The metadata stores may store the metadata or cache tags as shared distributed metadata. The shared distributed metadata allows a first cache controller to send information such as cache tags or metadata to a metadata store connected through the metadata store fabric. The metadata store then converts or stores the cache tag as shared distributed metadata and provides shared access to the shared distributed metadata, allowing a second cache controller to access the shared distributed metadata that is based on the information from the first cache controller. This allows the second cache controller to carry out an operation based on cache tags or metadata without the need to allocate an additional metadata entry.
Thus the second cache controller, or all of the cache controllers in the memory system, may be able to operate more efficiently at a higher bandwidth without increasing the capacity or size of the local store of the cache controller. For example, 256-byte requests may be handled by two cache controllers in parallel and by two memory controllers in parallel. In one embodiment, the present disclosure utilizes tag and valid bits. The tags and valid bits are part of the metadata or shared distributed metadata that allow operations on the memory to occur. The shared distributed metadata also introduces lock bits that lock the shared distributed metadata until the lock bit is cleared by the associated cache controller. This ensures that the shared distributed metadata is not cleared from a metadata store while it is still needed for operations and possible update by a given cache controller. The mesh structure allows for efficient operation with OS-page-granularity cache entries, and hence metadata entries, in terms of metadata usage. The mesh also allows for efficient memory interleaving between cache controllers at sub-OS-page-size granularity in terms of optimized data path. The use of metadata stores, metadata store fabric, and shared distributed metadata allow the data to flow through a cache controller without requiring the cache controller to locally store all metadata because it is being stored in the metadata store. In one embodiment, the present disclosure may be combined with various techniques to achieve zero additional latency for all cache hit transactions even when sub-page interleaving is used.
FIG. 1 shows a system-on-chip (SOC) 102 with a basic 1LM system. The SOC 102 includes a central processing unit (CPU) 104 for processing data in a computer system. It should be appreciated that CPU 104 may comprise integrated caches, not pictured, which are integrated into subsystems of CPU 104. The SOC 102 also comprises an integrated display controller 106, a controller that controls the output of data displayed to a user on a display such as a screen. The SOC 102 additionally comprises an IO subsystem 108, which is an input output system for inputting and outputting data for the SOC 102. The SOC 102 also comprises a system fabric 110, which can be a hardware fabric for connecting a memory controller 112 and other components of the SOC 102 together. The memory controller 112 is dedicated hardware incorporated into the SOC 102 for controlling the memory 114. In one embodiment, the memory 114 is DRAM, but it should be appreciated that the memory 114 may be other types of memory as well. In one embodiment, FIG. 1 shows a 1LM system where the operating system employs a 4KByte page and memory 114 has a 4KByte page size. In one example, two adjacent OS-allocated pages of data, "A" and "B," are stored in memory 114 as shown. While the illustration of FIG. 1 shows a system-on-chip (SOC) 102, it may equally apply to a computer system built with more discrete components, for example where display controller 106 and an IO subsystem 108 are outside the boundary of element 102, and where element 102 represents a CPU with integrated system fabric 110 and memory controller 112.
FIG. 2 shows a 1LM system with a SOC 200 that has multiple memory controllers. The SOC 200 may include some of the components of SOC 102.
In one embodiment, the SOC 200 includes two memory controllers, specifically the memory controller 204 and the memory controller 206, that are connected to the system fabric 110 via a memory fabric 202. The memory fabric 202 is hardware configured to interleave across the two memory controllers as well as the memory 208 and the memory 210. For example, the interleave may occur every 4K bytes. In one configuration, when the system is reading from page A, only the memory controller 204 and the memory 208 are servicing the requests, and likewise, when it is reading from page B, only the memory controller 206 and the memory 210 are servicing the requests. Thus, although the memory controller and memory bandwidth have theoretically been doubled, the peak stream bandwidth of the system of FIG. 2 will remain little changed when compared to the system of FIG. 1. FIG. 3 shows a 1LM system with a SOC 300 that has multiple memory controllers. The SOC 300 may comprise some of the components of the SOCs 102 and/or 200 and illustrates how the memory fabric 202 of the SOC 200 may be configured differently in FIG. 3. In one embodiment, the SOCs 102, 200, and 300 depict examples where a SOC may issue multiple read requests simultaneously. However, the SOC 300 depicts embodiments that improve or optimize the 'stream bandwidth' performance where such multiple read requests exist. For example, the system may request to read 256 bytes, which may be one sixteenth of a memory page such as a DRAM page. In embodiments of the SOC 300, each OS page has been sliced, such that A becomes A0 and A1. For example, A0 contains the data for bytes 0-127, 256-383, 512-639, 768-895, 1024-1151, 1280-1407, 1536-1663, 1792-1919, 2048-2175, 2304-2431, 2560-2687, 2816-2943, 3072-3199, 3328-3455, 3584-3711, and 3840-3967, and A1 contains the data for bytes 128-255, 384-511, 640-767, 896-1023, 1152-1279, 1408-1535, 1664-1791, 1920-2047, 2176-2303, 2432-2559, 2688-2815, 2944-3071, 3200-3327, 3456-3583, 3712-3839, and 3968-4095 within the page. Thus, a request to read 256 sequential bytes, such as from address 512 to address 767, will be serviced by both the memory controller 204 and the memory 302 (bytes 512-639) and the memory controller 206 and the memory 304 (bytes 640-767), realizing a doubling of bandwidth compared to the SOC 102 of FIG. 1 (this slicing is sketched in the example following this passage). FIG. 4 shows a 2LM system with a SOC 400. The SOC 400 may include some of the components of the SOCs 102, 200, and/or 300. The SOC 400 depicts embodiments which further include a cache controller 408 and a cache controller 410 disposed between the system fabric 110 and the memory controller 204 and the memory controller 206, respectively. FIG. 4 depicts a memory 402 further comprising a memory 404 and a SOC memory controller 406. The memory controller 406 is connected to the cache controller 408 and the cache controller 410. FIG. 4 also depicts a memory 418 comprising the memory 414 and the memory 416 connected to the SOC memory controller 204 and the SOC memory controller 206, respectively. The memory 414 and 416 provide relatively fast data storage for the cache controller 408 and the cache controller 410, thereby allowing fast access to cached data of the memory 404. The storage of pages A and B in the memory 414 and 416 may be similar to what is described in the system of FIG. 3. 
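The slicing of a page into A0 and A1 can be made concrete with a short sketch. The following Python fragment is an illustration only; the constant and function names are assumptions chosen for the example and are not part of the disclosed design.

# Illustrative sketch of the even/odd 128-byte slicing of FIG. 3: bytes in
# slice 0 (A0) are served by one memory controller, bytes in slice 1 (A1)
# by the other.
INTERLEAVE = 128  # assumed sub-OS-page interleave granularity in bytes

def slice_for_offset(offset: int) -> int:
    """Return 0 if the byte at `offset` falls in A0, or 1 if it falls in A1."""
    return (offset // INTERLEAVE) % 2

def controllers_for_request(start: int, length: int) -> set:
    """Return the set of memory controllers servicing a read of `length` bytes."""
    return {slice_for_offset(o) for o in range(start, start + length, INTERLEAVE)}

# The 256-byte read from address 512 to 767 touches bytes 512-639 (A0) and
# bytes 640-767 (A1), so both controllers service it in parallel.
assert controllers_for_request(512, 256) == {0, 1}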
However, the position of the pages within each memory may be influenced by the organizational policies, such as the use of ways 0, 1, 2, 3, and 4, of the cache controller 408 and the cache controller 410. In the SOC 400, separate arrays of cache tags (not shown) exist in each of the cache controller 408 and the cache controller 410, or are stored separately from the cache controller 408 and the cache controller 410 but are accessible to the cache controller 408 and the cache controller 410. The cache tags are references to which portions of the main memory 404 are held in which pages of the cache and are maintained by each cache controller. Thus, for a single OS page "in use" such as A, there is a double overhead of assigning, storing, looking up, and maintaining tags: the cache controller 408 maintains the tag for A0 and the cache controller 410 maintains the tag for A1. One design approach to avoid this double overhead is to use a single cache controller. However, in many cases, due to bus throughput or other scaling issues, memory controller location, or the integration of the memory controller/cache controller fabric into the system fabric, this approach of only a single cache controller is impractical. Thus, in a practical system, multiple cache controllers are matched to multiple memory controllers. Another approach to solving the double tag problem of the system shown in FIG. 4 is the system of FIG. 5. FIG. 5 shows a 2LM system with a SOC 500. The SOC 500 may comprise some of the components of the SOCs 102, 200, 300, and/or 400. The SOC 500 depicts a larger interleave between the two cache controllers (for example, 4 KByte) as compared to the SOC 400 of FIG. 4, such that an entire OS page is handled by a single cache controller. However, this large interleave causes bandwidth limitations similar to the SOC 200 of FIG. 2, as only one memory controller handles each OS page. FIG. 6 shows a 2LM system with a SOC 600. The SOC 600 may comprise some of the components of the SOCs 102, 200, 300, 400, and/or 500. The SOC 600 connects to the memory 604 and the memory 608. The SOC 600 depicts an additional fabric, a memory fabric 602, which is disposed between the cache controllers 408 and 410 and the memory controllers 204 and 206. The memory fabric 602 provides interleaving at the memory with sub-OS-page granularity (for example, 128 bytes or other values), while still allowing the cache controllers to be interleaved by the memory fabric 202 at OS-page granularity (for example, 4 KByte). However, this comes with the added latency impact of the memory fabric 602 and with the requirement that the individual cache controllers each be capable of handling the full bandwidth of both memory controllers. FIG. 7A shows a 2LM system with a SOC 700 in accordance with various embodiments. The SOC 700 may comprise some of the components of the SOCs 102, 200, 300, 400, 500, and/or 600. The SOC 700 depicts embodiments of the present disclosure that may overcome at least some of the described limitations of the SOCs 400, 500, and/or 600. The SOC 700 depicts a metadata store fabric 702, a metadata store 704, and a metadata store 706. FIG. 7A further depicts the SOC 700 connected to memories 708 and 710. In one embodiment, the metadata stores 704 and 706 are on-die metadata storage blocks that service the cache controllers, but are separated from the cache controllers and each serve a multiplicity of cache controllers. In one embodiment, the metadata store is a static random-access memory (SRAM) array. 
In other words, metadata storage is extracted out of, or away from, the cache controllers with the implementation of the metadata stores. Each metadata store may serve a multiplicity of the cache controllers in the SOC 700. In one embodiment, the metadata stores 704 and 706 are assembled as separate metadata storages but can be implemented in the same or different memory devices. It should be appreciated that the SOC 700 depicts two cache controllers and two metadata stores, but any number or combination of cache controllers and metadata stores can be used. In a given SOC, for example, the number of cache controllers may be greater than the number of metadata stores, the number of metadata stores may be greater than the number of cache controllers, or the system may include only one metadata store for a plurality of cache controllers. In one embodiment, within each metadata store, a logic block is added that is assigned responsibility for some of the tasks that would generally be assigned to a cache controller. For example, these tasks may include maintaining least recently used (LRU) indications, and re-allocating the clean entry with the highest LRU when a cache allocation to a new system memory address is required. Various embodiments may achieve the same interleave as shown in the SOC 600 of FIG. 6, but without the latency and wide data paths of the memory fabric. The additional latency of the metadata store fabric may be mitigated by the use of various techniques. For example, identically-offset fragments of the pages stored in multiple ways of a cache set are stored together in a single DRAM page, facilitating the ability of the memory controller to issue the DRAM page open requests on the assumption that the requested data will be found in the cache, but prior to knowing in which way it is to be found. FIG. 7B shows a 2LM system with a SOC 701 upon which embodiments of the present disclosure are implemented. The SOC 701 may be described as an alternate configuration of the SOC 700 of FIG. 7A. For example, the metadata stores 704 and 706 in the SOC 701 are each physically co-located with the cache controllers 408 and 410, respectively. Despite the physical proximity of the metadata stores 704 and 706 to the cache controllers 408 and 410, the presence of the metadata store fabric 702 allows the metadata stores 704 and 706 to logically operate in a manner similar to what was described for the SOC 700 of FIG. 7A. For example, FIG. 7B depicts different locations of the metadata stores 704 and 706 relative to the locations of the metadata stores 704 and 706 in FIG. 7A. However, these different locations of the metadata stores 704 and 706 need not affect the general connectivity of the metadata stores 704 and 706 to the cache controllers 408 and 410 for the described operations. It should be appreciated that the physical proximity of each metadata store to a cache controller may allow simplified construction of derivative designs, for example a 'chopped' derivative containing only one cache controller, one metadata store, and one memory controller, or a 'high-end' derivative containing four cache controllers, four metadata stores, and four memory controllers. FIG. 7C shows a 2LM system with a SOC 703 upon which embodiments of the present disclosure are implemented. The SOC 703 is an alternate configuration of the SOC 700 of FIG. 7A or the SOC 701 of FIG. 7B. The SOC 703 further comprises the common logic block 710. 
In one embodiment, the common logic block 710 is a logic block connected with the metadata store fabric 702. The common logic block 710 is added to the SOC 703 so that each of the metadata stores is not required to comprise its own logic block that is responsible for tasks. For example, these tasks may include scrubbing, maintaining least recently used (LRU) indications, and re-allocating the clean entry with the highest LRU when a cache allocation to a new system memory address is required. For example, the tasks may be tasks that would otherwise be assigned to the cache controller, but instead are executed by common logic block 710 operations on the metadata stores. In one embodiment, FIGS. 7A-C of the present disclosure depict the development of a "shared metadata entry" that allows cache controllers to each access shared, distributed metadata without the risk of corrupting metadata used by the other cache controllers sharing that metadata entry. FIG. 8A is a representation of one type of standard metadata entry. For example, a metadata entry 802 is a standard or typical metadata entry and may be used in a set-associative sectored cache. In one embodiment, the metadata entry 802 employs fourteen tag bits for address matching. In such an embodiment, eight valid bits each report on the validity of 512 bytes of data of that entry. Likewise, eight dirty bits indicate whether the data must be scrubbed to main memory before the entry is re-allocated. Three LRU bits track the order of use (in relation to other entries of the same cache set), and one Pinned "P" bit captures that software has requested that the entry not be reallocated. FIG. 8B depicts a shared metadata entry 804, which may be a metadata entry shared among cache controllers as employed by embodiments of the present disclosure. A division of the "valid" and "dirty" bits occurs according to each controller. For example, valid[3:0] may refer to bytes 0-127, 256-383, 512-639, and 768-895, all of which may be handled by the cache controller 408 of FIG. 7A. Additionally, valid[7:4] may refer to bytes 128-255, 384-511, 640-767, and 896-1023, all of which may be handled by the cache controller 410 of FIG. 7A. "Lock" bits are included for each cache controller. It should be appreciated that the "lock" bits relate to the valid and dirty bits of a given cache controller. For example, lock 0 (depicted as L[0]) would relate to valid[3:0] and dirty[3:0] for the cache controller 408 of FIG. 7A. Lock 1 (depicted as L[1]) would relate to valid[7:4] and dirty[7:4] for the cache controller 410 of FIG. 7A. For example, an assertion of a "lock" bit indicates that the respective controller has taken a local copy of its "dirty" and "valid" bits for that entry, and that these should not be changed except by that cache controller. In one embodiment, the shared metadata entry 804 may be further enhanced by the addition of a lock bit related to the common logic block 710 of FIG. 7C. Such a lock bit is not strictly needed but may optionally be added. An additional lock bit L[2] (not depicted) may be added to the metadata entry 804. The additional lock bit L[2] would cause the metadata store to request that the common logic block 710 complete its task and release this lock prior to the metadata being given from the metadata store to the requesting cache controller. In one embodiment of a system with multiple cache controllers, any entry which is not in use by any controller will have its "lock" bits clear; a sketch of such a shared entry follows below. 
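To make the layout of FIG. 8B concrete, the following Python fragment sketches a shared metadata entry for a two-cache-controller system. It is an illustration only: the class, field names, and checkout/update helpers are assumptions chosen for the example, not the disclosed hardware format.

# Illustrative sketch (assumed software model) of the shared metadata
# entry 804 of FIG. 8B for a system with two cache controllers.
from dataclasses import dataclass, field

@dataclass
class SharedMetadataEntry:
    tag: int                                                     # 14 tag bits for address matching
    valid: list = field(default_factory=lambda: [False] * 8)     # valid[3:0] -> controller 0, valid[7:4] -> controller 1
    dirty: list = field(default_factory=lambda: [False] * 8)     # same split as the valid bits
    lock: list = field(default_factory=lambda: [False, False])   # L[0], L[1]: one lock bit per controller
    lru: int = 0                                                 # 3 LRU bits tracking order of use
    pinned: bool = False                                         # "P" bit: software asked that the entry not be reallocated

    def in_use(self) -> bool:
        """An entry with all lock bits clear may be scrubbed or re-allocated."""
        return any(self.lock)

    def checkout(self, controller: int) -> dict:
        """Deliver a copy of this controller's half of the bits and set its lock bit."""
        self.lock[controller] = True
        lo = controller * 4
        return {"tag": self.tag,
                "valid": self.valid[lo:lo + 4],
                "dirty": self.dirty[lo:lo + 4]}

    def update(self, controller: int, valid: list, dirty: list) -> None:
        """Receive the controller's updated bits and clear its lock bit."""
        lo = controller * 4
        self.valid[lo:lo + 4] = valid
        self.dirty[lo:lo + 4] = dirty
        self.lock[controller] = False

Because each controller may only write its own half of the valid/dirty fields, an update from one controller cannot clobber bits belonging to the other, which mirrors the corruption-avoidance property described in the text.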
The metadata store is free to initiate a scrub of the dirty data for that entry and, for clean entries, to re-allocate at will. For example, a re-allocation may occur according to a least recently used (LRU) protocol or other algorithm. When one of the cache controllers receives a transaction to a memory address, it sends a request to the appropriate metadata store to check the appropriate tags for a match (indicating that this memory address has been allocated in the cache); such tags are common to the cache controllers. If a match is found, a copy of the contents of that entry is delivered by the metadata store to the requesting cache controller, and the lock bit pertaining to the requesting cache controller is set in the entry at the metadata store. In such an embodiment, the copy of the contents delivered to the requesting cache controller need not include the valid or dirty bits belonging to one of the other controllers. The receiving cache controller serves that transaction as well as any further ones to the other parts of the same OS page that are assigned to it due to the chosen interleave. In one embodiment, the cache controller may update the values of its local copy of the "valid" and "dirty" bits for that entry to reflect the cache operations it has performed. In one embodiment, when the cache controller has completed handling all transactions relating to this entry, it will send an update to the metadata store of the appropriate "valid" and "dirty" bits for that cache controller. In one embodiment, the receipt of this update causes the lock bit for the requesting cache controller to be cleared in the entry at the metadata store. The assignment shown for the shared metadata entry 804, regarding which parts of the "valid" and "dirty" fields may be updated by a given cache controller, avoids the problem of stale metadata belonging to one cache controller being written as part of an update by one of the other cache controllers. Such a mechanism allows multiple cache controllers, independently and simultaneously, with no synchronization or communication between them, to access and update a single shared metadata entry without risk of corrupting the "valid" or "dirty" bits relating to data of the entry handled by one of the other cache controllers, because the shared metadata entry is locked. In one embodiment, once the "lock" bits are clear, the metadata store will again be able to perform scrubbing and re-allocation of entries. As one approach to preventing deadlock cases, the metadata store may also have a mechanism or protocol to instruct a cache controller to send its update in order to release the lock bit. In reference to tasks and metadata entries in metadata stores, scrubbing is the process of taking a 'dirty' cache data entry (i.e., one that contains newer data than the main memory) and making it 'clean' (i.e., containing the same data as the main memory). Conversely, a 'clean' cache data entry may become 'dirty' as a result of a write command with new data being received from the CPU. Scrubbing is accomplished by copying the data from the cache to the main memory, which results in the data of both the cache and the main memory being once again identical; hence, this cache data entry can now be considered 'clean'. In one embodiment, scrubbing dirty cache data while a lock bit for that entry is set may be possible, provided that the cache controller that set the lock bit indicates to the metadata store whether additional writes were received to that data while the entry was "locked". 
For example, this is possible because the cache controller has taken a local copy of its "dirty" and "valid" bits for that entry. It is sufficient for a cache controller to notify a metadata store of whether additional writes (for example, from the CPU) were received to cache data that was already dirty; this allows the metadata store to decide whether an entry that was scrubbed while 'locked' may remain clean (if no additional writes were received, the cache data is the same as the data in main memory) or should be marked dirty (if additional writes were received and written to the cache data, the cache data is not expected to be the same as the data in main memory). In one embodiment, when serving transaction requests from an agent that may be expected to access a stream of data, the metadata store may choose to pro-actively send the metadata also to cache controller(s) that did not request it, and to set the appropriate lock bit. For example, the agent may be a display controller streaming data to the display, as advised to the metadata store by the cache controller. The non-requesting cache controllers may then match incoming cache access requests against the metadata and know not to send a metadata request to the metadata store, because they already have the results for such a metadata request. This will allow those controllers to be prepared should they receive a request to the same OS page as was requested in the initial request. In one embodiment, the logic of the metadata store could request that the cache controllers perform the scrubbing. For example, the logic of the metadata store could send a request to the cache controller to write the cache data for a particular entry to main memory and notify the metadata store when that is done. In another embodiment, the metadata store may read the data cached by the cache controllers from the memory accessed by the memory controllers, either directly or via a request to the cache controllers, and write this to main memory. This may be done by the metadata store sending requests to the memory controllers (either directly, or by sending requests to the cache controllers to be forwarded to the memory controllers), receiving data from the memory controllers (either directly, or by the memory controllers sending data to the cache controllers, which in turn send it to the metadata store), and, having received that data from the memory controllers, writing it to main memory. FIG. 9 depicts an exemplary system upon which embodiments of the present disclosure may be implemented. For example, the system of FIG. 9 may be a computer system. The system can include a memory controller 902, a plurality of memories 904, a processor 906, and circuitry 908. The circuitry can be configured to implement the hardware described herein for the systems 700, 701, and/or 703 of FIGS. 7A-C. Various embodiments of such systems for FIG. 9 can include smart phones, laptop computers, handheld and tablet devices, CPU systems, SoC systems, server systems, networking systems, storage systems, high capacity memory systems, or any other computational system. The system can also include an I/O (input/output) interface 910 for controlling the I/O functions of the system, as well as for I/O connectivity to devices outside of the system. A network interface can also be included for network connectivity, either as a separate interface or as part of the I/O interface 910. The network interface can control network communications both within the system and outside of the system. 
The network interface can include a wired interface, a wireless interface, a Bluetooth interface, an optical interface, and the like, including appropriate combinations thereof. Furthermore, the system can additionally include various user interfaces, display devices, as well as various other components that would be beneficial for such a system. The system can also include memory in addition to the memory 904, which can include any device, combination of devices, circuitry, and the like that is capable of storing, accessing, organizing, and/or retrieving data. Non-limiting examples include SANs (Storage Area Networks), cloud storage networks, volatile or non-volatile RAM, phase change memory, optical media, hard-drive type media, and the like, including combinations thereof. The processor 906 can be a single or multiple processors, and the memory can be a single or multiple memories. The local communication interface can be used as a pathway to facilitate communication between any of a single processor, multiple processors, a single memory, multiple memories, the various interfaces, and the like, in any useful combination. Although not depicted, any system can include and use a power supply such as, but not limited to, a battery, an AC-DC converter at least to receive alternating current and supply direct current, a renewable energy source (e.g., solar power or motion-based power), or the like. The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. Portions of the disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device). Reference to storage, stores, memory, or memory devices can refer to memory whose state is indeterminate if power is interrupted to the device (e.g., DRAM) or to memory devices whose state is determinate even if power is interrupted to the device. In one embodiment, such an additional memory device can comprise a block addressable mode memory device, such as planar or multi-dimensional NAND or NOR technologies, or more specifically, multi-threshold level NAND flash memory, NOR flash memory, and the like. A memory device can also include a byte-addressable three-dimensional crosspoint memory device, or other byte-addressable write-in-place nonvolatile memory devices, such as single or multi-level Phase Change Memory (PCM), memory devices that use chalcogenide phase change material (e.g., chalcogenide glass), resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), magnetoresistive random access memory (MRAM) that incorporates memristor technology, or spin transfer torque (STT)-MRAM. FIG. 10 depicts a flowchart of a method for sharing metadata and metadata stores. The method can be executed as instructions on a machine, where the instructions are included on at least one computer-readable medium or one non-transitory machine-readable storage medium. In one embodiment, the circuitry 908 of FIG. 9 is configured to carry out the steps of FIG. 10. Moreover, the systems depicted in FIGS. 7A-C may be employed to carry out the steps of FIG. 10. 
The method can include the operation of: connecting a metadata store with a plurality of cache controllers via a metadata store fabric, as in block 1002. The method can include the operation of: receiving information at the metadata store from at least one of the plurality of cache controllers, as in block 1004. The method can include the operation of: storing the information as shared distributed metadata in the metadata store, as in block 1006. The method can include the operation of: providing shared access of the shared distributed metadata to the plurality of cache controllers, as in block 1008. The method can include the operation of: assigning a task to a logic block, wherein the task executed at the logic block operates on the shared distributed metadata, as in block 1010. The method can include the operation of: locking the valid bits and dirty bits of a given cache controller via a lock bit indicating that the valid bits and dirty bits of the given cache controller should not be changed except by the given cache controller, as in block 1012. The method can include the operation of: upon completion of relevant transactions at a given cache controller, updating the appropriate metadata store with the appropriate valid bits and dirty bits, which causes a lock bit to be cleared, as in block 1014. It should be appreciated that an implementation of the method of FIG. 10 may not include all of the steps depicted, nor must the steps occur in the order in which they are depicted. 
Examples 
The following examples pertain to specific embodiments and point out specific features, elements, or steps that can be used or otherwise combined in achieving such embodiments. In one example, there is provided a memory system, comprising: a plurality of cache controllers with circuitry configured to: access memory controllers which access memory; a metadata store in communication with the plurality of cache controllers, with circuitry configured to: receive information from at least one of the plurality of cache controllers, a portion of which is stored as shared distributed metadata; provide shared access to the hosted shared distributed metadata to the plurality of cache controllers; and a metadata store fabric disposed between the plurality of cache controllers and the at least one metadata store to facilitate the shared access. In one example of a memory system, the information is related to a task assigned to one of the plurality of cache controllers. In one example of a memory system, the metadata store fabric further comprises a common logic block to manage the task assigned to one of the plurality of cache controllers. In one example of a memory system, the metadata store further comprises a logic block to manage the task assigned to one of the plurality of cache controllers. In one example of a memory system, the metadata store is one of a plurality of metadata stores. In one example of a memory system, the metadata store is one of a plurality of metadata stores and the number of the plurality of metadata stores corresponds to the number of the plurality of cache controllers. In one example of a memory system, the metadata store is one of a plurality of metadata stores and the number of the plurality of metadata stores is greater than the number of the plurality of cache controllers. In one example of a memory system, the metadata store is a static random-access memory (SRAM) array. In one example of a memory system, one of the tasks assigned to the metadata store comprises maintaining least recently used (LRU) indications. In one example of a memory system, one of the tasks assigned to the metadata store comprises re-allocating an entry based on the least recently used (LRU) indication when a new system memory address is to be cached. In one example of a memory system, the shared distributed metadata hosted by the metadata store comprises valid bits and dirty bits. In one example of a memory system, the shared distributed metadata hosted by the metadata store comprises lock bits pertaining to the plurality of cache controllers. In one example of a memory system, a lock bit is to assert that the valid bits and dirty bits of a given cache controller are locked and are not changed except by the given cache controller. In one example of a memory system, one of the plurality of cache controllers, upon completion of all transactions relating to a metadata entry, is to update the metadata store with the appropriate valid bits and dirty bits and cause a lock bit to be cleared. In one example of a memory system, a logic block is configured to identify dirty entries for a scrubbing operation, wherein the logic block is associated with the metadata store fabric or the metadata store. In one example, there is provided a system, comprising: one or more processors configured to process data; an input/output subsystem configured to receive input data and to output data; a plurality of memory controllers to access a plurality of memories; a plurality of cache controllers with circuitry configured to: access the memory controllers which access memory; a cache controller fabric disposed between the system fabric and the plurality of cache controllers; a metadata store in communication with the plurality of cache controllers, with circuitry configured to: receive information from at least one of the plurality of cache controllers, a portion of which is stored as shared distributed metadata; provide shared access to the hosted shared distributed metadata to the plurality of cache controllers; a metadata store fabric disposed between the plurality of cache controllers and the plurality of metadata stores; and a system fabric configured to connect the one or more processors and the input/output subsystem to the plurality of memory controllers and the plurality of cache controllers. In one example of a system, the information is related to a task assigned to one of the plurality of cache controllers. In one example of a system, the metadata store fabric further comprises a common logic block to manage the task assigned to one of the plurality of cache controllers. In one example of a system, the metadata store further comprises a logic block to manage the task assigned to one of the plurality of cache controllers. In one example of a system, the metadata store is one of a plurality of metadata stores. In one example of a system, the metadata store is one of a plurality of metadata stores and the number of the plurality of metadata stores corresponds to the number of the plurality of cache controllers. In one example of a system, the metadata store is one of a plurality of metadata stores and the number of the plurality of metadata stores is greater than the number of the plurality of cache controllers. In one example of a system, the metadata store is a static random-access memory (SRAM) array. In one example of a system, one of the tasks assigned to the metadata store comprises maintaining least recently used (LRU) indications. 
In one example of a system, one of the tasks assigned to the metadata store comprises re-allocating an entry based on the least recently used (LRU) indication when a new system memory address is to be cached. In one example of a system, the shared distributed metadata hosted by the metadata store comprises valid bits and dirty bits. In one example of a system, the shared distributed metadata hosted by the metadata store comprises lock bits pertaining to the plurality of cache controllers. In one example of a system, a lock bit is to assert that the valid bits and dirty bits of a given cache controller are locked and are not changed except by the given cache controller. In one example of a system, one of the plurality of cache controllers, upon completion of all transactions relating to a metadata entry, is to update the metadata store with the appropriate valid bits and dirty bits and cause a lock bit to be cleared. In one example of a system, a logic block is configured to identify dirty entries for a scrubbing operation, wherein the logic block is associated with the metadata store fabric or the metadata store. In one example, there is provided a method comprising: connecting a metadata store with a plurality of cache controllers via a metadata store fabric; receiving information at the metadata store from at least one of the plurality of cache controllers; storing the information as shared distributed metadata in the metadata store; providing shared access of the shared distributed metadata to the plurality of cache controllers; and assigning a task to a logic block, wherein the task executed at the logic block operates on the shared distributed metadata. In one example of a method, the metadata store is one of a plurality of metadata stores. In one example of a method, the plurality of cache controllers and the metadata store are interconnected via a metadata store fabric. In one example of a method, the metadata store fabric comprises a common logic block to manage the task assigned to one of the plurality of cache controllers. In one example of a method, the metadata store further comprises a logic block to manage the task assigned to one of the plurality of cache controllers. In one example of a method, the metadata store is a static random-access memory (SRAM) array. In one example of a method, the task assigned to the metadata store comprises maintaining least recently used (LRU) indications. In one example of a method, the task assigned to the metadata store comprises re-allocating a clean entry with a higher least recently used (LRU) indication when a new system memory address is to be cached. In one example of a method, the shared distributed metadata hosted by the metadata store comprises lock bits, valid bits, and dirty bits. In one example of a method, the method comprises locking the valid bits and dirty bits of a given cache controller via a lock bit indicating that the valid bits and dirty bits of the given cache controller are not to be changed except by the given cache controller. In one example of a method, the method comprises, upon completion of relevant transactions at a given cache controller, updating the appropriate metadata store with the appropriate valid bits and dirty bits and causing a lock bit to be cleared. 
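To tie the examples above together, the following Python fragment sketches the method of FIG. 10 in software. It is an illustrative sketch only, under assumed names and data structures (dictionaries standing in for SRAM arrays); it is not the disclosed hardware design.

# Minimal sketch of the method of FIG. 10: metadata stores shared by several
# cache controllers through a metadata store fabric, with per-controller
# lock bits on each shared distributed metadata entry.

class MetadataStore:
    def __init__(self):
        self.entries = {}                      # tag -> shared distributed metadata

    def request(self, tag, controller_id):
        """Blocks 1004-1008 and 1012: store/look up metadata and set a lock bit."""
        entry = self.entries.setdefault(tag, {"valid": 0, "dirty": 0, "locks": set()})
        entry["locks"].add(controller_id)      # locked until the controller updates
        return entry

    def update(self, tag, controller_id, valid, dirty):
        """Block 1014: receive the updated bits and clear the controller's lock bit."""
        entry = self.entries[tag]
        entry["valid"], entry["dirty"] = valid, dirty
        entry["locks"].discard(controller_id)

class MetadataStoreFabric:
    """Block 1002: connects a plurality of cache controllers to metadata stores."""
    def __init__(self, stores):
        self.stores = stores

    def store_for(self, tag):
        return self.stores[tag % len(self.stores)]  # assumed interleave by tag

fabric = MetadataStoreFabric([MetadataStore(), MetadataStore()])
# Controller 0 allocates the entry; controller 1 later shares it without
# allocating a second metadata entry for the same OS page.
fabric.store_for(0x2A).request(0x2A, controller_id=0)
shared = fabric.store_for(0x2A).request(0x2A, controller_id=1)
assert shared["locks"] == {0, 1}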
PROBLEM TO BE SOLVED: To provide technologies for de-duplicating encrypted content. SOLUTION: The technologies include fragmenting a file into blocks on a computing device 102, encrypting each block, and storing each encrypted block on a content data server 106 with associated keyed hashes and member identifications. The computing device additionally transmits each encrypted block with an associated member encryption key and member identification to a key server 108. As part of the de-duplication process, the content data server stores only one copy of the encrypted data for a particular associated keyed hash, and the key server similarly associates a single member encryption key with the keyed hash. To retrieve the file, the computing device receives the encrypted blocks with their associated keyed hashes and member identifications from the content data server and receives a corresponding member decryption key from the key server, so as to decrypt each block and generate the file. SELECTED DRAWING: Figure 1 |
1. A system for block deduplication, comprising: one or more processors; and one or more memories storing a plurality of instructions that, when executed by the one or more processors, cause the one or more processors to: identify identification information of a requesting entity and an encrypted version of a block of a fragmented file; determine whether the block is stored by the system based on an encrypted hash of the block; store the encrypted version of the block in response to a determination that the block is not stored by the system; store the identification information of the requesting entity in association with the encrypted version of the block as a permission of the requesting entity to access the encrypted version of the block; identify second identification information of a second requesting entity and the encrypted version of the block of the fragmented file; determine, in response to identifying the second identification information of the second requesting entity, that the block is stored by the system based on the encrypted hash of the block; and store, in response to a determination that the block is stored by the system, the second identification information of the second requesting entity in association with the encrypted version of the block as a permission of the second requesting entity to access the encrypted version of the block.
2. The system according to claim 1, wherein determining whether the block is stored by the system includes comparing the encrypted hash of the block with encrypted hashes of blocks stored by the system.
3. The system according to claim 1, wherein identifying the encrypted version of the block includes identifying, for each block of the fragmented file, an encrypted version of the block from the requesting entity; determining whether the block is stored includes determining, for each block of the fragmented file, whether the block is stored by the system based on an encrypted hash of the block; and storing the encrypted version of the block includes storing, for each block of the fragmented file, an encrypted version of the block in response to a determination that the block is not stored by the system.
4. The system according to claim 1, wherein the plurality of instructions further cause the one or more processors to: determine, in response to a request from the second requesting entity for the encrypted block, whether the second requesting entity is permitted to access the encrypted version of the block; and provide the encrypted version of the block to the second requesting entity in response to a determination that the second requesting entity is permitted to access the encrypted version of the block.
5. The system according to claim 4, wherein determining whether the second requesting entity is permitted to access the encrypted version of the block includes determining whether the second identification information of the second requesting entity is associated with the encrypted version of the block.
6. The system according to claim 5, wherein determining whether the second requesting entity is permitted to access the encrypted version of the block includes comparing the second identification information of the second requesting entity with an ownership list of device identification information associated with the encrypted version of the block.
7. The system according to claim 1, wherein identifying the identification information of the requesting entity includes identifying identification information of a computing device.
8. A program for causing one or more processors to execute steps comprising: identifying identification information of a requesting entity and an encrypted version of a block of a fragmented file; determining whether the block is stored by a system based on an encrypted hash of the block; storing the encrypted version of the block in response to a determination that the block is not stored by the system; storing the identification information of the requesting entity in association with the encrypted version of the block as a permission of the requesting entity to access the encrypted version of the block; identifying second identification information of a second requesting entity and the encrypted version of the block of the fragmented file; determining, in response to identifying the second identification information of the second requesting entity, that the block is stored by the system based on the encrypted hash of the block; and storing, in response to a determination that the block is stored by the system, the second identification information of the second requesting entity in association with the encrypted version of the block as a second permission of the second requesting entity to access the encrypted version of the block.
9. The program according to claim 8, wherein determining whether the block is stored by the system includes comparing the encrypted hash of the block with encrypted hashes of blocks stored by the system.
10. The program according to claim 8, wherein identifying the encrypted version of the block includes identifying, for each block of the fragmented file, an encrypted version of the block from the requesting entity; determining whether the block is stored includes determining, for each block of the fragmented file, whether the block is stored by the system based on an encrypted hash of the block; and storing the encrypted version of the block includes storing, for each block of the fragmented file, an encrypted version of the block in response to a determination that the block is not stored by the system.
11. The program according to claim 8, causing the one or more processors to further execute: determining, in response to a request from the second requesting entity for the encrypted block, whether the second requesting entity is permitted to access the encrypted version of the block; and providing the encrypted version of the block to the second requesting entity in response to a determination that the second requesting entity is permitted to access the encrypted version of the block.
12. The program according to claim 11, wherein determining whether the second requesting entity is permitted to access the encrypted version of the block includes determining whether the second identification information of the second requesting entity is associated with the encrypted version of the block.
13. The program according to claim 12, wherein determining whether the second requesting entity is permitted to access the encrypted version of the block includes comparing the second identification information of the second requesting entity with an ownership list of device identification information associated with the encrypted version of the block.
14. The program according to claim 8, wherein identifying the identification information of the requesting entity includes identifying identification information of a computing device.
15. A method for block deduplication implemented by a processor in a computer system, comprising: identifying, by the processor, identification information of a requesting entity and an encrypted version of a block of a fragmented file; determining, by the processor, whether the block is stored by the system based on an encrypted hash of the block; storing, by the processor, the encrypted version of the block in response to a determination that the block is not stored by the system; storing, by the processor, the identification information of the requesting entity in association with the encrypted version of the block as a permission of the requesting entity to access the encrypted version of the block; identifying, by the processor, second identification information of a second requesting entity and the encrypted version of the block of the fragmented file; determining, by the processor, in response to identifying the second identification information of the second requesting entity, that the block is stored by the system based on the encrypted hash of the block; and storing, by the processor, in response to a determination that the block is stored by the system, the second identification information of the second requesting entity in association with the encrypted version of the block as a permission of the second requesting entity to access the encrypted version of the block.
16. The method according to claim 15, wherein determining whether the block is stored by the system includes comparing the encrypted hash of the block with encrypted hashes of blocks stored by the system.
17. The method according to claim 15, wherein identifying the encrypted version of the block includes identifying, for each block of the fragmented file, an encrypted version of the block from the requesting entity; determining whether the block is stored includes determining, for each block of the fragmented file, whether the block is stored by the system based on an encrypted hash of the block; and storing the encrypted version of the block includes storing, for each block of the fragmented file, an encrypted version of the block in response to a determination that the block is not stored by the system.
18. The method according to claim 15, further comprising: determining, by the processor, in response to a request from the second requesting entity for the encrypted block, whether the second requesting entity is permitted to access the encrypted version of the block; and providing the encrypted version of the block to the second requesting entity in response to a determination that the second requesting entity is permitted to access the encrypted version of the block.
19. The method according to claim 18, wherein determining whether the second requesting entity is permitted to access the encrypted version of the block includes determining whether the second identification information of the second requesting entity is associated with the encrypted version of the block.
20. The method according to claim 19, wherein determining whether the second requesting entity is permitted to access the encrypted version of the block includes comparing the second identification information of the second requesting entity with an ownership list of device identification information associated with the encrypted version of the block.
21. The method according to claim 15, wherein identifying the identification information of the requesting entity includes identifying identification information of a computing device.
22. A computer-readable storage medium storing the program according to any one of claims 8 to 14. |
Community-based deduplication of encrypted data. In today's society, large amounts of data are transmitted between computing devices and stored every day. There are also efforts to reduce unnecessary overhead in various aspects of computation. For example, data deduplication is a process that significantly reduces storage consumption where large amounts of data are stored (e.g., in backup systems). Data deduplication allows developers to replace large redundant data blocks with relatively small reference points to a single copy of a data block. In many implementations, the stored data may be encrypted or otherwise stored in a secure manner. Encryption and hashing algorithms allow secure transmission and storage of digital data. Ideally, an encryption algorithm should generate data that appears pseudo-random, and the same data encrypted with different encryption keys should produce significantly different encrypted data. As a consequence, typical data deduplication does not work with data encrypted with different encryption keys. The concepts described herein are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings. For simplicity of illustration, elements shown in the drawings are not necessarily drawn to scale. Where considered appropriate, reference labels are repeated among the drawings to indicate corresponding or similar elements. FIG. 1 is a simplified block diagram of at least one embodiment of a system for deduplication of encrypted data. FIG. 2 is a simplified block diagram of at least one embodiment of the computing device environment of the system of FIG. 1. FIG. 3 is a simplified flow diagram of at least one embodiment of a method of storing content on a content data server utilizing the computing device of the system of FIG. 1. FIG. 4 is a simplified flow diagram of at least one embodiment of a method for extracting content from a content data server utilizing the computing device of the system of FIG. 1. FIG. 5 is a simplified flow diagram of at least one embodiment of a method for deduplicating encrypted content of the content data server of the system of FIG. 1. FIG. 6 is a simplified flow diagram of at least one embodiment of a method for deduplicating the encryption keys of the key server of the system of FIG. 1. FIG. 7 is a simplified flow diagram of at least one embodiment of a method for providing requested content of the content data server of the system of FIG. 1. FIG. 8 is a simplified flow diagram of at least one embodiment of a method for providing a requested key of the key server of the system of FIG. 1. While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will be described in detail herein. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed; on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims. References in the specification to "one embodiment", "an example", "an example embodiment", etc., indicate that the described embodiment may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. 
Furthermore, such phrases do not necessarily refer to the same embodiment. Moreover, when a particular feature, structure, or characteristic is described with reference to an embodiment, it is within the knowledge of those skilled in the art to effect such a feature, structure, or characteristic in connection with other embodiments, whether or not explicitly described. The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device). In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. It should be understood, however, that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such a feature is required in all embodiments; in some embodiments, it may not be included or may be combined with other features. Referring now to FIG. 1, a system 100 for deduplication of encrypted data includes one or more computing devices 102, a network 104, a content data server 106, and a key server 108. In use, the computing devices 102 may store encrypted data to and/or extract encrypted data from the content data server 106, as described in more detail below. Further, the content data server 106 and the key server 108 function in turn to eliminate duplicate stored encrypted data. Although only one network 104, one content data server 106, and one key server 108 are illustratively shown in FIG. 1, the system 100 may include any number of networks 104, content data servers 106, and key servers 108. Further, as shown, the system 100 may include one, two, or more computing devices 102. In some embodiments, the computing devices 102 are implemented as member devices (or simply "members") of a community (e.g., an enterprise, a business unit, a family, or another collective entity). Membership in the community may be established in various ways depending on the implementation. For example, in some embodiments, the content data server 106 and/or the key server 108 determine which computing devices 102 are members of a community. In other embodiments, a whitelist and/or blacklist may be consulted (e.g., by the content data server 106 and/or the key server 108) in determining membership. As described below, the member computing devices 102 in the community share a community encryption key used in the deduplication system 100. Each computing device 102 may be implemented as any type of computing device capable of establishing a communication link with the content data server 106 and the key server 108 and performing the functions described herein. For example, the computing device 102 may be implemented as a cellular phone, smart phone, tablet computer, laptop computer, personal digital assistant (PDA), mobile Internet device, desktop computer, server, and/or any other computing/communication device. 
In some embodiments, the computing device 102 may be implemented as an encryption device (e.g., an appliance utilized by an organization to back up data to a cloud provider, rather than a device tied to a specific user). As shown in FIG. 1, the exemplary computing device 102 includes a processor 110, an input/output ("I/O") subsystem 112, a memory 114, a communication circuit 118, one or more peripheral devices 120, a data storage 122, and a security engine 124. Of course, the computing device 102 may, in other embodiments, include other or additional components, such as those commonly found in typical computing devices (e.g., various input/output devices). Further, in some embodiments, one or more of the exemplary components may be incorporated in, or otherwise form a portion of, another component. For example, the memory 114, or portions thereof, may be incorporated in the processor 110 in some embodiments. In another embodiment, the security engine 124 may be incorporated in the communication circuit 118. The processor 110 may be implemented as any type of processor capable of performing the functions described herein. For example, the processor may be implemented as a single or multi-core processor, digital signal processor, microcontroller, or other processor or processing/control circuit. Similarly, the memory 114 may be implemented as any type of volatile or non-volatile memory or data storage capable of performing the functions described herein. In operation, the memory 114 may store various data and software utilized during operation of the computing device 102, such as operating systems, applications, programs, libraries, and drivers. In some embodiments, the memory 114 includes a secure memory 116. The secure memory 116 may be, for example, a secure partition of the memory 114, or may be independent of the memory 114. Furthermore, the secure memory 116 may store private or otherwise secure cryptographic keys. The memory 114 is communicatively coupled to the processor 110 via the I/O subsystem 112, which may be implemented as circuitry and/or components to facilitate input/output operations with the processor 110, the memory 114, and other components of the computing device 102. For example, the I/O subsystem 112 may be implemented as, or otherwise include, memory controller hubs, input/output control hubs, firmware devices, communication links (i.e., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the I/O subsystem 112 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with the processor 110, the memory 114, and other components of the computing device 102, on a single integrated circuit chip. The communication circuit 118 of the computing device 102 may be implemented as any communication circuit, device, or collection thereof capable of enabling communication between the computing device 102 and other remote devices (e.g., the content data server 106 and the key server 108) via the network 104. The communication circuit 118 may be configured to use any one or more communication technologies (e.g., wireless or wired communication) and associated protocols to effect such communication. In some embodiments, the communication circuit 118 is implemented as a network interface card (NIC). 
Further, in some embodiments, one or more of the functions described herein may be offloaded to the communication circuit 118 or NIC. The peripheral devices 120 of the computing device 102 may include any number of additional peripheral or interface devices. The particular devices included in the peripheral devices 120 may depend on, for example, the type and/or intended use of the computing device 102. The data storage 122 may be implemented as any type of device or devices configured for the short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices. The security engine 124 is configured to perform various security and cryptographic functions of the computing device 102, as described in more detail below. The security engine 124 may be implemented as a security co-processor or similar device separate from the processor 110. In such an embodiment, the security engine 124 may operate in an out-of-band manner relative to the processor 110 (e.g., the security engine 124 may communicate with remote devices independent of the power state of the processor 110). In other embodiments, the security engine 124 may be implemented as a cryptographic accelerator incorporated in the processor 110, or as stand-alone cryptographic software/firmware. Additionally, the security engine 124 and/or the communication circuit 118 may be configured to perform standard authentication, remote device access, and control protocols. The network 104 may be implemented as any number of various wired and/or wireless communication networks. Also, the network 104 may comprise one or more networks, routers, switches, computers, and/or other intervening devices. For example, the network 104 may be implemented as one or more cellular networks, telephone networks, local or wide area networks, publicly available global networks (e.g., the Internet), or any combination thereof. The content data server 106 and the key server 108 may each be implemented as any type of computing device or server capable of performing the functions described herein. For example, in some embodiments, the content data server 106 and/or the key server 108 may be similar to the computing device 102 described above. That is, the content data server 106 and the key server 108 may each be implemented as an enterprise-level server computer, desktop computer, laptop computer, tablet computer, mobile phone, smart phone, PDA, mobile Internet device, and/or any other computing/communication device. In addition, the content data server 106 and/or the key server 108 may have components similar to those of the computing device 102 described above. The description of those components of the computing device 102 is equally applicable to the description of the components of the content data server 106 and the key server 108 and is not repeated here for the sake of brevity. In addition, it should be understood that the content data server 106 and/or the key server 108 may include other components, sub-components, and devices commonly found in computing devices or servers, which are not discussed above in reference to the computing device 102 and are not discussed herein for the sake of brevity. As shown in FIG. 1, the exemplary content data server 106 has an encrypted block 150, a hashed block 152, and a member identification (ID) 154. In addition, the key server 108 has a decryption key 156, a hashed block 152, and a member identification 154 (a sketch of these per-server records is given below). 
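The per-block records just described can be pictured with a short sketch. The following Python fragment is an illustration under assumed names and structures (simple dictionaries standing in for the servers' databases); it is not the disclosed storage format.

# Illustrative sketch of the per-block records the content data server 106
# and key server 108 of FIG. 1 might maintain, keyed by the keyed hash.
content_server = {}   # keyed hash -> {"encrypted_block": bytes, "member_ids": set}
key_server = {}       # keyed hash -> {"member_key_id": str, "member_ids": set}

def store_block(keyed_hash: bytes, encrypted_block: bytes, member_id: str) -> None:
    """De-duplicate: keep only one encrypted copy per keyed hash."""
    record = content_server.get(keyed_hash)
    if record is None:
        content_server[keyed_hash] = {"encrypted_block": encrypted_block,
                                      "member_ids": {member_id}}
    else:
        # Block already stored; just record the new member's access permission.
        record["member_ids"].add(member_id)

def register_key(keyed_hash: bytes, member_key_id: str, member_id: str) -> None:
    """The key server similarly associates a single member key with each hash."""
    record = key_server.setdefault(keyed_hash, {"member_key_id": member_key_id,
                                                "member_ids": set()})
    record["member_ids"].add(member_id)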
As explained in detail below, a computing device 102 encrypts, and generates keyed hashes of, the blocks of files to be stored in the system 100, and transmits both the encrypted blocks 150 and the hashed blocks 152 to the content data server 106. Further, in some embodiments, each member computing device 102 in the community has a unique member identification, which is sent to both the content data server 106 and the key server 108. As described in more detail below, each computing device 102 also has a file encryption key and a file decryption key that may be used to encrypt and decrypt the data blocks. In some embodiments, the file decryption key is also sent to the key server 108.

As explained in more detail below, the system 100 permits a balance between the security provided by encryption and the benefits provided by data deduplication. The system 100 leverages the trust relationships established between community members and permits deduplication of the community's content with minimal risk of unauthorized or otherwise undesired data exposure.

Referring now to FIG. 2, in use, each computing device 102 of the system 100 establishes an environment 200 for storing content on, and retrieving content from, the content data server 106. The environment 200 in the exemplary embodiment includes a file management module 202, an encryption module 204, a communication module 206, and the secure memory 116. Each of the file management module 202, the encryption module 204, and the communication module 206 may be implemented as hardware, software, firmware, or a combination thereof.

The file management module 202 handles file fragmentation, reconstruction, compression, decompression, and other file management functions. For example, the file management module 202 fragments, or divides, a given file (e.g., a digital file) to be stored on the content data server 106 into one or more blocks or chunks. The file may be implemented as, for example, a digital file, a program or application, an atomic piece of code or data, or another suitable data structure. In some embodiments, the file management module 202 is configured to fragment files into fixed-length blocks, whereas in other embodiments the file management module 202 may fragment files into variable-length blocks. In the case of fixed-length blocks, the block size may conform to a standard determined by the computing devices 102 and/or the content data server 106. Further, the file management module 202 is configured to subsequently combine the fragmented blocks (i.e., reverse the fragmentation) back into the file. To permit proper combination or reconstruction of the blocks, the file management module 202 may generate a list of the blocks associated with the original file when the file is fragmented. For example, in one embodiment, the file management module 202 may fragment a file into blocks X, Y, and Z, which may over time be stored in non-adjacent sections of the memory of the computing device 102 and/or the content data server 106. The list of blocks thus provides a mechanism for identifying the blocks associated with the file and their correct order for reconstruction. In the exemplary embodiment, the list of blocks includes a list of the keyed hashes of the blocks belonging to the file and may include other information (e.g., their order). As described below, the computing device 102 encrypts each block. As such, in other embodiments, the list may instead identify the encrypted blocks and/or the unencrypted blocks.
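By way of illustration only, the fragmentation and reconstruction performed by the file management module 202 can be sketched in a few lines of Python; the block size and function names below are assumptions made for the sketch, not part of the disclosure.

```python
# Minimal sketch: split a file into fixed-length blocks (the last block may
# be shorter) and reassemble them in list order, reversing the fragmentation.

BLOCK_SIZE = 4096  # hypothetical fixed block length agreed with the server

def fragment(data: bytes, block_size: int = BLOCK_SIZE) -> list[bytes]:
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def reassemble(blocks: list[bytes]) -> bytes:
    return b"".join(blocks)

original = b"example file contents" * 1000
assert reassemble(fragment(original)) == original
```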
In some embodiments, the encryption module 204 may access cryptographic keys stored in the secure memory 116 (e.g., the member key 208, the community key 210, and the file keys 212). Additionally, the encryption module 204 may generate cryptographic keys (i.e., file keys 212) to encrypt and/or decrypt the various file blocks. In some embodiments, the encryption module 204 generates a different file key 212 (or a different cryptographic key pair, if asymmetric keys are used) "on the fly" for each file to be encrypted. For example, the encryption module 204 encrypts each block of a fragmented file using the file encryption key 212 and encrypts the list of blocks generated by the file management module 202 using the member encryption key 208. Of course, the encryption module 204 may also perform the corresponding decryption procedures. Each member key 208 is implemented as a cryptographic key specifically associated with a particular member of the community (i.e., a particular computing device 102). The community key 210 is a cryptographic key shared by each computing device 102 of the system 100 (i.e., shared by the community members). For example, the computing devices 102 in an enterprise environment may be members of the same community and therefore share the same community key 210. Of course, a single computing device 102 may be a member of multiple communities and, in some embodiments, may maintain multiple community keys 210. It should further be appreciated that, in various embodiments, each of the member keys 208 and community keys 210 may be a symmetric or asymmetric cryptographic key. Of course, if a symmetric cryptographic algorithm is used, the encryption and decryption keys are the same key. Accordingly, the file keys, member keys 208, and/or community keys 210 are at times described below as "encryption" or "decryption" keys, with the understanding that, if symmetric cryptography is used, the keys may be one and the same.

Further, the encryption module 204 is configured to generate a keyed hash of each block using the community key 210. Any suitable hash function and key-handling method may be used. For example, in various embodiments, a secure hash algorithm (e.g., SHA-0, SHA-1, SHA-2, or SHA-3) or a message digest algorithm (e.g., MD5) may be used as the hash function. The keyed hash may be generated, for example, by performing an operation on the data block and the community key 210 and providing the result of the operation as input to the hash function. For example, in one embodiment, the keyed hash may be generated by concatenating or appending the community key 210 to the data block and using the result of the concatenation as the hash function input. In other embodiments, an exclusive-or (i.e., XOR) operation may be applied to the data block and the community key prior to hashing the data. In further embodiments, a message authentication code (MAC) algorithm may be used to generate the keyed hash of each block. Of course, any suitable method of generating a keyed hash based on the community key 210 may be used.
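As one concrete rendering of the MAC-based option described above, the following Python sketch uses HMAC-SHA-256. This is only one of the constructions the passage contemplates (concatenation or XOR before hashing would be organized similarly), and the key value shown is purely illustrative.

```python
# One possible keyed-hash construction: an HMAC over the data block using
# the shared community key 210. Any of the keyed constructions described
# above (concatenation, XOR, or another MAC) could be substituted here.

import hashlib
import hmac

def keyed_hash(block: bytes, community_key: bytes) -> bytes:
    """Generate the keyed hash of a data block under the community key."""
    return hmac.new(community_key, block, hashlib.sha256).digest()

community_key = b"illustrative-community-key"  # distributed out of band
digest = keyed_hash(b"example block", community_key)
```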
It should be appreciated that the member keys 208 and the community key 210 may be distributed to the computing devices 102 using any suitable key distribution mechanism. In one embodiment, a key distribution server (e.g., the content data server 106, the key server 108, or another key server) or oracle may be used to distribute the keys. In other embodiments, a conference key distribution system may be used, for example, to establish the community key 210. In some embodiments, the file keys 212 may also be distributed from the key distribution server to the computing devices 102.

The communication module 206 handles communication between the computing device 102 and remote devices (e.g., the content data server 106 and the key server 108) over the network 104. For example, in some embodiments, the communication module 206 handles secure and/or non-secure communication between the computing device 102 and a remote device. As discussed above, in some embodiments, the communication module 206 may handle such communication for the security engine 124 independently of the power state of the processor 110.

Referring now to FIG. 3, in use, each computing device 102 of the system 100 may execute a method 300 for storing content on the content data server 106. For example, the computing device 102 may store content on the content data server 106 for secure maintenance and later retrieval (e.g., in case the data is lost or corrupted on the computing device 102). The exemplary method 300 begins with block 302, in which the computing device 102 identifies the file to be stored on the content data server 106 and fragments the file into blocks. As discussed above, the computing device 102 may divide the file into fixed-length blocks, the block length/size of which may be determined by the content data server 106. In block 304, the computing device 102 uses the shared community key 210 to generate a keyed hash of each block. As discussed above, any mechanism suitable for generating a keyed hash may be implemented, so long as the community key 210 is used as the key for the hash function.

In block 306, the computing device 102 generates a file encryption key 212 and encrypts each block using the file encryption key 212 selected by the computing device 102 for use with that file. As discussed above, in some embodiments, the computing device 102 generates the file encryption key 212 (and the decryption key, in the case of asymmetric cryptography) "on the fly" to encrypt each file block. Of course, in other embodiments, the file encryption key 212 may be selected from a set of previously generated file encryption keys 212 or may be received from the key distribution server. In block 308, the computing device 102 generates a list of the blocks belonging to the fragmented file. In the exemplary embodiment, the list identifies the hashed blocks (i.e., the output of block 304). As discussed above, in some embodiments, the list of blocks identifies which blocks are associated with the fragmented file and the order in which those blocks should be combined (i.e., pieced together) once decrypted. In other embodiments, the generated list identifies the unencrypted blocks or the encrypted blocks rather than the hashed blocks. In any case, in block 310, the computing device 102 encrypts the list with its member encryption key 208. In some embodiments, in block 312, the computing device 102 also encrypts the file decryption key with its member encryption key 208 for additional security.

In block 314, the computing device 102 transmits the encrypted blocks, the keyed hash of each block, and the member identification of the computing device 102 to the content data server 106. Additionally, in block 316, the computing device 102 transmits the keyed hash of each block, the file decryption key 212 of the computing device 102, and the member identification of the computing device 102 to the key server 108.
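Pulling the preceding blocks together, the client-side portion of the method 300 might be sketched as follows. The sketch reuses the fragment and keyed_hash helpers from the earlier sketches; Fernet (from the third-party cryptography package) merely stands in for whichever symmetric cipher an implementation chooses, and member_key is assumed to already be in Fernet key format.

```python
# Hedged sketch of method 300's cryptographic steps; transport to the
# content data server and key server (blocks 314-318) is omitted.

from cryptography.fernet import Fernet

def store_file(data: bytes, community_key: bytes, member_key: bytes):
    blocks = fragment(data)                                   # block 302
    hashes = [keyed_hash(b, community_key) for b in blocks]   # block 304
    file_key = Fernet.generate_key()                          # block 306
    encrypted_blocks = [Fernet(file_key).encrypt(b) for b in blocks]
    member_cipher = Fernet(member_key)
    encrypted_list = member_cipher.encrypt(b"".join(hashes))  # blocks 308-310
    encrypted_file_key = member_cipher.encrypt(file_key)      # block 312
    return encrypted_blocks, hashes, encrypted_list, encrypted_file_key
```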
As mentioned above, in some embodiments, the file decryption key 212 transmitted to the key server 108 is further encrypted with the member encryption key 208 of the computing device 102. Of course, for improved efficiency, the computing device 102 may transmit the foregoing information for multiple blocks at the same time. Additionally, some embodiments of the methods and systems described above may be best suited to highly available parties (e.g., businesses and organizations), because other members are expected to be available to help decrypt the encrypted file decryption keys. In some embodiments, less highly available parties may implement the disclosed technologies without further encrypting the file decryption keys.

As described below, the content data server 106 and the key server 108 execute deduplication methods to determine whether to store a particular encrypted block and file decryption key 212 or, instead, to associate the member ID of the computing device 102 with an already-stored keyed hash (e.g., if the keyed hash of a particular block matches a keyed hash already stored). Depending on the particular implementation (e.g., which device stores which type of data), in block 318, the computing device 102 transmits the encrypted list of the blocks associated with the file to the content data server 106 and/or the key server 108. Of course, in some embodiments, methods similar to those described herein may be used to implement client-side deduplication prior to encrypting the data.

Referring now to FIG. 4, in use, each computing device 102 of the system 100 may execute a method 400 for retrieving content from the content data server 106. The method 400 begins with block 402, in which the computing device 102 determines whether a file has been requested. It should be appreciated that, in some embodiments, the computing device 102 may only retrieve files previously stored on the content data server 106. If the computing device 102 has requested a file from the content data server 106, in block 404, the computing device 102 receives, from the content data server 106, the encrypted list of the blocks associated with, or corresponding to, the requested file. Of course, in embodiments in which the list is stored locally and that data has not been lost (e.g., due to damage or natural disaster), the computing device 102 may instead retrieve the list from the memory 114 or the data storage 122. In embodiments in which the computing device 102 generates, in block 308 (see FIG. 3), a list identifying only the unencrypted blocks of the fragmented file, the content data server 106 may generate the list of the encrypted blocks associated with each file when the encrypted blocks are first received from one of the computing devices 102. In block 406, the computing device 102 decrypts the list of blocks using its member decryption key 208. From the list of blocks, as discussed above, the computing device 102 can determine which blocks are associated with the desired file.

In block 408, the computing device 102 determines whether the blocks associated with the desired file have been requested. If the computing device 102 has requested the data blocks associated with the desired file, in block 410, the computing device 102 receives, from the content data server 106, the corresponding encrypted blocks, keyed hashes, and member IDs (i.e., for each encrypted block, the member ID of the computing device 102 that stored that block on the content data server 106).
As described below, the content data server 106 associates a particular keyed hash and member ID with each encrypted block. Likewise, the key server 108 associates a particular keyed hash and member ID with each file decryption key 212. In block 412, the computing device 102 requests, from the key server 108, the file decryption key 212 associated with each encrypted block. In doing so, in block 414, the computing device 102 provides to the key server 108, for each requested block, the keyed hash and member ID as received from the content data server 106. As described above, each block stored on the content data server 106 is encrypted using a file encryption key 212 generated by one of the computing devices 102, and the corresponding file decryption key 212 may be used to decrypt the encrypted block. As discussed above, the file encryption key 212 and the file decryption key 212 may be the same key if symmetric cryptography is used. In some embodiments, the computing device 102 transmits only the keyed hash to the key server 108 when requesting the corresponding file decryption key 212.

As mentioned above, in some embodiments, the file decryption key 212 is further encrypted with the member encryption key 208 of the computing device 102 that originally stored the encrypted block on the content data server 106 (see block 312 of FIG. 3). In such embodiments, if any of the file decryption keys are encrypted with the member encryption key 208 of a different computing device 102 (i.e., due to deduplication), the computing device 102 that accessed the encrypted block cannot decrypt the file decryption key without the member decryption key 208 of that other computing device 102 and therefore cannot decrypt the file blocks. Accordingly, in block 416, the computing device 102 may request assistance from the corresponding member device in decrypting the encrypted file decryption key. In one embodiment, the computing device 102 transmits the encrypted file decryption key 212 to the corresponding member device, the member device uses its private member decryption key 208 to decrypt the encrypted file decryption key 212, and the member device transmits the decrypted file decryption key 212 back to the computing device 102 for use in decrypting the encrypted file blocks. In other words, the computing device 102 may request that the corresponding member device return a decrypted version of the encrypted file decryption key 212.

In block 418, the computing device 102 decrypts each encrypted block using the corresponding file decryption key 212 received from the key server 108. In embodiments in which the file decryption key 212 is further encrypted with a member encryption key 208 as discussed above, the computing device 102 must first decrypt the file decryption key 212 using the appropriate member decryption key 208 before it can use the file decryption key 212 to decrypt the encrypted blocks. If the encrypted block was originally stored on the content data server 106 by the computing device 102 itself, the file decryption key 212 will have been encrypted with the member encryption key 208 of the computing device 102. In those circumstances, the computing device 102 may simply decrypt the file decryption key 212 using its own member decryption key 208. However, in some embodiments, the file decryption key 212 may be encrypted with the member encryption key 208 of a member device other than the computing device 102.
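Under the same illustrative assumptions as the earlier sketch (Fernet standing in for the symmetric cipher), the unwrap-and-decrypt work of blocks 416-418 might look like the sketch below; the helper name is hypothetical.

```python
# Sketch of blocks 416-418: unwrap the member-key-encrypted file decryption
# key, then decrypt each encrypted block with the unwrapped key. member_key
# is the key of whichever member originally stored the blocks.

from cryptography.fernet import Fernet

def unwrap_and_decrypt(encrypted_blocks, wrapped_file_key, member_key):
    file_key = Fernet(member_key).decrypt(wrapped_file_key)
    cipher = Fernet(file_key)
    return [cipher.decrypt(block) for block in encrypted_blocks]
```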
In that case, as discussed above, the computing device 102 may request a decrypted version of the file decryption key 212 from that member device (e.g., by transmitting the encrypted file decryption key 212 for decryption by the member device).

Further, in some embodiments, in block 420, the computing device 102 may verify the integrity of the decrypted blocks. To do so, the computing device 102 uses the community key 210 to generate a keyed hash of each decrypted block (i.e., a reference keyed hash). The computing device 102 then compares each generated keyed hash to the corresponding keyed hash received from the content data server 106 in block 410. If the keyed hashes match, the decrypted data block is authentic and has not been modified. In block 422, the computing device 102 combines the decrypted blocks to reconstruct the desired file.
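The integrity check of block 420 amounts to recomputing and comparing keyed hashes. A hedged sketch, reusing the keyed_hash helper from the earlier sketch:

```python
import hmac

def verify_block(decrypted_block: bytes, received_hash: bytes,
                 community_key: bytes) -> bool:
    # Recompute the reference keyed hash and compare it, in constant time,
    # with the keyed hash received from the content data server in block 410.
    reference = keyed_hash(decrypted_block, community_key)
    return hmac.compare_digest(reference, received_hash)
```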
Referring now to FIG. 5, in use, the content data server 106 may execute a method 500 for deduplicating encrypted content. The method 500 begins with a determination of whether a member device 102 has transmitted file information for storage on the content data server 106. In some embodiments, the content data server 106 determines whether the computing device 102 has provided the appropriate information required to store the encrypted blocks on the content data server 106 (i.e., the encrypted blocks, the associated keyed hashes, the associated member ID, and possibly the encrypted list). As mentioned above, the member device 102 fragments a file into blocks and encrypts each block of the file. If file information has been received, in block 504, the content data server 106 accesses the next encrypted block of the file and the associated keyed hash and member ID.

In block 506, the content data server 106 compares the received keyed hash to the other keyed hashes stored on the content data server 106. If no match is found in block 508, the content data server 106 stores the received encrypted block, keyed hash, and member ID on the content data server 106 in block 510. In some embodiments, the content data server 106 also associates the keyed hash, the member ID, and the encrypted block with one another. However, if a match is found in block 508, the content data server 106 associates, in block 512, the member ID of the computing device 102 with the existing encrypted block and keyed hash (i.e., the matching keyed hash). That is, in some embodiments, the content data server 106 keeps track (using, for example, a list or other tracking mechanism) of which members have stored, or attempted to store, the encrypted blocks associated with a particular keyed hash.

In other words, the received encrypted block, keyed hash, and member ID are stored only if no matching keyed hash is already stored on the content data server 106. If a matching keyed hash is already stored on the content data server 106, the deduplication mechanism is employed: because the keyed hashes match, it is overwhelmingly likely that the unencrypted blocks are identical. Accordingly, rather than storing the duplicate information, the content data server 106 associates, or maps, the member ID of the computing device 102 to the already-stored encrypted block and keyed hash. In such a case, the stored encrypted block is encrypted using a key of a different computing device 102 (i.e., the file encryption key 212 generated by the computing device 102 that first stored the encrypted block on the content data server 106). The member ID of the member that originally stored the encrypted block is needed to identify which member holds the appropriate member decryption key 208 for decrypting the file decryption key 212, as described herein. The content data server 106 may use any data structure (e.g., a table) suitable for organizing, or associating with one another, the keyed hashes, encrypted blocks, and member IDs.

In block 514, the content data server 106 determines whether additional encrypted blocks remain. That is, the content data server 106 determines whether the keyed hash associated with each encrypted block identified in the file information has been stored on the content data server 106, whether from the member device 102 that transmitted the file or from an earlier file storage by another member device 102. If encrypted blocks remain, the method 500 returns to block 504, in which the content data server 106 accesses the next encrypted block, keyed hash, and member ID of the file. However, if the content data server 106 determines in block 514 that all of the encrypted blocks identified in the file information have been stored on the content data server 106, the content data server 106 stores the encrypted list of blocks in block 516. It should be appreciated that the methods described herein apply equally to embodiments in which the content data server 106 has received only a portion of the blocks of a file for storage.

Referring now to FIG. 6, in use, the key server 108 may execute a method 600 for deduplicating cryptographic keys. The method 600 begins with block 602, in which the key server 108 determines whether a keyed hash, a member ID, and a file decryption key 212 have been received from a computing device 102. If so, the information required to store the key has been transmitted to the key server 108, and, in block 604, the key server 108 compares the received keyed hash to the other keyed hashes stored on the key server 108.

As with the method 500, if no match between the received keyed hash and one of the stored keyed hashes is found in block 606 (i.e., no stored hash is the same hash), the key server 108 stores the received keyed hash, file decryption key 212, and member ID on the key server 108 in block 608 and associates those received items with one another. However, if a match is found in block 606, in some embodiments, rather than storing the duplicate information, the key server 108 associates, in block 610, the member ID of the computing device 102 with the existing file decryption key 212 and keyed hash (i.e., the matching keyed hash). In other embodiments, nothing is stored on the key server 108 in the case of a match. It should be appreciated that the key server 108 may use any suitable data structure to organize, or associate with one another, the keyed hashes, member IDs, and file decryption keys.
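Methods 500 and 600 share the same core decision, which the toy sketch below models with an in-memory dictionary; a real server would of course use persistent, indexed storage, and the names here are illustrative.

```python
# Shared dedup decision: store a new entry only when the keyed hash is
# unseen; otherwise associate the sender's member ID with the existing
# entry. "payload" is the encrypted block (method 500) or the encrypted
# file decryption key (method 600).

store: dict[bytes, dict] = {}  # keyed hash -> {"payload", "member_ids"}

def dedup_put(khash: bytes, payload: bytes, member_id: str) -> bool:
    """Return True if newly stored, False if deduplicated."""
    entry = store.get(khash)
    if entry is None:                      # blocks 508/606: no match found
        store[khash] = {"payload": payload, "member_ids": {member_id}}
        return True
    entry["member_ids"].add(member_id)     # blocks 512/610: associate only
    return False
```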
Referring now to FIG. 7, in use, the content data server 106 may execute a method 700 for providing requested content to a computing device 102. The method 700 begins with block 702, in which the content data server 106 determines whether one of the computing devices 102 has requested content from the content data server 106. If so, in block 704, the content data server 106 verifies that the computing device 102 is authorized to access the requested content (i.e., the requested encrypted blocks). In doing so, in some embodiments, in block 706, the content data server 106 compares the member ID of the computing device 102 to the list of member IDs associated with each requested encrypted block. As described above, in some embodiments, the content data server 106 maintains, for each encrypted block or corresponding keyed hash stored on the content data server 106, a list of the member IDs of the member devices that stored, or attempted to store, that block. The list thereby indicates which computing devices 102 have had "ownership" of the decrypted data associated with the encrypted block. Of course, in other embodiments, other means for determining whether a particular member is authorized to access an encrypted block may be used. For example, in some embodiments, standard authentication and access control protocols may be executed (e.g., prior to block 702). In block 708, the content data server 106 determines whether access is permitted. If so, in block 710, the content data server 106 transmits, or otherwise provides, to the requesting computing device 102 (i.e., the member device) the requested encrypted block, the keyed hash associated with the requested encrypted block, and the member ID associated with the requested encrypted block. Of course, a particular data block may be requested by a computing device 102 other than the computing device 102 that originally transmitted the block to the content data server 106.

Referring now to FIG. 8, in use, the key server 108 may execute a method 800 for providing a requested key to a computing device 102. The method 800 begins with block 802, in which the key server 108 determines whether one of the computing devices 102 has requested a file decryption key 212 from the key server 108. If so, in block 804, the key server 108 receives the keyed hash and member ID associated with the desired file decryption key 212. In block 806, the key server 108 transmits, or otherwise provides, to the requesting computing device 102 (i.e., the member device) the requested file decryption key 212 associated with the provided keyed hash and member ID. Additionally, in some embodiments, the key server 108 may execute a verification method similar to that used by the content data server 106 (see block 704 of FIG. 7) to confirm that the requesting computing device 102 is authorized to access the requested file decryption key 212. As described above, the requesting computing device 102 may apply the file decryption keys 212 to the encrypted blocks received from the content data server 106 to decrypt the encrypted data blocks. The decrypted blocks may then be recombined to reproduce the desired file. Of course, in embodiments in which the file decryption keys 212 are further encrypted with member encryption keys 208, the computing device 102 may use the member IDs of the relevant member devices to request assistance with their corresponding member decryption keys 208.
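The authorization checks of blocks 704-708 (and the analogous check the key server may perform in method 800) reduce to a membership test against the per-entry member ID list; continuing the dictionary sketch above, purely for illustration:

```python
def is_authorized(khash: bytes, member_id: str) -> bool:
    # Block 706: permit access only to members associated with the entry
    # in the store built up by dedup_put() in the previous sketch.
    entry = store.get(khash)
    return entry is not None and member_id in entry["member_ids"]
```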
Illustrative examples of the technologies disclosed herein are provided below. An embodiment of the technologies may include any one or more, and any combination, of the examples described below.

Example 1 includes a computing device for storing content on a content data server in a data deduplication system. The computing device comprises an encryption module to (i) encrypt each block of a fragmented file using a file encryption key generated by the computing device, (ii) generate a keyed hash of each block using a community key, and (iii) encrypt a file decryption key using a member encryption key of the computing device; and a communication module to transmit each encrypted block, the keyed hash of each block, and a member identification that identifies the computing device to the content data server, and to transmit the keyed hash of each block, the encrypted file decryption key, and the member identification to a key server.

Example 2 includes the subject matter of Example 1, and further includes a file management module to fragment a file of the computing device to generate each block of the fragmented file.

Example 3 includes the subject matter of any of Examples 1 and 2, and further includes a file management module to generate a list of each keyed hash associated with each block belonging to the fragmented file.

Example 4 includes the subject matter of any of Examples 1-3, and wherein the encryption module is to encrypt the list using the member encryption key, and the communication module is to transmit the encrypted list to at least one of the content data server and the key server.

Example 5 includes the subject matter of any of Examples 1-4, and wherein the keyed hash comprises a keyed secure hash algorithm (SHA) hash.

In Example 6, the file encryption key and the file decryption key are the same symmetric cryptographic key.

Example 7 includes a computing device for retrieving content from a content data server in a data deduplication system. The computing device comprises a communication module to (i) receive, from the content data server, the encrypted blocks of a fragmented file, a keyed hash associated with each encrypted block, and a member identification of each encrypted block that identifies the computing device that previously stored the corresponding encrypted block on the content data server, (ii) receive, from a key server, an encrypted file decryption key for each encrypted block in response to transmitting the keyed hash and member identification associated with each corresponding encrypted block to the key server, (iii) transmit each received encrypted file decryption key that is encrypted with a member encryption key of a member device other than the computing device to that other member device for decryption with the member decryption key of the other member device, and (iv) receive, from the other member device, a decrypted file decryption key corresponding to each received encrypted file decryption key encrypted with the member encryption key of the other member device; and an encryption module to (i) decrypt each file decryption key encrypted with the member encryption key of the computing device using the corresponding member decryption key of the computing device, and (ii) decrypt each encrypted block using the decrypted file decryption key associated with each corresponding encrypted block.

Example 8 includes the subject matter of Example 7, and wherein the communication module is to receive an encrypted list of each keyed hash associated with each encrypted block of the fragmented file, and the encryption module is to decrypt the encrypted list using the member decryption key of the computing device.

Example 9 includes the subject matter of any of Examples 7 and 8, and further includes a file management module to generate the file by combining the decrypted blocks based on the decrypted list.

Example 10 includes the subject matter of any of Examples 7-9, and wherein the encryption module is to generate a reference keyed hash of each decrypted block using the community key and to compare the reference keyed hash of each decrypted block to the received keyed hash associated with each encrypted block to verify the integrity of each block.

Example 11 includes a method for storing content on a content data server in a data deduplication system. The method comprises encrypting, on a computing device, each block of a fragmented file using a file encryption key generated by the computing device; generating, on the computing device, a keyed hash of each block using a community key; encrypting, on the computing device, a file decryption key using a member encryption key of the computing device; transmitting, from the computing device to the content data server, (i) each encrypted block, (ii) the keyed hash of each block, and (iii) a member identification that identifies the computing device; and transmitting, from the computing device to a key server, (i) the keyed hash of each block, (ii) the encrypted file decryption key, and (iii) the member identification.

Example 12 includes the subject matter of Example 11, and further includes fragmenting, on the computing device, a file of the computing device to generate each block of the fragmented file.

Example 13 includes the subject matter of any of Examples 11 and 12, and further includes generating, on the computing device, a list of each keyed hash associated with each block belonging to the fragmented file.

Example 14 includes the subject matter of any of Examples 11-13, and further includes encrypting, on the computing device, the list using the member encryption key, and transmitting, from the computing device, the encrypted list to at least one of the content data server and the key server.

Example 15 includes the subject matter of any of Examples 11-14, and wherein generating the keyed hash comprises generating a keyed secure hash algorithm (SHA) hash of each block using the community key.

Example 16 includes the subject matter of any of Examples 11-15, and wherein the file encryption key and the file decryption key are the same symmetric cryptographic key.

Example 17 includes a computing device comprising a processor and a memory having stored therein a plurality of instructions that, when executed by the processor, cause the computing device to perform the method of any of Examples 11-16.

Example 18 includes one or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, result in a computing device performing the method of any of Examples 11-16.

Example 19 includes a method for retrieving content from a content data server in a data deduplication system. The method comprises receiving, with a computing device and from the content data server, (i) the encrypted blocks of a fragmented file, (ii) a keyed hash associated with each encrypted block, and (iii) a member identification of each encrypted block that identifies the computing device that previously stored the corresponding encrypted block on the content data server; receiving, with the computing device and from a key server, an encrypted file decryption key for each encrypted block in response to transmitting, with the computing device, the keyed hash and member identification associated with each corresponding encrypted block to the key server; transmitting, with the computing device, each received encrypted file decryption key that is encrypted with a member encryption key of a member device other than the computing device to that other member device for decryption with the member decryption key of the other member device; receiving, with the computing device and from the other member device, a decrypted file decryption key corresponding to each received encrypted file decryption key encrypted with the member encryption key of the other member device; decrypting, on the computing device, each file decryption key encrypted with the member encryption key of the computing device using the corresponding member decryption key of the computing device; and decrypting, on the computing device, each encrypted block using the decrypted file decryption key associated with each corresponding encrypted block.

Example 20 includes the subject matter of Example 19, and further includes receiving, with the computing device, an encrypted list of each keyed hash associated with each encrypted block of the fragmented file, and decrypting, on the computing device, the encrypted list using the member decryption key of the computing device.

Example 21 includes the subject matter of any of Examples 19 and 20, and further includes generating the file by combining the decrypted blocks based on the decrypted list.

Example 22 includes the subject matter of any of Examples 19-21, and further includes generating, on the computing device, a reference keyed hash of each decrypted block using the community key, and comparing the reference keyed hash of each decrypted block to the received keyed hash associated with each encrypted block to verify the integrity of each block.

Example 23 includes a computing device comprising a processor and a memory having stored therein a plurality of instructions that, when executed by the processor, cause the computing device to perform the method of any of Examples 19-22.

Example 24 includes one or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, result in a computing device performing the method of any of Examples 19-22.

Example 25 includes a method for deduplicating encrypted content on a content data server of a data deduplication system. The method comprises receiving, on the content data server and from a first member device of the deduplication system, (i) a keyed hash of a block of a fragmented file, (ii) an encrypted version of the block, and (iii) a member identification that identifies the first member device; comparing, on the content data server, the keyed hash to other keyed hashes stored on the content data server; storing, on the content data server, the encrypted block, the keyed hash, and the member identification in response to the keyed hash not matching any of the other stored keyed hashes; associating, on the content data server, the member identification with a stored encrypted block and keyed hash in response to the keyed hash matching a stored keyed hash; determining, on the content data server and in response to a request from a second member device of the deduplication system for the encrypted block, whether the second member device is authorized to access the encrypted block; and providing, with the content data server, the encrypted block, the keyed hash, and the member identification in response to determining that the second member device is authorized to access the encrypted block.

Example 26 includes the subject matter of Example 25, and further includes storing an encrypted list of each keyed hash associated with each encrypted block of the fragmented file received from the first member device.

Example 27 includes the subject matter of any of Examples 25 and 26, and wherein determining whether the second member device is authorized to access the encrypted block comprises comparing the member identification of the second member device to a list of authorized member identifications for the encrypted block.

Example 28 includes a computing device comprising a processor and a memory having stored therein a plurality of instructions that, when executed by the processor, cause the computing device to perform the method of any of Examples 25-27.

Example 29 includes one or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, result in a computing device performing the method of any of Examples 25-27.

Example 30 includes a method for deduplicating cryptographic keys on a key server of a data deduplication system. The method comprises receiving, on the key server and from a first member device of the deduplication system, (i) a file decryption key encrypted with a member encryption key of the first member device, (ii) a keyed hash, and (iii) a member identification that identifies the first member device; comparing, on the key server, the keyed hash to other keyed hashes stored on the key server; storing, on the key server, the encrypted file decryption key, the keyed hash, and the member identification in response to the keyed hash not matching any of the other stored keyed hashes; and providing, with the key server, the encrypted file decryption key corresponding to the keyed hash and the member identification in response to a request from a second member device of the deduplication system for the encrypted file decryption key.

Example 31 includes the subject matter of Example 30, and further includes storing an encrypted list of each keyed hash associated with each encrypted block of the fragmented file received from the first member device.

Example 32 includes the subject matter of any of Examples 30 and 31, and further includes associating, on the key server, the member identification with a stored encrypted file decryption key and keyed hash in response to the keyed hash matching a stored keyed hash, and wherein providing the encrypted file decryption key comprises providing the encrypted file decryption key in response to determining that the member identification of the second member device is in a list of member identifications authorized to decrypt the encrypted file decryption key.

Example 33 includes a computing device comprising a processor and a memory having stored therein a plurality of instructions that, when executed by the processor, cause the computing device to perform the method of any of Examples 30-32.

Example 34 includes one or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, result in a computing device performing the method of any of Examples 30-32.

Example 35 includes a computing device for storing content on a content data server in a data deduplication system. The computing device comprises means for encrypting each block of a fragmented file using a file encryption key generated by the computing device; means for generating a keyed hash of each block using a community key; means for encrypting a file decryption key using a member encryption key of the computing device; means for transmitting, to the content data server, (i) each encrypted block, (ii) the keyed hash of each block, and (iii) a member identification that identifies the computing device; and means for transmitting, to a key server, (i) the keyed hash of each block, (ii) the encrypted file decryption key, and (iii) the member identification.

Example 36 includes the subject matter of Example 35, and further includes means for fragmenting a file of the computing device to generate each block of the fragmented file.

Example 37 includes the subject matter of any of Examples 35 and 36, and further includes means for generating a list of each keyed hash associated with each block belonging to the fragmented file.

Example 38 includes the subject matter of any of Examples 35-37, and further includes means for encrypting the list using the member encryption key, and means for transmitting the encrypted list to at least one of the content data server and the key server.

Example 39 includes the subject matter of any of Examples 35-38, and wherein the means for generating the keyed hash comprises means for generating a keyed secure hash algorithm (SHA) hash of each block using the community key.

Example 40 includes the subject matter of any of Examples 35-38, and wherein the file encryption key and the file decryption key are the same symmetric cryptographic key.

Example 41 includes a computing device for retrieving content from a content data server in a data deduplication system. The computing device comprises means for receiving, from the content data server, (i) the encrypted blocks of a fragmented file, (ii) a keyed hash associated with each encrypted block, and (iii) a member identification of each encrypted block that identifies the computing device that previously stored the corresponding encrypted block on the content data server; means for receiving, from a key server, an encrypted file decryption key for each encrypted block in response to transmitting the keyed hash and member identification associated with each corresponding encrypted block to the key server; means for transmitting each received encrypted file decryption key that is encrypted with a member encryption key of a member device other than the computing device to that other member device for decryption with the member decryption key of the other member device; means for receiving, from the other member device, a decrypted file decryption key corresponding to each received encrypted file decryption key encrypted with the member encryption key of the other member device; means for decrypting each file decryption key encrypted with the member encryption key of the computing device using the corresponding member decryption key of the computing device; and means for decrypting each encrypted block using the decrypted file decryption key associated with each corresponding encrypted block.

Example 42 includes the subject matter of Example 41, and further includes means for receiving an encrypted list of each keyed hash associated with each encrypted block of the fragmented file, and means for decrypting the encrypted list using the member decryption key of the computing device.

Example 43 includes the subject matter of any of Examples 41 and 42, and further includes means for generating the file by combining the decrypted blocks based on the decrypted list.

Example 44 includes the subject matter of any of Examples 41-43, and further includes means for generating a reference keyed hash of each decrypted block using the community key, and means for comparing the reference keyed hash of each decrypted block to the received keyed hash associated with each encrypted block to verify the integrity of each block.

Example 45 includes a computing device of a deduplication system for deduplicating encrypted content. The computing device comprises means for receiving, from a first member device of the deduplication system, (i) a keyed hash of a block of a fragmented file, (ii) an encrypted version of the block, and (iii) a member identification that identifies the first member device; means for comparing the keyed hash to other stored keyed hashes; means for storing the encrypted block, the keyed hash, and the member identification in response to the keyed hash not matching any of the other stored keyed hashes; means for associating the member identification with a stored encrypted block and keyed hash in response to the keyed hash matching a stored keyed hash; means for determining, in response to a request from a second member device of the deduplication system for the encrypted block, whether the second member device is authorized to access the encrypted block; and means for providing the encrypted block, the keyed hash, and the member identification in response to determining that the second member device is authorized to access the encrypted block.

Example 46 includes the subject matter of Example 45, and further includes means for storing an encrypted list of each keyed hash associated with each encrypted block of the fragmented file received from the first member device.

Example 47 includes the subject matter of any of Examples 45 and 46, and wherein the means for determining whether the second member device is authorized to access the encrypted block comprises means for comparing the member identification of the second member device to a list of authorized member identifications for the encrypted block.

Example 48 includes a computing device of a data deduplication system for deduplicating cryptographic keys. The computing device comprises means for receiving, from a first member device of the deduplication system, (i) a file decryption key encrypted with a member encryption key of the first member device, (ii) a keyed hash, and (iii) a member identification that identifies the first member device; means for comparing the keyed hash to other keyed hashes stored on the key server; means for storing the encrypted file decryption key, the keyed hash, and the member identification in response to the keyed hash not matching any of the other stored keyed hashes; and means for providing the encrypted file decryption key corresponding to the keyed hash and the member identification in response to a request from a second member device of the deduplication system for the encrypted file decryption key.

Example 49 includes the subject matter of Example 48, and further includes means for storing an encrypted list of each keyed hash associated with each encrypted block of the fragmented file received from the first member device.

Example 50 includes the subject matter of any of Examples 48 and 49, and further includes means for associating the member identification with a stored encrypted file decryption key and keyed hash in response to the keyed hash matching a stored keyed hash, and wherein the means for providing the encrypted file decryption key responds to determining that the member identification of the second member device is in a list of member identifications authorized to decrypt the encrypted file decryption key. |
A circuit having a local power block (408) for leakage reduction is disclosed. The circuit has a first portion (402) and a second portion (404, 406). The first portion (402) is configured to operate at an operating frequency substantially greater than the operating frequency of the second portion (404, 406). The second portion has a local power block (408) configured to decouple the second portion if the second portion is inactive, to reduce the leakage current associated with the second portion without sacrificing the performance of the first portion. |
1. A circuit comprising: a first portion; and a second portion, wherein the first portion is configured to operate at an operating frequency substantially greater than an operating frequency of the second portion, and wherein the second portion includes a local power block configured to decouple the second portion if the second portion is inactive.

2. The circuit of claim 1, wherein the local power block is configured to decouple the second portion in response to a control signal input to the second portion.

3. The circuit of claim 2, wherein the control signal is a pre-existing signal configured to control operation of the second portion.

4. The circuit of claim 1, wherein the local power block includes a local head circuit.

5. The circuit of claim 4, wherein the second portion includes a scan flip-flop circuit, and wherein the first portion includes a functional latch circuit.

6. The circuit of claim 5, wherein the local head circuit is configured to decouple the scan flip-flop circuit in response to a non-shift signal supplied to the scan flip-flop circuit for performing a scan procedure to test operation of the circuit.

7. The circuit of claim 1, wherein the local power block includes a local foot circuit.

8. The circuit of claim 7, wherein the second portion includes a scan flip-flop circuit, and wherein the first portion includes a functional latch circuit.

9. The circuit of claim 8, wherein the local foot circuit is configured to decouple the scan flip-flop circuit in response to a shift signal supplied to the scan flip-flop circuit for performing a scan procedure to test operation of the circuit.

10. The circuit of claim 1, wherein the local power block is an element in the circuit configured to both serve as the local power block and perform an operational function in the second portion.

11. A processor including a sequential circuit, the sequential circuit comprising: a first portion; and a second portion, wherein the second portion includes a local power block configured to decouple the second portion in response to a control signal input to the second portion, and wherein the control signal is a pre-existing signal configured to control operation of the second portion.

12. The processor of claim 11, wherein the first portion is configured to operate with a duty cycle substantially greater than a duty cycle of the second portion.

13. The processor of claim 11, wherein the first portion is configured to operate at an operating frequency substantially greater than an operating frequency of the second portion.

14. The processor of claim 11, wherein the local power block includes a local head circuit.

15. The processor of claim 14, wherein the second portion includes a scan flip-flop circuit, and wherein the first portion includes a functional latch circuit.

16. The processor of claim 15, wherein the pre-existing signal includes a non-shift signal.

17. The processor of claim 11, wherein the local power block includes a local foot circuit.

18. The processor of claim 17, wherein the second portion includes a scan flip-flop circuit, and wherein the first portion includes a functional latch circuit.

19. The processor of claim 18, wherein the pre-existing signal includes a shift signal.

20. The processor of claim 11, wherein the local power block is an element in the sequential circuit configured to both serve as the local power block and perform an operational function in the second portion.

21. A method of reducing leakage in a circuit having a first portion and a second portion, the method comprising: providing a local power block configured to decouple the second portion if the second portion is inactive; and using the local power block to decouple power from the second portion in response to a control signal input to the second portion, wherein the control signal is a pre-existing signal configured to control operation of the second portion.

22. The method of claim 21, wherein the local power block is a device that both decouples the power from the second portion and operates functionally in the second portion. |
Circuit with local power block for reducing leakage

Technical Field

The present invention relates generally to methods and systems for reducing leakage current in circuit designs and, more particularly, to methods and systems for reducing leakage current in low-activity circuits while maintaining the performance of high-activity circuits.

Background

As feature sizes become smaller in circuit designs, power leakage is becoming a more significant portion of the total power consumed by circuits (e.g., sequential circuits). Power leakage in circuit design is an important issue, particularly because power leakage can account for a significant proportion of the total power of an integrated circuit (IC).

For example, FIG. 1 illustrates a conventional device (e.g., an inverter) 100 having an input a, an output nz, a voltage source, and a ground. The capacitor 102 is charged by the supply voltage Vdd. In theory, once the capacitor 102 is fully charged, no current should flow through the circuit, and there should be no power leakage in the circuit. In practice, however, this is not the case, because the device may leak: even when the device is off or inactive, some current may still flow through the device. Accordingly, the device exhibits power leakage.

As technologies shrink and become faster, this problem becomes more pronounced. The smaller a circuit is designed and the faster it operates, the greater the leakage. Therefore, as circuit density increases, the leakage attributable to the additional devices also increases. There is a need to reduce such leakage because it occurs continuously, regardless of whether the device is performing any activity and regardless of whether the device's central processing unit (CPU) is on or off. As long as a power source is connected to the circuit, leakage can occur. As a result, leakage can account for a significant portion of the total power consumed by ICs that are inactive or that contain a large number of inactive circuits.

This is not a significant problem during activity, when the dynamic power is large. However, if no activity is being performed, the dynamic power is small (it may, for example, be zero). In the inactive state, therefore, the leakage current dominates the total power of the IC. This is particularly problematic in battery-powered devices, where power is limited.

In low-power circuit designs, it is necessary to minimize leakage current without sacrificing performance. For example, a device can conventionally be added to an entire circuit to gate the path from the voltage source to the circuit, or the path from the circuit to ground, in order to limit or reduce leakage in the circuit. In one conventional system, a global head or global foot is added to the power supply path from the voltage source to the circuit to limit leakage; in other words, the power supply is decoupled from the circuit to reduce leakage during periods of inactivity in the circuit. The global head is a decoupling device coupled between Vdd and the circuit, and the global foot is a decoupling device coupled between the circuit and Vss.

However, a conventional global head/foot must be scaled to pass and control larger currents, and it requires additional control signals connected to many locations in the circuit design. These design requirements increase costs, for example, in terms of the area occupied by the circuits on the IC and the added routing complexity.
Such conventional designs may also reduce the performance of the IC, for example, by reducing the speed of the circuit, as described in more detail below.

High threshold voltage (high-VT, or HVT) devices are used in conventional global heads and global feet to limit leakage. These high-VT devices cannot reduce leakage to zero, but they can at least reduce it significantly. The reduction is especially pronounced when compared to the low threshold voltage (low-VT, or LVT) circuits that may be used in the operating circuits supplied through the head or foot. Conventionally, either a global head or a global foot is used, because the combination of a global head and a global foot is redundant and provides no substantial benefit, while the additional head/foot would further increase the area and cost of the circuit.

In addition, conventional systems using a global head or foot may be undesirable because the global head/foot acts like a series resistor. Every time the circuit draws current, the current passes through the head/foot, which is equivalent to a series resistance, thereby reducing the efficiency and performance of the circuit. Thus, instead of Vdd/Vss being supplied directly to the circuit, the head circuit must be turned on and charged, which can increase total power consumption during operation, because the global head/foot is scaled to supply the high current drawn by the many circuit elements coupled to it.

Furthermore, conventional systems using a global head or foot can suffer considerable voltage/current spikes when the large global head/foot needed to couple/decouple the circuit to the power source is turned on. Some conventional systems therefore use staged ways of turning on the global head/foot to avoid the spikes. For example, some conventional systems use an intermediate device to turn on the head/foot so that the voltage ramps up, avoiding spikes and noise on Vdd and VVdd. Depending on the circuit configuration, this may take several cycles and further increases the complexity of the overall system. This conventional approach is also undesirable because of the wake-up time associated with it.

For at least the foregoing reasons, a conventional global head/foot may be expensive to implement and may significantly degrade performance. Other conventional systems that use two power supplies (i.e., one for high-VT devices and one for low-VT devices) are impractical or undesirable because such configurations significantly increase the cost of the circuit design, for example, in terms of area, complexity, the need for multiple power grids, and so on.

Still other conventional systems use high-VT devices in an attempt to limit or reduce leakage, because such devices require substantially higher voltages to turn on and therefore may leak less than low-VT or normal threshold voltage devices. However, the performance of a high-VT device may be substantially lower than that of a low-VT or normal-VT device. If performance is not a concern for a particular application, a high-VT device may be appropriate. In addition, high-VT devices do not function well (i.e., satisfactorily) at low supply voltages because of their higher threshold voltages: once the voltage drops, the devices function poorly, if at all.
Therefore, in many, if not most, cases, high VT devices cannot be a practical alternative for reducing or limiting leakage.
For at least the reasons stated above, a conventional global head/foot can be very expensive and requires additional (or dedicated) control signals to be connected to many locations in the circuit design, which adds cost, for example in terms of the area occupied by the circuit on the IC. Such conventional designs may also cost performance, for example, by reducing the speed of the circuit.
Therefore, what is needed is a method and system for reducing leakage while maintaining the performance of a circuit.

SUMMARY OF THE INVENTION
Exemplary embodiments of the present invention are directed to a system and method for reducing leakage current in a circuit design, and more specifically, to a method and system for reducing leakage current while maintaining performance of a circuit.
In one embodiment, a circuit for reducing leakage is disclosed. The circuit may include a first portion and a second portion. The first portion may be configured to operate at an operating frequency that is substantially greater than an operating frequency of the second portion. The second portion may include a local power block configured to decouple the second portion if the second portion is inactive.
In another embodiment, a circuit may include a first portion and a second portion. The second portion may include a local power block configured to decouple the second portion in response to a control signal input to the second portion. The control signal may be a pre-existing signal configured to control operation of the second portion.
In another embodiment, a method of reducing leakage in a circuit having, for example, a first portion and a second portion is disclosed. The method may include providing a local power block configured to decouple the second portion if the second portion is inactive, and decoupling the power source from the second portion using the local power block in response to a control signal input to the second portion. The control signal may be a pre-existing signal configured to control operation of the second portion.

BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings are presented to assist in describing the embodiments of the present invention, and the drawings are provided only to illustrate the embodiments and not to limit them.
FIG. 1 is a schematic diagram illustrating a conventional inverter circuit.
FIG. 2 is a schematic diagram illustrating one embodiment of a circuit having a local power block configured to decouple a second portion that operates at a low operating frequency.
FIG. 3 is a schematic diagram illustrating another embodiment of a circuit having a local power block configured to decouple a second portion operating at a low operating frequency.
FIG. 4 is a schematic diagram illustrating another embodiment of a circuit having a local power block configured to decouple a second portion operating at a low operating frequency.
FIG. 5 is a schematic diagram illustrating another embodiment of a circuit having a local power block configured to decouple a second portion operating at a low operating frequency.
FIG. 6 is a flowchart illustrating an embodiment of a method of reducing leakage in a circuit.

DETAILED DESCRIPTION
Aspects of the invention are disclosed in the following description and related drawings directed to exemplary embodiments of the invention.
Alternative embodiments may be devised without departing from the scope of aspects of the invention. In addition, well-known elements of the embodiments of the present invention will not be described in detail, or will be omitted, so as not to obscure the relevant details of the exemplary embodiments of the present invention.
The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. Likewise, the term "embodiment" or "embodiment of the invention" does not require that all embodiments of the invention include the features, advantages, or modes of operation discussed.
Exemplary embodiments recognize and/or take into account differences between portions of a circuit that operate at low operating frequencies and portions of the circuit that operate at high operating frequencies. For example, one embodiment may include a circuit having first and second portions. The embodiment may further include decoupling only those portions of the circuit that are not configured for high performance (i.e., portions of the circuit that are configured to operate at low operating frequencies or that are configured for lower activity), instead of having a global head or foot for all parts of the circuit.
A local power block may decouple a portion of the circuit by, for example, interrupting the path from Vdd to the circuit or from the circuit to ground while the portion of the circuit configured to operate at a low operating frequency is inactive. The local power block may be, for example, a local head or a local foot, or other means for establishing a voltage block. For example, a local head circuit may block the voltage potential between Vdd and an artificial reference (e.g., VVdd), and a local foot circuit may block the voltage potential between Vss and an artificial reference (e.g., VVss).
FIG. 2 illustrates one embodiment of a circuit 200 having at least a first portion 202 and a second portion 204. The first portion 202 (e.g., the higher performance portion) may be configured to operate at an operating frequency that is substantially greater than the operating frequency of the second portion 204 (e.g., the lower performance portion). The second portion 204 may have a local power block 208 (e.g., a local head) that is configured to decouple the second portion 204 in the event that the second portion 204 is inactive, to reduce the leakage current associated with the second portion 204 without sacrificing the performance of the first portion 202, which is configured to operate at an operating frequency that is substantially greater than the operating frequency of the second portion 204.
In another embodiment, the circuit 200 may have at least a first portion 202 and a second portion 204. In this embodiment, the second portion 204 may have a local power block 208 (e.g., a local head) configured to decouple the second portion 204 in response to a control signal (not shown) input to the second portion 204. In this embodiment, the control signal may be a pre-existing signal configured to control the operation of the second portion 204.
FIG. 2 also exemplarily illustrates a third portion 206 of the circuit, which may be configured to operate at low operating frequencies.
The third portion 206 may have another local power block 210 (e.g., a local head) that is configured to decouple the third portion 206 in the event that the third portion 206 is inactive, to reduce the leakage current associated with the third portion 206 without sacrificing the performance of the first portion 202, which may be configured to operate at an operating frequency that is substantially greater than the operating frequency of the third portion 206. In another exemplary embodiment, a local power block (e.g., 208 or 210) may be shared between two or more sections (e.g., 204, 206) configured to operate at a low operating frequency.
As another example, FIG. 3 illustrates an embodiment of a circuit 300 having at least a first portion 302 and a second portion 304. The first portion 302 may be configured to operate at an operating frequency that is substantially greater than the operating frequency of the second portion 304. The second portion 304 may have a local power block 308 (e.g., a local head) that is configured to decouple the second portion 304 in the event that the second portion 304 is inactive, to reduce the leakage current associated with the second portion 304 without sacrificing the performance of the first portion 302, which may be configured to operate at an operating frequency that is substantially greater than the operating frequency of the second portion 304.
In another embodiment, the circuit 300 may have at least a first portion 302 and a second portion 304. In this embodiment, the second portion 304 may have a local power block 308 (e.g., a local head) configured to decouple the second portion 304 in response to a control signal (not shown) input to the second portion. In this embodiment, the control signal may be a pre-existing signal configured to control the operation of the second portion 304. An example of using a pre-existing signal to control a local power block is provided in the following discussion of FIGS. 4 and 5.
FIG. 3 also illustrates a third portion 306 of the circuit that can be configured to operate at low operating frequencies. The third section 306 may have another local power block 310 (e.g., a local head) that is configured to decouple the third section 306 in the event that the third section 306 is inactive, to reduce the leakage current associated with the third section 306 without sacrificing the performance of the first portion 302, which may be configured to operate at an operating frequency that is substantially greater than the operating frequency of the third portion 306. In another exemplary embodiment, a local power block (e.g., 308 or 310) may be shared by two or more portions (e.g., 304, 306) that may be configured to operate at a low operating frequency.
In one aspect of the embodiments illustrated in FIG. 2 and FIG. 3, a local power block (e.g., 208, 210, 308, 310) may receive a control signal (not shown) that is input to control the operation of the second portion 204, 304.
The local power block may be configured to decouple the second portion 204, 304 and the third portion 206, 306 in response to a control signal input to the second portion 204, 304.
In another embodiment, the local power block may be configured to decouple the second portion 204, 304 and/or the third portion 206, 306 in response to a pre-existing control signal (not shown) that is input to the second portion 204, 304 and/or the third portion 206, 306 and that is designed to control the operation of those portions. This may include, for example, signals that would be used to control the second portion 204, 304 and/or the third portion 206, 306 regardless of the presence of local power blocks in the circuit. Therefore, no additional control signals need to be generated or routed.
The embodiments are not limited to the arrangements illustrated in FIGS. 2 and 3. Other exemplary embodiments include a local head/foot for a sequential circuit (e.g., a latch or flip-flop) that has a scan-based circuit associated with it. Examples of these embodiments are illustrated in FIGS. 4 and 5 and are described below.
FIG. 4 illustrates a latch circuit 400 (e.g., a latch) having a functional latch section 402, a scan flip-flop section 404, and a scan output section 406. FIG. 5 illustrates a similar latch circuit 500 having a functional latch section 502, a scan flip-flop section 504, and a scan output section 506. The scan flip-flop sections 404, 504 and the scan output sections 406, 506 may be used during testing of the device or during a scan operation. For example, a scan or test operation may be performed on the latch circuits 400, 500 at a foundry to determine whether the latch circuits 400, 500 are functioning properly before being shipped out of the foundry. After the latch circuits 400, 500 are shipped, the circuits associated with the scan or test operations (e.g., the scan flip-flop sections 404, 504 and the scan output sections 406, 506) may no longer be used.
In the embodiments of FIGS. 4 and 5, the functional latch sections 402, 502 may be used in the latch circuits 400, 500 for a dual purpose to save resources. The functional latch sections 402, 502 may have a data input (in) port (a) and a data output (out) port (q). For a scan operation, the scan flip-flop sections 404 and 504 may be master circuits, and the functional latch sections 402 and 502 may be slave circuits. During normal operation (e.g., not in test/scan mode), the scan flip-flop sections 404, 504 and the scan output sections 406, 506 may not be used (i.e., are configured as inactive). Instead, only the functional latch portions 402, 502 may be configured to operate.
As a practical matter, even when the scan flip-flop sections 404, 504, the functional latches 402, 502, and the scan output sections 406, 506 are inactive (i.e., turned off), these sections may leak. There is a need to reduce this leakage in the example illustrated in FIGS. 4 and 5, in which, after the circuit has been tested and the device has been shipped from the foundry, the scan flip-flop sections 404, 504 and/or the scan output sections 406, 506 may not be used at all. That is, even though the scan flip-flop sections 404, 504 and/or the scan output sections 406, 506 may not be used after the device is shipped from the foundry, the scan flip-flop sections 404, 504 and the scan output sections 406, 506, along with the functional latches 402, 502, may leak whenever the circuit is powered.
Therefore, leakage in the scan flip-flop sections 404, 504 and scan output sections 406, 506 may cause a large amount of power leakage, and even though these parts of the circuit may no longer be used, the leakage may account for a salient part of the total power consumed by the IC.
When the device is not in the scan mode, the functional latch sections 402, 502 receive data at the data port a and output data from the data port q. The functional latch sections 402, 502 can be optimized for performance because this section is used to perform the functions of the circuit. For example, one embodiment may use low VT devices in the functional latch sections 402, 502 to maximize performance. To reduce or minimize leakage in the scan flip-flop portion, embodiments may use normal VT devices in the scan flip-flop portions 404, 504. It should be noted that although high VT devices could be used to minimize or further reduce leakage, such high VT devices may impair circuit operation at low voltages.
In the embodiment of FIG. 4, the local foot 408 (the circled device) can be used as a local power block to reduce leakage. The local foot 408 may be configured to be coupled to the scan flip-flop section 404 to turn off the Vss supply, thereby generating a virtual Vss (VVss) for the sections 404 and 406.
The embodiment illustrated in FIG. 4 may have a local and dedicated power block (e.g., local foot 408) for the scan flip-flop section 404 and the scan output section 406. Thus, by adding a single device 408 configured to decouple only the scan flip-flop portion 404 and/or the scan output portion 406, the size of the device 408 can be scaled so that the impact on the area of the circuit 400 is reduced. In addition, the size of the local power block (e.g., the local foot 408) can be minimized because it does not have to drive a large load. Alternatively, a local power block (e.g., a local foot 408) may be dedicated to the scan flip-flop portion 404 and/or the scan output portion 406 only.
The embodiments are not limited to a local power block (e.g., the local foot 408) provided at the scan flip-flop section 404 and the scan output section 406. In another embodiment, first and second local power blocks (e.g., local feet (not shown)) may be coupled to each of the scan flip-flop portion 404 and the scan output portion 406. In the embodiment illustrated in FIG. 4, a local foot 408 controlled by a shift signal (sh) may be used at the scan output portion 406 (for example, at a Sout NAND gate) to decouple the power supply from the scan flip-flop portion 404. However, embodiments of the present invention are not limited in this respect.
In the example illustrated in FIG. 4, the shift signal (sh) may be used to control the scanning operation. The shift signal (sh) is a known signal for turning on scanning. The shift signal (sh) is a full-IC signal that can be used in various parts of the circuit. The shift signal (sh) provides an opportunity to store values in each sequential element against which testing is being performed to determine the functionality of the circuit. The shift signal (sh) may enable a scanning operation through the latch 402. The scan flip-flop portion 404 may be edge-triggered to avoid contention on all latches.
The embodiment illustrated in FIG. 4 may use the shift signal (sh), which is already an input to the latch circuit 400, as the control signal for controlling the scanning operation.
The embodiment may have a local power block (e.g., a local foot 408) to decouple the scan flip-flop portion 404 and/or the scan output portion 406 of the circuit 400. Instead of linking the local foot 408 to a dedicated control signal, embodiments may use the shift signal (sh) already used at the scan output portion 406 (e.g., at a Sout NAND gate) to turn the local foot 408 on and off. That is, the local foot 408 may be controlled by a shift signal (sh) that is already configured to be supplied to the scanning circuit. Therefore, this embodiment can significantly reduce the footprint associated with decoupling the scan flip-flop portion 404 and scan output portion 406 of the circuit 400, and, in turn, can reduce the costs associated with this device.
Therefore, the embodiment illustrated in FIG. 4 does not require other (e.g., special or dedicated) circuits to control the local foot 408. Since the scan flip-flop section 404 and the scan output section 406 may only be used during a scan or test operation, the shift signal (sh) can be used to power (e.g., couple and decouple) the scan flip-flop section 404 and the scan output section 406. Conventionally, a foot may require a separate control signal to turn on and off. However, in this embodiment, the shift signal (sh) can be used to turn the local foot 408 on and off, because if the shift signal (sh) is turned on (i.e., supplied to the local foot 408), then the scanning procedure is being performed, and the scan flip-flop section 404 and the scan output section 406 are coupled to a power source via the local foot 408. On the other hand, if the shift signal (sh) is turned off, the scan procedure is not being performed, and the scan flip-flop section 404 and the scan output section 406 may be decoupled from the power source via the local foot 408. Therefore, the local foot 408 can be powered up only during the scan, and the embodiment illustrated in FIG. 4 does not require another signal to control it.
The shift signal (sh) is a static signal throughout the scanning operation (i.e., it is not turned on and off). Therefore, during the scanning operation, the local foot 408 may be turned on to connect the scan flip-flop portion 404 and the scan output portion 406 to ground. When the scan operation is not being performed, the shift signal (sh) can be turned off, and therefore the local foot portion 408 can be turned off, and the scan flip-flop portion 404 and the scan output portion 406 can be decoupled from ground, thereby reducing or limiting any leakage in the scan flip-flop section 404 and the scan output section 406.
In addition, since the shift signal (sh) is a static signal (that is, the shift signal (sh) does not switch back and forth during the scanning procedure), the local foot 408 may be a long-channel device, a high VT device, or the like, so as to further minimize leakage. Since the scan flip-flop section 404 and the scan output section 406 may not be used after the scan program is executed, the performance of the scan flip-flop section 404 and the scan output section 406 will not affect the operating performance.
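The control relationship described above, in which the pre-existing shift signal doubles as the on/off control of the local foot, can be summarized in a short behavioral sketch (a minimal illustrative model in Python; the function and field names are assumptions made for this sketch and are not part of the disclosure):

    def scan_portion_state(sh: int) -> dict:
        """Behavioral model of the local foot 408 controlled by the shift signal (sh).

        When sh = 1, a scan/test operation is in progress: the local foot conducts,
        tying the scan flip-flop (404) and scan output (406) portions to ground (Vss),
        so they are functional but can leak. When sh = 0, the local foot is off, the
        portions sit on virtual Vss (VVss), and their leakage path to ground is cut.
        """
        foot_on = bool(sh)  # sh doubles as the foot control; no dedicated signal needed
        return {
            "local_foot_conducting": foot_on,
            "scan_portions_grounded": foot_on,  # coupled to Vss only during scan
            "scan_portions_leak": foot_on,      # leakage path exists only when grounded
            "functional_latch_powered": True,   # 402 stays directly supplied either way
        }

    # During normal operation (sh = 0) the scan portions are decoupled and do not leak.
    assert scan_portion_state(0)["scan_portions_leak"] is False
    # During a scan operation (sh = 1) they are grounded and fully functional.
    assert scan_portion_state(1)["scan_portions_grounded"] is True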
The embodiment of FIG. 4 may further reduce leakage by coupling the test clock inverter 412 to the virtual power node VVss. In this configuration, in the inactive state, ckt is low and nckt will be high instead of floating. Since the state of nckt will be stable, the circuit can be prevented from operating incorrectly while still reducing leakage. The clocked inverter 410 is not coupled to the virtual power node, and therefore the virtual power node has no effect on the clocked inverter 410.
In the embodiment of FIG. 5, the local head 508 (the circled device) can be used as a local power block to reduce leakage. The embodiment illustrated in FIG. 5 may add a local and dedicated power block (e.g., local head 508) to the scan flip-flop section 504 and the scan output section 506 to turn off the Vdd supply, thereby generating a virtual Vdd (VVdd). Therefore, by adding a single device 508 configured to decouple only the scan flip-flop portion 504 and/or the scan output portion 506, the size of the device 508 can be minimized so that the impact on the area of the circuit 500 can be reduced. In addition, the size of the local power block (e.g., the local head 508) can be minimized because it does not have to drive a large load. Alternatively, the local power block 508 may be dedicated to the scan flip-flop portion 504 and/or the scan output portion 506 only.
Embodiments of the present invention are not limited to a local power block (e.g., the local head 508) provided at the scan flip-flop section 504 and the scan output section 506. In another embodiment, a second local power block (e.g., a local head or foot (not shown)) may be coupled to the scan flip-flop portion 504. In the embodiment illustrated in FIG. 5, a local head 508 controlled by a non-shift (nsh) signal may be used at the Sout NOR gate to establish a local virtual power node (e.g., VVdd) and enable power to be decoupled from the scan flip-flop section 504 and the scan output section 506. However, the embodiments are not limited in this respect.
The embodiment illustrated in FIG. 5 may use the non-shift signal (nsh), which is already an input to the latch circuit 500, as the control signal for controlling the scanning operation. Instead of linking the local head 508 to a dedicated control signal, embodiments may use the non-shift signal (nsh) already used at the Sout NOR gate to turn the local head 508 on and off. That is, the local head 508 may be controlled by a signal that is already supplied to the scanning circuit. Therefore, this embodiment can reduce the area associated with decoupling the scan flip-flop portion 504 and the scan output portion 506 of the circuit 500, and, in turn, the cost associated with this device can be reduced.
The embodiment illustrated in FIG. 5 may not require other (e.g., special or dedicated) circuits to control the local head 508. Since the scan flip-flop section 504 and the scan output section 506 may only be used during a scan or test operation, the non-shift signal (nsh) can be used to power (e.g., couple and decouple) the scan flip-flop section 504 and/or the scan output section 506. If the non-shift signal (nsh) is turned off, the scan program is not being executed, and the scan flip-flop section 504 and the scan output section 506 can be decoupled from the power supply via the local head 508, thereby reducing or limiting any leakage in the scan flip-flop section 504 and/or the scan output section 506.
In contrast to the embodiment of FIG. 4, the embodiment illustrated in FIG. 5 does not couple the test clock inverter 512 to the virtual power node. The inverter 512 is not coupled to the virtual power node because the output of the inverter 512 would float if the input (ckt) to the inverter 512 were low in an inactive state, which could negatively affect the operation of the functional latch 502. Further, as in FIG.
4, the clocked inverter 510 is not coupled to a virtual power source.
As mentioned above, the scanning circuit can be used to test or scan the IC, or part thereof, only before the IC leaves the foundry, to determine whether the IC is operating properly. The scanning circuit may not be used for field operation. These scanning circuits may not be used again after the IC is shipped out of the foundry, and therefore the performance of a scanning or testing circuit is not a priority in circuit design. Accordingly, the scan circuit may be configured to operate at a lower operating frequency than the functional latched portion of the circuit.
In some applications, each sequential element of the circuit may have a scanning portion. The scanning circuit can be half of each latch, and the latches can take up a large portion of the IC area. For example, an IC may contain thousands of latches, which may occupy 25% of the IC area. Half of this area can therefore be the scanning circuitry of the latches, which can occupy a large area of the circuit and be a large source of leakage. Although the scan portion of each latch is not used for field operation, the scan portion may leak because the scan portion is coupled to Vdd. That is, although an inverter, a cascode amplifier pair, a gated inverter, and the like can be turned off, leakage may still occur. Therefore, if leakage from the scan portion of the latch can be prevented or limited, the overall leakage of the IC can be significantly reduced.
In other words, even when the functional latches 402, 502, the scan flip-flop sections 404, 504, and the scan output sections 406, 506 are inactive (i.e., turned off), these sections may leak. There is a need to reduce this leakage in the example illustrated in FIGS. 4 and 5, in which, after the circuit has been tested and the device has been shipped from the foundry, the scan flip-flop sections 404, 504 and scan output sections 406, 506 may not be used at all. That is, even though the scan flip-flop sections 404, 504 and/or the scan output sections 406, 506 may not be used after the device is shipped from the foundry (i.e., they may not be configured to operate), the circuit's scan flip-flop portions 404, 504 and scan output portions 406, 506 may leak whenever the circuit is powered. Therefore, leakage in the scan flip-flop sections 404, 504 and scan output sections 406, 506 can cause a large amount of power leakage, and even though these sections of the circuit may no longer be used, the leakage can account for a salient part of the total power consumed by the IC.
In the embodiments illustrated in FIGS. 2 to 5, the local power block may be configured to reduce or avoid effects on the functional portions of the circuit (e.g., the first portions 202, 302 illustrated in FIGS. 2 and 3, and the functional latches 402, 502 illustrated in FIGS. 4 and 5). Therefore, the described embodiments can reduce or limit any performance degradation of the first part (i.e., the functional part). On the other hand, the scanning portion of the circuit can be designed for lower performance (e.g., a lower operating frequency). Thus, one embodiment may have smaller, long-channel local power blocks (e.g., local feet) for significantly reducing leakage in the scan portion of the circuit. Compared to a global head or foot, the size of the local power block may be reduced, and the local power block may even be formed by a minimum-size device (e.g., a transistor), depending on the configuration of the second part being decoupled.
As exemplarily illustrated in FIG.
6, another embodiment may include a method 600 of reducing leakage in a circuit having a first portion (e.g., 202, 302, 402, 502) and a second portion (e.g., 204, 304, 404, 504). Method 600 may include (in block 602) providing a local power block (e.g., 208, 308, 408, 508) configured to decouple the second portion if the second portion is inactive. Method 600 may further include (in block 604) using the local power block to decouple power from the second portion in response to a control signal (e.g., shift (sh), non-shift (nsh)) input to the second portion. In one embodiment, the control signal may be a pre-existing signal configured to control the operation of the second portion. In another embodiment, the local power block may be a device that both decouples the power source from the second portion and functionally operates in the second portion. For example, transistor 408 in the NAND gate of 406 and transistor 508 in the NOR gate of 506 serve as both a local power block and part of the corresponding gate. Therefore, no additional device is required in the corresponding circuit.
Other aspects of the embodiments disclosed above will now be described with reference to the exemplary embodiments illustrated in FIGS. 4 and 5. It can be seen that each of the illustrated latch circuits (e.g., 400, 500) has eleven sections A, B, C, D, E, F, G, H, I, J, and K. In the embodiments of FIGS. 4 and 5, portions A, B, C, D, E, F, and G of each circuit may be associated only with performing a scan operation (e.g., 404, 406 and 504, 506), and the four sections H, I, J, and K of each circuit may be associated with the latch sections (e.g., 402, 502).
As can be seen in the examples illustrated in FIGS. 4 and 5, if a conventional global foot or global head were configured to couple a voltage source to all eleven parts A, B, C, D, E, F, G, H, I, J, and K of the circuit, or to decouple the voltage source from all eleven parts, then a global head/foot of considerable size and capacity would be required for all eleven sections. Moreover, such global heads and feet may affect or degrade the performance of the circuit. Alternatively, to reduce the impact on performance, global heads or feet are not used, and all eleven parts of the circuit may leak, including the scanning part, which may no longer be configured to perform the scanning operation (i.e., after the IC has been tested and shipped out of the foundry).
In contrast, the exemplary embodiments of the latch circuits 400, 500 illustrated in FIGS. 4 and 5 may reduce or eliminate the effect of leakage on the overall performance of the circuits 400, 500. In the latch circuits 400 and 500 illustrated in FIGS. 4 and 5, the local power blocks (for example, the local foot 408 and the local head 508) can substantially reduce or eliminate leakage in the seven sections A, B, C, D, E, F, and G (in 400) (e.g., the scan flip-flop 404, 504 and scan output 406, 506 sections). This may result in a substantial reduction in current leakage while minimizing or avoiding any performance degradation of the functional portions of the circuits 400, 500. Only the leakage of portions G (in 500), H, I, J, and K of the circuits 400, 500, which are associated with the functional latches 402, 502, will not be affected. Therefore, compared with the leakage in a conventional latch circuit, the leakage in the latch circuits 400, 500 illustrated in FIGS. 4 and 5 is substantially reduced.
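As a rough worked example of the magnitude at stake (illustrative arithmetic only: the 25% latch-area figure and the fifty-fifty latch/scan split are the estimates quoted earlier, and leakage is assumed to scale with area):

    # Illustrative estimate of the fraction of IC leakage that local power blocks can gate.
    latch_area_fraction = 0.25   # latches may occupy ~25% of the IC area (quoted above)
    scan_share_of_latch = 0.50   # roughly half of each latch can be scan circuitry

    # If leakage is roughly proportional to area, the scan circuitry that a local
    # foot/head can decouple accounts for about:
    gatable_leakage_fraction = latch_area_fraction * scan_share_of_latch
    print(f"~{gatable_leakage_fraction:.1%} of IC leakage can be cut off when idle")  # ~12.5%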
The embodiments are not limited to latches or scan mode circuits. Other embodiments may be applied to any circuit having one or more high-performance portions and one or more low-performance portions used in a low-activity mode, which may be mutually exclusive with a functional mode of the high-performance portion.
The difference between the high-performance and low-performance parts of a circuit can depend on several factors, such as the technology involved, the configuration of the circuit, whether the circuit is in a critical processing path, and so on. In some embodiments, the high-performance portion may be determined based on a circuit that governs the overall performance of the IC; for example, a critical-path circuit that limits the speed of the entire IC. On the other hand, a low-performance portion of a circuit can be defined as a circuit that has little or no effect on the overall performance of the IC (such as a scanning circuit), or as those parts that are inactive during normal operation (such as a test circuit) but may still affect the overall power consumption of the IC. For example, in the circuits illustrated in FIGS. 4 and 5, the operation of the functional latches 402, 502 may affect the frequency response/performance during normal operation, while the scanning portions (e.g., the scan flip-flop portions 404, 504 and scan output portions 406, 506) are for testing purposes only but may still affect the power consumption of the IC.
The embodiments are not limited to the arrangements illustrated in FIGS. 2 to 6. Other embodiments may have one or more portions of the circuit configured to operate at a higher operating frequency than one or more portions of the circuit configured to operate at a lower operating frequency. In other embodiments, the local power block may be one or more local head/foot portions for one or more lower performance portions. The selection of a local head or a local foot may be based on the configuration of the circuit and/or of the portion of the circuit from which the local power block decouples the power source.
In circuits configured to have higher performance paths (e.g., configured to operate at higher operating frequencies), there is a need to avoid or reduce the performance degradation associated with, for example, the use of global heads or global feet to minimize leakage. However, it is still desirable to reduce leakage in the lower performance part of the circuit. Therefore, embodiments may configure the circuit such that a higher performance portion of the circuit is directly connected to a voltage source, in order to minimize or limit performance degradation of the higher performance portion of the circuit. The lower performance portion of the circuit may have a local power block (e.g., a local head or a local foot) to minimize leakage from the lower performance portion of the circuit. Thus, the embodiments may optionally have localized power blocks (e.g., local heads or local feet) that are configured to be coupled to only the lower performance portions of the circuit, instead of to all portions (e.g., both the higher performance part and the lower performance part).
In embodiments having more than one lower performance portion, each lower performance portion may have a localized power block, or multiple portions may share a common local power block.
Design considerations for selecting a common local power block for multiple low-performance sections may include the availability of existing control signals to activate/deactivate the common local power block, the availability of existing devices to act as a local power block (e.g., the NAND gate transistor in FIG. 4), the total current switched, and the proximity of each portion to the other lower performance portions. Therefore, the physical size and configuration of each localized power block can be designed to reduce the area, cost, and power consumed, because each localized power block may serve only the lower performance portion of the circuit that it is configured to decouple from the power supply.
Embodiments of the present invention reduce performance variation, for example, by not using a global head or foot to decouple power from the higher performance portion of the circuit, while still reducing leakage in the lower performance portion of the circuit. Since the higher performance portion of the circuit does not have a local head or a local foot, the higher performance portion may leak. However, leakage in the lower performance portion of the circuit can account for a significant portion of the overall leakage, and thus can be a significant portion of the power loss of the circuit.
It should be understood that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
Although the foregoing disclosure shows illustrative embodiments of the invention, it should be noted that various changes and modifications could be made herein without departing from the scope of the invention as defined by the appended claims. The functions, steps, and/or actions of the method claims in accordance with the embodiments of the invention described herein need not be performed in any particular order. In addition, although elements of the present invention may be described or claimed in the singular, the plural is encompassed unless limitation to the singular is explicitly stated. |
Managing processing of memory commands in a memory subsystem with a high latency backing store. A method is described for managing the issuance and fulfillment of memory commands. The method includes receiving, by a cache controller of a memory subsystem, a first memory command corresponding to a set of memory devices. In response, the cache controller adds the first memory command to a cache controller command queue such that the cache controller command queue stores a first set of memory commands and sets a priority of the first memory command to either a high or low priority based on (1) whether the first memory command is of a first or second type and (2) an origin of the first memory command. |
1. A method comprising:
receiving, by a cache controller of a memory subsystem, a first memory command corresponding to a set of memory devices of the memory subsystem;
adding, by the cache controller, the first memory command to a cache controller command queue such that the cache controller command queue stores a first set of memory commands including the first memory command; and
setting, by the cache controller, a priority of the first memory command in the cache controller command queue to either a high priority or a low priority based on (1) whether the first memory command is of a first type or a second type and (2) a source of the first memory command.
2. The method of claim 1, further comprising:
determining, by the cache controller, a dependency of the first memory command on one or more memory commands in the first set of memory commands stored in the cache controller command queue; and
adjusting, by the cache controller, the priority of the one or more memory commands based on the determined dependency on the first memory command.
3. The method of claim 1, further comprising:
selecting, by the cache controller, a second memory command from the first set of memory commands to issue to a low-latency memory controller of a low-latency memory of the memory subsystem, wherein the latency of the low-latency memory for processing memory commands is less than the latency of the set of memory devices for processing memory commands, and
wherein the second memory command is a high priority memory command when the previous memory command issued to the low-latency memory controller was a low priority memory command.
4. The method of claim 3, wherein selecting the second memory command when the previous memory command issued to the low-latency memory controller was a high priority memory command comprises determining whether the number of low priority memory commands stored in a low-latency controller command queue of the low-latency memory controller meets a threshold,
wherein, when the number of low priority memory commands stored in the low-latency controller command queue fails to meet the threshold, the second memory command is selected because it is a high priority memory command; and
wherein, when the number of low priority memory commands stored in the low-latency controller command queue meets the threshold, the second memory command is selected because it is a low priority memory command.
5. The method of claim 4, further comprising:
issuing, by the cache controller, the second memory command to the low-latency memory controller;
adding, by the low-latency memory controller, the second memory command and a priority associated with the second memory command to the low-latency controller command queue such that the low-latency controller command queue stores a second set of memory commands including the second memory command; and
fulfilling, by the low-latency memory controller, a third memory command from the second set of memory commands based on the priority associated with each of the memory commands in the second set of memory commands, such that higher priority memory commands in the second set of memory commands are fulfilled in preference to lower priority memory commands.
6. The method of claim 1, wherein determining the dependency between the first memory command and the one or more memory commands comprises determining that the first memory command and the one or more memory commands are associated with the same area of cache storage or the same area of the set of memory devices.
7. The method of claim 1, wherein the first type is a read memory command and the second type is a write memory command, and
wherein the source of the first memory command is the host system or the memory subsystem.
8. A non-transitory computer-readable storage medium comprising instructions that, when executed by a processing device, cause the processing device to:
receive a first memory command corresponding to a set of memory devices of a memory subsystem;
add the first memory command to a cache controller command queue such that the cache controller command queue stores a first set of memory commands including the first memory command; and
set a priority of the first memory command in the cache controller command queue to either a high priority or a low priority based on (1) whether the first memory command is of a first type or a second type and (2) a source of the first memory command.
9. The non-transitory computer-readable storage medium of claim 8, wherein the processing device is further to:
determine a dependency of the first memory command on one or more memory commands in the first set of memory commands stored in the cache controller command queue; and
adjust the priority of the one or more memory commands based on the determined dependency on the first memory command.
10. The non-transitory computer-readable storage medium of claim 8, wherein the processing device is further to:
select a second memory command from the first set of memory commands to issue to a low-latency memory controller of a low-latency memory of the memory subsystem, wherein the latency of the low-latency memory for processing memory commands is less than the latency of the set of memory devices for processing memory commands, and
wherein the second memory command is a high priority memory command when the previous memory command issued to the low-latency memory controller was a low priority memory command.
11. The non-transitory computer-readable storage medium of claim 10, wherein selecting the second memory command when the previous memory command issued to the low-latency memory controller was a high priority memory command comprises determining whether the number of low priority memory commands stored in a low-latency controller command queue of the low-latency memory controller meets a threshold,
wherein, when the number of low priority memory commands stored in the low-latency controller command queue fails to meet the threshold, the second memory command is selected because it is a high priority memory command; and
wherein, when the number of low priority memory commands stored in the low-latency controller command queue meets the threshold, the second memory command is selected because it is a low priority memory command.
12. The non-transitory computer-readable storage medium of claim 11, wherein the processing device is further to:
issue the second memory command to the low-latency memory controller;
add the second memory command and a priority associated with the second memory command to the low-latency controller command queue such that the low-latency controller command queue stores a second set of memory commands including the second memory command; and
fulfill a third memory command from the second set of memory commands based on the priority associated with each of the memory commands in the second set of memory commands, such that higher priority memory commands in the second set of memory commands are fulfilled in preference to lower priority memory commands.
13. The non-transitory computer-readable storage medium of claim 8, wherein determining the dependency between the first memory command and the one or more memory commands comprises determining that the first memory command and the one or more memory commands are associated with the same area of cache storage or the same area of the set of memory devices.
14. The non-transitory computer-readable storage medium of claim 8, wherein the first type is a read memory command and the second type is a write memory command, and
wherein the source of the first memory command is the host system or the memory subsystem.
15. A system comprising:
a memory device; and
a processing device operably coupled to the memory device to:
receive a first memory command corresponding to the memory device;
add the first memory command to a cache controller command queue such that the cache controller command queue stores a first set of memory commands including the first memory command;
set a priority of the first memory command in the cache controller command queue to either a high priority or a low priority based on one or more of (1) whether the first memory command is of a first type or a second type and (2) a source of the first memory command;
determine a dependency of the first memory command on one or more memory commands in the first set of memory commands stored in the cache controller command queue; and
adjust the priority of the one or more memory commands based on the determined dependency on the first memory command,
wherein adjusting the priority of the one or more memory commands based on the determined dependency includes setting the priority of each of the one or more memory commands to the high priority when the first memory command has the high priority.
16. The system of claim 15, wherein the processing device is further to:
select a second memory command from the first set of memory commands to issue to a low-latency memory controller of a low-latency memory, wherein the latency of the low-latency memory for processing memory commands is less than the latency of the memory device for processing memory commands, and
wherein the second memory command is of the second type when the previous memory command issued to the low-latency memory controller was of the first type.
17. The system of claim 16, wherein selecting the second memory command when the previous memory command issued to the low-latency memory controller was a high priority memory command comprises determining whether the number of low priority memory commands stored in a low-latency controller command queue of the low-latency memory controller meets a threshold,
wherein, when the number of low priority memory commands stored in the low-latency controller command queue fails to meet the threshold, the second memory command is selected because it is a high priority memory command; and
wherein, when the number of low priority memory commands stored in the low-latency controller command queue meets the threshold, the second memory command is selected because it is a low priority memory command.
18. The system of claim 17, wherein the processing device is further to:
issue the second memory command to the low-latency memory controller;
add the second memory command and a priority associated with the second memory command to the low-latency controller command queue such that the low-latency controller command queue stores a second set of memory commands including the second memory command; and
fulfill a third memory command from the second set of memory commands based on the priority associated with each of the memory commands in the second set of memory commands, such that higher priority memory commands in the second set of memory commands are fulfilled in preference to lower priority memory commands.
19. The system of claim 15, wherein determining the dependency between the first memory command and the one or more memory commands comprises determining that the first memory command and the one or more memory commands are associated with the same area of cache storage or the same area of the memory device.
20. The system of claim 15, wherein the first type is a read memory command and the second type is a write memory command, and
wherein the source of the first memory command is the host system or the memory subsystem. |
Managing the processing of memory commands in a memory subsystem with a high-latency backing store
TECHNICAL FIELD
The present disclosure relates generally to managing the processing of memory commands, and more particularly, to managing the processing of memory commands in a memory subsystem with a high-latency backing store.

BACKGROUND
The memory subsystem may include one or more memory devices that store data. The memory devices may be, for example, non-volatile memory devices and volatile memory devices. Generally, a host system can utilize a memory subsystem to store data at and retrieve data from a memory device.

SUMMARY OF THE INVENTION
In one aspect, the present application provides a method comprising: receiving, by a cache controller of a memory subsystem, a first memory command corresponding to a set of memory devices of the memory subsystem; adding the first memory command to a cache controller command queue such that the cache controller command queue stores a first set of memory commands including the first memory command; and setting, by the cache controller, a priority of the first memory command in the cache controller command queue to either a high priority or a low priority based on (1) whether the first memory command is of a first type or a second type and (2) a source of the first memory command.
In another aspect, the present application provides a non-transitory computer-readable storage medium comprising instructions that, when executed by a processing device, cause the processing device to: receive a first memory command corresponding to a set of memory devices of a memory subsystem; add the first memory command to a cache controller command queue such that the cache controller command queue stores a first set of memory commands including the first memory command; and set a priority of the first memory command in the cache controller command queue to either a high priority or a low priority based on (1) whether the first memory command is of a first type or a second type and (2) a source of the first memory command.
In another aspect, the present application provides a system comprising: a memory device; and a processing device operably coupled to the memory device to: receive a first memory command corresponding to the memory device; add the first memory command to a cache controller command queue such that the cache controller command queue stores a first set of memory commands including the first memory command; set a priority of the first memory command in the cache controller command queue to either a high priority or a low priority based on one or more of (1) whether the first memory command is of a first type or a second type and (2) a source of the first memory command; determine a dependency of the first memory command on one or more memory commands in the first set of memory commands stored in the cache controller command queue; and adjust the priority of the one or more memory commands based on the determined dependency on the first memory command, wherein adjusting the priority of the one or more memory commands based on the determined dependency includes setting the priority of each of the one or more memory commands to the high priority when the first memory command has the high priority.

BRIEF DESCRIPTION OF THE DRAWINGS
The present disclosure will be more fully understood from the detailed description given below and from the accompanying drawings of various embodiments of the present disclosure.
However, the drawings should not be viewed as limiting the disclosure to specific embodiments, but are for explanation and understanding only.
FIG. 1 illustrates an example computing system including a memory subsystem in accordance with some embodiments of the present disclosure.
FIG. 2 is a flowchart of an example method for managing the issuance and fulfillment of memory commands in accordance with some embodiments of the present disclosure.
FIG. 3 is an example memory configuration according to some embodiments of the present disclosure.
FIG. 4 is an example memory configuration after updating priority indications based on dependencies, according to some embodiments of the present disclosure.
FIG. 5 is an example memory configuration after issuing a low priority memory command to a dynamic random access memory (DRAM) controller, according to some embodiments of the present disclosure.
FIG. 6 is an example memory configuration after a high priority memory command is issued to the DRAM controller, according to some embodiments of the present disclosure.
FIG. 7 is a flowchart of another example method for managing the issuance and fulfillment of memory commands in accordance with other embodiments of the present disclosure.
FIG. 8 is a block diagram of an example computer system in which embodiments of the present disclosure may operate.

DETAILED DESCRIPTION
Aspects of the present disclosure relate to managing the processing of memory commands in a memory subsystem with a high-latency backing store. The memory subsystem may be a storage device, a memory module, or a hybrid of a storage device and a memory module. Examples of memory devices and memory modules are described below in conjunction with FIG. 1. Generally, a host system may utilize a memory subsystem that includes one or more components, such as a memory device that stores data. The host system can provide data to be stored at the memory subsystem, and can request data to be retrieved from the memory subsystem.
The memory device may be a non-volatile memory device. A non-volatile memory device is a package of one or more dies. One example of a non-volatile memory device is a negative-AND (NAND) memory device. Other examples of non-volatile memory devices are described below in conjunction with FIG. 1. The dies in the package may be assigned to one or more channels for communication with the memory subsystem controller. Each die may consist of one or more planes. Planes can be grouped into logical units (LUNs). For some types of non-volatile memory devices (e.g., NAND memory devices), each plane consists of a set of physical blocks, which are groups of memory cells that store data. A cell is an electronic circuit that stores information.
Depending on the cell type, a cell can store one or more bits of binary information and has various logic states related to the number of bits being stored. The logic states may be represented by binary values, such as "0" and "1", or a combination of such values. There are various types of cells, such as single-level cells (SLC), multi-level cells (MLC), triple-level cells (TLC), and quad-level cells (QLC). For example, an SLC can store one bit of information and has two logic states.
The memory subsystem provides the host system access to data. Specifically, a host system may request to read data from or write data to a set of memory devices.
Thereafter, the memory subsystem processes these requests: it provides the requested data to the host system in response to a read memory request (sometimes referred to as a read command or read memory command), or writes the provided data to the memory device in response to a write memory request (sometimes referred to as a write command or write memory command). To efficiently handle requests from the host system, the memory subsystem may rely on a set of caches. Specifically, data designated for, or currently stored in, a set of high-latency memory devices (sometimes referred to as a backing store) may be stored in low-latency cache storage. Thus, the memory subsystem can access data from the cache storage (e.g., read data from the cache storage, or write data to the cache storage, from which it will eventually be flushed to the memory devices), rather than accessing the data directly from the high-latency memory devices. Since the cache storage provides lower latency (e.g., lower read and/or write times) compared to the memory devices, the memory subsystem can process host system requests in an efficient manner by relying on the cache storage. Furthermore, in order to maintain concurrency of memory operations, the memory subsystem may cache requests until the memory subsystem has a chance to issue and fulfill them.
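A minimal sketch of this read/write path follows (illustrative Python only; the class and method names are assumptions made for this sketch, not interfaces described by the disclosure):

    class CachedBackingStore:
        """Serve accesses from low-latency cache storage when possible, falling
        back to the high-latency backing store (the memory devices) on a miss."""

        def __init__(self, backing_store):
            self.backing_store = backing_store  # high-latency memory devices
            self.cache = {}                     # low-latency cache storage

        def read(self, address):
            if address in self.cache:            # cache hit: low-latency path
                return self.cache[address]
            data = self.backing_store[address]   # cache miss: high-latency access
            self.cache[address] = data           # fill so subsequent reads are fast
            return data

        def write(self, address, data):
            self.cache[address] = data  # absorbed by the cache and flushed to the
                                        # backing store later (flush not shown)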
That is, these techniques treat each type of memory request (e.g., read and write requests) the same, regardless of origin, and thus do not prioritize improving the QoS of the host system.

Aspects of the present disclosure address the above and other deficiencies by prioritizing read requests originating from the host system to provide greater processing improvements to the host system (e.g., reducing the latency of processing read requests originating from the host system). Specifically, instead of processing memory requests in the order in which the memory subsystem receives or generates them, the memory subsystem may process read requests originating from the host system in preference to other pending memory requests (e.g., write requests from the host system, and read and write requests from other sources), while still respecting memory request dependencies. Specifically, when a memory request is received by the cache controller of the memory subsystem, the cache controller adds the memory request to the cache controller command queue. The cache controller command queue stores the memory request with a priority indication indicating whether the memory request is a high or low priority request. If (1) the memory request is a read request received from the host system or (2) a high priority request is dependent on the newly received request (e.g., accesses the same sector, row, or other access unit of a memory device or cache), the cache controller sets the newly received request to high priority. Given the priorities of the memory requests, the cache controller may periodically iterate through the cache controller command queue to select memory commands for issuance to the dynamic random access memory (DRAM) controller. The DRAM controller can selectively prioritize high-priority commands based on a high-priority flag (that is, when the high-priority flag is set, the DRAM controller prioritizes high-priority commands) when issuing and fulfilling received memory requests. To ensure that low-priority requests are not ignored by the memory subsystem, the cache controller may alternately issue low-priority and high-priority requests to the DRAM controller (alternating in an equal or unequal fashion). Further, by setting a maximum number of low-priority memory requests that may be outstanding in the DRAM controller, the cache controller can ensure that low-priority requests are not overrepresented in the DRAM controller. If the maximum number of low-priority memory requests that may be outstanding in the DRAM controller is met, the cache controller will not issue another low-priority request until a low-priority request is fulfilled and thus removed from the DRAM controller.

The memory request strategy outlined above reduces the latency of host read requests by (1) issuing host read requests, and the requests that those host read requests depend on, earlier rather than issuing them based on age, (2) using the maximum low priority value (that is, the maximum number of low-priority memory requests that may be outstanding in the DRAM controller) to ensure that there is always availability in the DRAM controller for high priority requests (e.g., read requests from the host system), and (3) selectively enabling priority logic in the DRAM controller via the high priority flag. This design, described in more detail below, can provide up to a 77.5% improvement in the average latency of read requests originating from the host system.
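To make the queue structure concrete, the following is a minimal Python sketch of a command queue with per-command priority indications. All class, field, and method names here are hypothetical illustrations rather than elements of the disclosure, and a real implementation would be hardware logic rather than software.

```python
from dataclasses import dataclass, field

@dataclass
class QueueEntry:
    # One slot of a cache controller command queue (illustrative names).
    command_id: int
    is_read: bool        # read vs. write memory request
    from_host: bool      # originated at the host system vs. internally
    priority: str = "L"  # priority indication: "H" (high) or "L" (low)

@dataclass
class CacheControllerQueue:
    entries: list = field(default_factory=list)

    def enqueue(self, entry):
        """Commands are stored in the order in which they are received."""
        self.entries.append(entry)

    def oldest(self, priority):
        """Return the oldest pending command with the given priority, if any."""
        for e in self.entries:
            if e.priority == priority:
                return e
        return None
```

Keeping entries in receipt order preserves the age information that the selection logic described below still relies on when alternating between priorities.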
FIG. 1 illustrates an example computing system 100 including a memory subsystem 110 in accordance with some embodiments of the present disclosure. Memory subsystem 110 may include media such as one or more volatile memory devices (e.g., memory device 140), one or more non-volatile memory devices (e.g., memory device 130), or a combination of such media.

Memory subsystem 110 may be a storage device, a memory module, or a hybrid of a storage device and a memory module. Examples of storage devices include solid state drives (SSDs), flash drives, universal serial bus (USB) flash drives, embedded multimedia controller (eMMC) drives, Universal Flash Storage (UFS) drives, secure digital (SD) cards, and hard disk drives (HDDs). Examples of memory modules include dual in-line memory modules (DIMMs), small outline DIMMs (SO-DIMMs), and various types of non-volatile dual in-line memory modules (NVDIMMs).

Computing system 100 may be a computing device such as a desktop computer, laptop computer, web server, mobile device, vehicle (e.g., airplane, drone, train, car, or other means of transportation), Internet of Things (IoT) enabled device, embedded computer (e.g., a computer included in a vehicle, industrial equipment, or a networked commercial device), or such a computing device that includes memory and a processing device.

Computing system 100 may include a host system 120 coupled to one or more memory subsystems 110. In some embodiments, host system 120 is coupled to different types of memory subsystems 110. FIG. 1 illustrates one example of a host system 120 coupled to one memory subsystem 110. As used herein, "coupled to" or "coupled with" generally refers to a connection between components, which can be an indirect communicative connection or a direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc.

Host system 120 can include a processor chipset and a software stack executed by the processor chipset. The processor chipset can include one or more cores, one or more caches, a memory controller (e.g., an NVDIMM controller), and a storage protocol controller (e.g., a PCIe controller, SATA controller). Host system 120 uses memory subsystem 110, for example, to write data to memory subsystem 110 and read data from memory subsystem 110.

Host system 120 can be coupled to memory subsystem 110 via a physical host interface. Examples of a physical host interface include, but are not limited to, a Serial Advanced Technology Attachment (SATA) interface, a Peripheral Component Interconnect Express (PCIe) interface, a Universal Serial Bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), a Small Computer System Interface (SCSI), a Double Data Rate (DDR) memory bus, a Dual In-line Memory Module (DIMM) interface (e.g., a DIMM socket interface that supports Double Data Rate (DDR)), Open NAND Flash Interface (ONFI), Double Data Rate (DDR), Low Power Double Data Rate (LPDDR), or any other interface. The physical host interface can be used to transmit data between host system 120 and memory subsystem 110. When memory subsystem 110 is coupled with host system 120 by a PCIe interface, host system 120 can further utilize a non-volatile memory host controller interface specification (NVMe) interface to access components (e.g., memory device 130). The physical host interface can provide an interface for passing control, address, data, and other signals between memory subsystem 110 and host system 120. FIG. 1 illustrates a memory subsystem 110 as an example.
In general, host system 120 can access multiple memory subsystems via the same communication connection, multiple separate communication connections, and/or a combination of communication connections.

The memory devices 130, 140 can include any combination of the different types of non-volatile memory devices and/or volatile memory devices. The volatile memory devices (e.g., memory device 140) can be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM).

Some examples of non-volatile memory devices (e.g., memory device 130) include NAND-type flash memory and write-in-place memory, such as a three-dimensional cross-point ("3D cross-point") memory device, which is a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write-in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell previously being erased. NAND-type flash memory includes, for example, two-dimensional NAND (2D NAND) and three-dimensional NAND (3D NAND).

Although non-volatile memory devices such as NAND-type memory (e.g., 2D NAND, 3D NAND) and 3D cross-point arrays of non-volatile memory cells are described, memory device 130 can be based on any other type of non-volatile memory, such as read-only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide-based memories, ferroelectric transistor random access memory (FeTRAM), ferroelectric random access memory (FeRAM), magnetic random access memory (MRAM), spin transfer torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide-based RRAM (OxRAM), negative-OR (NOR) flash memory, and electrically erasable programmable read-only memory (EEPROM).

Memory subsystem controller 115 (or controller 115, for simplicity) can communicate with memory device 130 to perform operations such as reading data, writing data, or erasing data at memory device 130 and other such operations (e.g., in response to commands scheduled on a command bus by controller 115). Memory subsystem controller 115 can include hardware, such as one or more integrated circuits and/or discrete components, buffer memory, or a combination thereof. The hardware can include digital circuitry with dedicated (i.e., hard-coded) logic to perform the operations described herein. Memory subsystem controller 115 can be a microcontroller, special-purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor.

Memory subsystem controller 115 can include a processing device 117 (processor) configured to execute instructions stored in a local memory 119. In the illustrated example, the local memory 119 of memory subsystem controller 115 includes an embedded memory configured to store program instructions for performing various processes, operations, logic flows, and routines that control operation of memory subsystem 110, including handling communications between memory subsystem 110 and host system 120.

In some embodiments, local memory 119 can include memory registers storing memory pointers, fetched data, and the like. Local memory 119 can also include read-only memory (ROM) for storing microcode.
Although the example memory subsystem 110 in FIG. 1 is illustrated as including the memory subsystem controller 115, in another embodiment of the present disclosure, the memory subsystem 110 does not include the memory subsystem controller 115 and can instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory subsystem).

In general, memory subsystem controller 115 can receive commands or operations from host system 120 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to memory device 130 and/or memory device 140. Memory subsystem controller 115 can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical address (e.g., a logical block address (LBA), namespace) and a physical address (e.g., a physical block address) associated with memory devices 130. Memory subsystem controller 115 can further include host interface circuitry to communicate with host system 120 via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access memory device 130 and/or memory device 140, as well as convert responses associated with memory device 130 and/or memory device 140 into information for host system 120.

Memory subsystem 110 can also include additional circuitry or components that are not illustrated. In some embodiments, memory subsystem 110 can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from memory subsystem controller 115 and decode the address to access memory device 130.

In some embodiments, memory device 130 includes a local media controller 135 that operates in conjunction with memory subsystem controller 115 to execute operations on one or more memory cells of memory device 130. An external controller (e.g., memory subsystem controller 115) can externally manage memory device 130 (e.g., perform media management operations on memory device 130). In some embodiments, memory device 130 is a managed memory device, which is a raw memory device combined with a local controller (e.g., local controller 135) for media management within the same memory device package. An example of a managed memory device is a managed NAND (MNAND) device.

The memory subsystem 110 includes a cache controller 142 and a DRAM controller 144 (e.g., a low-latency memory controller), which can selectively issue and fulfill read requests/commands issued from the host system 120 to improve the Quality of Service (QoS) for the host system 120 (e.g., reducing latency associated with fulfilling read requests originating from host system 120). In some embodiments, controller 115 includes at least a portion of cache controller 142 and/or DRAM controller 144. For example, controller 115 can include a processor 117 (processing device) configured to execute instructions stored in local memory 119 for performing the operations described herein. In some embodiments, cache controller 142 and/or DRAM controller 144 is part of host system 120, an application, or an operating system.

Cache controller 142 and/or DRAM controller 144 can selectively issue and fulfill read requests issued from host system 120 to improve QoS for host system 120.
Further details regarding the operation of cache controller 142 and/or DRAM controller 144 are described below.

FIG. 2 is a flowchart of an example method 200 for managing the issuance and fulfillment of memory commands in accordance with some embodiments of the present disclosure. Method 200 can be performed by processing logic that can include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, method 200 is performed by cache controller 142 and/or DRAM controller 144 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes/operations can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes/operations can be performed in a different order, and some processes/operations can be performed in parallel. Additionally, one or more processes/operations can be omitted in various embodiments. Thus, not all processes/operations are required in every embodiment. Other processes are also possible.

The method 200 of FIG. 2 will be described in conjunction with several other supporting figures, including the memory configuration 300 of FIG. 3. However, the method 200 is not limited to the embodiments shown in these supporting figures; the supporting figures are provided for illustrative purposes.

Although described using a DRAM controller, the memory configuration 300 and computing system 100 generally include both high-latency memory and low-latency memory (i.e., the high-latency memory has higher latency than the low-latency memory with respect to performing reads and writes). In memory configuration 300, memory devices 130 and/or 140 (e.g., backing store 308) are the high-latency memory, while cache storage 302 is the low-latency memory. Accordingly, the use of specific memory types (e.g., DRAM) is for illustrative purposes.

As shown in FIG. 2, method 200 may begin at operation 202. At operation 202, the processing device determines whether a memory command (sometimes referred to as a memory request, command, or request) has been received or otherwise detected. For example, in the memory configuration 300 of FIG. 3, the processing device may determine whether a memory command 312 has been received by the cache controller 142. The memory command 312 may be an internally generated command (e.g., a memory command 312 generated by memory subsystem 110) or an externally generated command (e.g., a memory command 312 received from host system 120). For example, with respect to internally generated commands, the memory subsystem 110 may trigger (1) a fill operation, in which data from the backing store 308 (e.g., memory devices 130/140) is added to the cache storage 302, and/or (2) an eviction or purge operation, in which dirty data in cache storage 302 is evicted from cache storage 302 to backing store 308.
In one embodiment, the cache controller 142 may trigger (1) a direct memory access (DMA) fill engine 304 to perform a fill operation, including passing a write memory command 312 to the cache controller 142 so that data from the backing store 308 may be written/filled into cache storage 302, and/or (2) a DMA eviction/purge engine 306 to perform an eviction or purge operation, including passing a read memory command 312 to the cache controller 142 so that data in cache storage 302 is read and can be evicted and written to backing store 308. In the case of externally generated memory commands 312, host system 120 may transmit (1) a read memory command 312 to request data from backing store 308, which may also be cached by memory subsystem 110 (e.g., in cache storage 302), and/or (2) a write memory command 312 to request that data be written to backing store 308, where the data may also be cached by memory subsystem 110 prior to being written to backing store 308. In response to the processing device determining at operation 202 that a memory command 312 has been received, the method 200 proceeds to operation 204.

At operation 204, the processing device adds the newly received memory command 312 to the cache controller command queue 310. Specifically, the cache controller command queue 310 stores memory commands 312 that have not yet been issued/fulfilled, and the processing device adds newly received memory commands 312 to the cache controller command queue 310 in the order in which the memory commands 312 were received by the cache controller 142. Accordingly, the cache controller command queue 310 contains the received memory commands 312, which have not yet been issued to DRAM controller 144 for fulfillment, in the order in which they were received. For example, in the example of FIG. 3, cache controller 142 received command C4 before command C5, command C5 before command C6, and command C6 before command C7. For purposes of illustration, command C7 is the newly received memory command 312 that the processing device determined to have been received at operation 202 and that the processing device added to the cache controller command queue 310 at operation 204.

In addition to storing memory commands 312 in the order in which they were received, the cache controller command queue 310 also tracks the priority of pending/outstanding memory commands 312. Specifically, the cache controller command queue 310 contains a priority indication 314 that indicates whether the associated memory command 312 is high priority (H) or low priority (L). As will be discussed in greater detail below, high priority memory commands 312 are generally selected for issuance to the DRAM controller 144 for fulfillment before low priority memory commands 312. In one embodiment, when a memory command 312 is added to the cache controller command queue 310 at operation 204, the memory command 312 is initially given a low priority. However, as described below, in other embodiments the processing device may set the priority of a newly received memory command 312 at a later point.

At operation 206, the processing device updates the dependency tracker 316 based on the newly received memory command 312. Specifically, the dependency tracker 316 tracks the interdependencies of all pending/outstanding memory commands 312 (i.e., memory commands 312 that have not yet been issued and fulfilled). In one embodiment, the processing device detects dependencies between commands when the commands target the same sector, row, or other access unit of cache storage 302 or backing store 308.
In response to the dependency tracker 316 determining that the newly received memory command 312 is dependent on a previously received memory command 312, the dependency tracker 316 records the dependency. For example, as shown in FIG. 3, memory commands C4, C5, C6, and C7 are pending memory commands 312, and, as noted above, memory command C7 is newly received. In this example, memory command C5 does not share an interdependency with any other memory command 312 (i.e., memory command C5 does not depend on another memory command 312, and no other memory command 312 depends on memory command C5). In contrast, memory commands C4, C6, and C7 are interdependent. In one embodiment, dependency tracker 316 may determine dependencies based on a region or portion of (1) cache storage 302 or (2) backing store 308 associated with a memory command 312. For example, cache storage 302 may be a multi-way associative cache. In this example, although memory command C4 and memory command C6 may not reference the same address (i.e., different logical and/or physical addresses), the two memory commands 312 may be associated with the same region (sometimes referred to as a sector or access unit) of cache storage 302 (e.g., the addresses referenced by memory commands C4 and C6 are associated with at least partially overlapping regions of cache storage 302). For instance, the shared region of cache storage 302 may be one or more cache lines. Based on this association/commonality, dependency tracker 316 indicates that memory command C6 is dependent on memory command C4. Likewise, when dependency tracker 316 determines that memory command C7 is associated with the same region of cache storage 302 as memory commands C4 and/or C6, dependency tracker 316 indicates that memory command C7 is interdependent with memory commands C4 and/or C6. For example, as shown in FIG. 3, memory command C7 depends on memory command C6, which depends on memory command C4.
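The dependency rule (commands conflict when they touch the same sector, line, or other access unit) can be sketched as follows. The region size, the helper names, and the (address, length) command model are assumptions for illustration; the C4/C6 addresses below are hypothetical values chosen to fall in one cache line, mirroring the FIG. 3 example.

```python
CACHE_LINE = 64  # illustrative access-unit size in bytes

def regions(addr, length, unit=CACHE_LINE):
    """Return the set of cache-line-sized regions an access touches."""
    first, last = addr // unit, (addr + length - 1) // unit
    return set(range(first, last + 1))

def depends(new_cmd, pending_cmd):
    """A new command depends on a pending one when the two touch at
    least one common region (sector/line/access unit)."""
    return bool(regions(*new_cmd) & regions(*pending_cmd))

# Example: C4 and C6 reference different addresses that fall within
# the same cache line, so C6 is recorded as dependent on C4.
C4 = (0x1000, 16)
C6 = (0x1010, 16)
assert depends(C6, C4)
```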
At operation 208, the processing device determines whether the newly received memory command 312 is a read memory command 312 received from the host system 120. As described above, newly received memory commands 312 may be of various types and from various origins. For example, the newly received memory command 312 may be a read memory command or a write memory command. Additionally, the newly received memory command 312 may be received from host system 120 or may be internally generated by memory subsystem 110 (e.g., as part of a fill, eviction, or purge operation). In response to the processing device determining that the newly received memory command 312 is not a read memory command 312 from host system 120 (e.g., it is a write memory command 312 or an internally generated read memory command 312), the method 200 proceeds to operation 210.

At operation 210, the processing device sets the priority of the newly received memory command 312 to low priority. Specifically, at operation 210, the processing device sets the priority indication 314 of the newly received memory command 312 in the cache controller command queue 310 to low priority. For example, FIG. 3 shows the newly received memory command C7 in the case where memory command C7 is not a read memory command 312 from the host system 120 and the processing device consequently sets the priority of memory command C7 to low priority at operation 210 (i.e., the processing device sets the priority indication 314 of memory command C7 to low priority (L)). As will be described below, this low priority may later change based on a dependency of a subsequently received high priority memory command 312 on memory command C7.

Returning to operation 208, in response to the processing device determining that the newly received memory command 312 is a read memory command 312 from the host system 120, the method 200 proceeds to operation 212. At operation 212, the processing device sets the priority of the newly received memory command 312 to high priority. Specifically, at operation 212, the processing device sets the priority indication 314 of the newly received memory command 312 in the cache controller command queue 310 to high priority (H). This high priority helps ensure that read memory commands 312 from host system 120 generally take precedence over lower priority memory commands 312, reducing the latency of fulfilling read memory commands 312 from host system 120 and thereby improving performance relative to the host system 120.

At operation 214, the processing device also sets the pending/outstanding memory commands 312 in the cache controller command queue 310 upon which the newly received memory command 312 (set to high priority at operation 212) depends to high priority. Specifically, at operation 214, the processing device (e.g., high priority enable logic 328) sets the priority indications 314 of the pending memory commands 312 in the cache controller command queue 310 upon which the newly received memory command 312 depends to high priority (H). For example, when the newly received memory command 312 is memory command C7, and memory command C7 is a read memory command 312 from the host system 120 such that the processing device sets the priority indication 314 of memory command C7 to high priority at operation 212, the processing device at operation 214 also sets the priority indications 314 of memory commands C4 and C6 to high priority, as shown in FIG. 4.
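Continuing the sketch begun above, operations 208-214 reduce to a short decision function; `dependency_tracker.deps` is a hypothetical stand-in for the dependency tracker 316.

```python
def set_priority(entry, dependency_tracker):
    """Assign a priority to a newly received command (operations 208-214).

    dependency_tracker.deps(entry) is assumed to return the pending
    commands the new entry depends on.
    """
    if entry.is_read and entry.from_host:
        entry.priority = "H"                  # operation 212: host read
        for dep in dependency_tracker.deps(entry):
            dep.priority = "H"                # operation 214: promote deps
    else:
        entry.priority = "L"                  # operation 210: everything else
```

Applied to the example of FIG. 4, receiving C7 as a host read promotes C4 and C6 to high priority while C5 remains low priority.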
After operation 214, after operation 210, or after the processing device determines at operation 202 that no memory command 312 has been received, the method 200 proceeds to operation 216. At operation 216, the processing device determines whether a triggering event has occurred that indicates a memory command 312 should be issued from the cache controller 142 to the DRAM controller 144. For example, the triggering event may be the elapse of a period of time, such that cache controller 142 issues memory commands 312 to DRAM controller 144 at prescribed intervals. In another embodiment, the triggering event may be the state of the DRAM controller command queue 318, which stores memory commands 312 after they are issued from the cache controller 142 but before the memory commands 312 are fulfilled (i.e., before the command selection with read priority logic 324 reads data from or writes data to cache storage 302 according to the corresponding memory commands 312). For example, detecting a triggering event may include the processing device detecting that the DRAM controller command queue 318 has an entry/space available for an additional memory command 312.

Although operation 216 is described above as occurring after or as a result of the completion of one of operations 202, 210, and 214, operation 216 and one or more of the follow-up operations 218-226 may be performed independently of operations 202-214 (including operations 202, 210, and/or 214). In this manner, operations 202-214 and operations 216-226 are performed independently, including in time periods that may at least partially overlap, as will be described further below.

In response to the processing device determining at operation 216 that a triggering event has not occurred, the method 200 returns to operation 202 to determine whether a memory command 312 has been received. Conversely, in response to the processing device determining at operation 216 that a triggering event has occurred, the method 200 proceeds to operation 218.

At operation 218, the processing device determines whether the last memory command or commands 312 issued by cache controller 142 to DRAM controller 144 were high priority memory commands 312. For example, cache controller 142 may store and maintain a priority indication 314 in conjunction with each of one or more memory commands 312 issued by cache controller 142 to DRAM controller 144. For instance, a priority indication memory may contain a single bit that is set to one value/state (e.g., set, or the value "1") to indicate that the last memory command 312 issued by cache controller 142 to DRAM controller 144 was a high priority memory command 312, or set to another value/state (e.g., not set, or the value "0") to indicate that the last memory command 312 issued by cache controller 142 to DRAM controller 144 was a low priority memory command 312. This check allows the processing device to alternate between high priority and low priority memory commands 312 as appropriate. In response to the processing device determining that the last memory command 312 issued by the cache controller 142 to the DRAM controller 144 was a high priority memory command 312, the method 200 proceeds to operation 220.

At operation 220, the processing device determines whether the number of low priority memory commands 312 in the DRAM controller command queue 318 meets a threshold (e.g., equals the maximum number of outstanding low priority memory commands 320). Specifically, the maximum number of outstanding low priority commands 320 indicates the maximum number of low priority memory commands 312 allowed in the DRAM controller command queue 318 at any point in time. As shown in FIGS. 3 and 4, DRAM controller command queue 318 is a queue similar to cache controller command queue 310. Specifically, DRAM controller command queue 318 contains memory commands 312 and priority indications 314 stored in the order in which the memory commands 312 were received by cache controller 142. As will be described below, when a memory command 312 is issued to the DRAM controller 144 by the prioritized command selection logic 322 of the cache controller 142, the memory command 312 is moved from the cache controller command queue 310 to the DRAM controller command queue 318 along with its associated priority indication 314. Using the maximum number of outstanding low priority memory commands 320 as a threshold ensures that, despite a large number of earlier low priority memory commands 312, a minimum number of high priority memory commands 312 can enter or otherwise be present in the DRAM controller command queue 318. For example, based on the example shown in FIG. 3 or the example shown in FIG. 4, when the maximum number of outstanding low priority memory commands 320 is set to three, the processing device determines at operation 220 that the number of low priority memory commands 312 in the DRAM controller command queue 318 is not equal to the maximum number of outstanding low priority memory commands 320.
However, when the maximum number of outstanding low priority memory commands 320 is set to two, the processing device determines at operation 220 that the number of low priority memory commands 312 in the DRAM controller command queue 318 is equal to the maximum number of outstanding low priority memory commands 320.

In response to the processing device determining that the number of low priority memory commands 312 in the DRAM controller command queue 318 does not meet the threshold (e.g., is not equal to the maximum number of outstanding low priority commands 320), method 200 proceeds to operation 222. At operation 222, the processing device issues the oldest low priority memory command 312 from the cache controller command queue 310 of the cache controller 142 to the DRAM controller command queue 318 of the DRAM controller 144. For example, in the example of FIG. 4, at operation 222 the prioritized command selection logic 322 issues memory command C5 to the DRAM controller command queue 318, as shown in FIG. 5.

In response to the processing device determining that (1) the number of low priority memory commands 312 in the DRAM controller command queue 318 meets the threshold (e.g., is equal to the maximum number of outstanding low priority commands 320), or (2) the last memory command 312 issued by the cache controller 142 to the DRAM controller 144 was not a high priority memory command 312 (i.e., the last memory command 312 issued by the cache controller 142 to the DRAM controller 144 was a low priority memory command 312), method 200 proceeds to operation 224. At operation 224, the processing device issues the oldest high priority memory command 312 from the cache controller command queue 310 of the cache controller 142 to the DRAM controller command queue 318 of the DRAM controller 144. For example, in the example of FIG. 4, at operation 224 the prioritized command selection logic 322 issues memory command C4 to the DRAM controller command queue 318, as shown in FIG. 6.
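The alternation of operations 218-224, together with the cap on outstanding low priority commands, might be summarized as follows, continuing the earlier sketch; `max_low` models the maximum number of outstanding low priority memory commands 320, and all names are hypothetical.

```python
def select_for_issue(cc_queue, dram_queue, last_was_high, max_low):
    """Pick the next command to issue from the cache controller command
    queue to the DRAM controller command queue (operations 218-224)."""
    low_outstanding = sum(1 for e in dram_queue if e.priority == "L")
    if last_was_high and low_outstanding < max_low:
        candidate = cc_queue.oldest("L")   # operation 222: oldest low priority
    else:
        candidate = cc_queue.oldest("H")   # operation 224: oldest high priority
    return candidate  # None if no pending command has the chosen priority
```

The cap matters in the branch condition: even after a high priority issue, a low priority command is only taken when doing so would not fill the DRAM controller command queue with low priority work.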
After issuing the low priority memory command 312 at operation 222 or the high priority memory command 312 at operation 224, the method 200 proceeds to operation 226. At operation 226, the processing device issues/fulfills memory commands 312 from the DRAM controller command queue 318, with high priority memory commands 312 prioritized. Specifically, the command selection with read priority logic 324 issues memory commands 312 from the DRAM controller command queue 318, prioritizing those memory commands 312 whose priority indications 314 in the DRAM controller command queue 318 indicate high priority. Issuing a command from the DRAM controller command queue 318 may include (1) reading data from cache storage 302 so that the data may be returned to the requester of the corresponding read memory command 312 (e.g., host system 120 or DMA eviction/purge engine 306), or (2) writing data to cache storage 302 to fulfill a write memory command 312 (e.g., a write memory command 312 from host system 120 or DMA fill engine 304). In one embodiment, read priority enable logic 326 selectively enables and disables read priority in the command selection with read priority logic 324. Specifically, read priority enable logic 326 may enable the command selection with read priority logic 324 to prioritize high priority memory commands 312 over low priority memory commands 312 in DRAM controller 144, or may disable this feature, causing the command selection with read priority logic 324 to issue memory commands 312 in the order in which the memory commands 312 were received from cache controller 142 or according to the total age of the memory commands 312 (i.e., the order in which the memory commands 312 were received by the cache controller 142). In one embodiment, read priority enable logic 326 is selectively enabled on a per-received-memory-command 312 basis. For example, cache controller 142 may set one or more bits/fields in each memory command 312 passed to DRAM controller 144 to indicate whether read priority enable logic 326 is enabled or disabled for that memory command 312.
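On the DRAM controller side, operation 226 with the read priority enable logic might look like the following sketch. The single boolean flag is a simplification of the per-command bits/fields described above, and the names are hypothetical.

```python
def dram_fulfill_next(dram_queue, read_priority_enabled):
    """Operation 226: with read priority enabled, fulfill the oldest
    high priority command first; with it disabled, fall back to pure
    receipt/age order."""
    if read_priority_enabled:
        for e in dram_queue:            # queue is kept in receipt order
            if e.priority == "H":
                return e
    return dram_queue[0] if dram_queue else None
```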
The memory request strategy outlined above reduces the latency of read memory commands 312 originating from host system 120 by (1) issuing earlier the read memory commands 312 originating from host system 120 and the memory commands 312 on which these host read memory commands 312 depend, rather than issuing them based on age, (2) using the maximum low priority value (i.e., the maximum number of low priority memory requests that may be outstanding in the DRAM controller 144) to ensure that the DRAM controller 144 always has availability for high priority memory commands 312 (e.g., read memory commands 312 from host system 120), and (3) selectively enabling priority logic in DRAM controller 144 via the high priority flag. As described above, this design can provide up to a 77.5% improvement in the average latency of read memory commands 312 originating from host system 120.

Turning to FIG. 7, this figure shows a flowchart of an example method 700 for managing the issuance and fulfillment of memory commands in accordance with other embodiments of the present disclosure. Method 700 can be performed by processing logic that can include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, method 700 is performed by cache controller 142 and/or DRAM controller 144 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes/operations can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes/operations can be performed in a different order, and some processes/operations can be performed in parallel. Additionally, one or more processes/operations can be omitted in various embodiments. Thus, not all processes/operations are required in every embodiment. Other processes are also possible.

As shown in FIG. 7, method 700 may begin at operation 702, where a processing device receives a memory command 312 corresponding to a set of memory devices 130/140 of memory subsystem 110. For example, cache controller 142 may receive memory command C7 corresponding to backing store 308 (e.g., memory devices 130/140) of memory subsystem 110. The memory command 312 may be an internally generated memory command (e.g., a memory command 312 generated by memory subsystem 110) or an externally generated command (e.g., a memory command 312 received from host system 120). Thus, memory commands 312 may have different origins, including host system 120 and memory subsystem 110. Additionally, memory commands 312 may be of various types, including a read memory command type and a write memory command type.

At operation 704, the processing device adds the memory command 312 received at operation 702 (i.e., the received memory command 312) to the cache controller command queue 310 such that the cache controller command queue 310 stores a set of memory commands 312 that includes the received memory command 312. For example, cache controller 142 may add memory command C7 to the cache controller command queue 310 such that the cache controller command queue 310 stores commands C4, C5, C6, and C7, as shown in FIG. 3.

At operation 706, the processing device sets the priority of the received memory command 312 in the cache controller command queue 310 to high priority or low priority based on (1) whether the received memory command 312 is of the read memory command type or the write memory command type and (2) the origin of the received memory command 312. For example, when the received memory command 312 is a read memory command 312 and originates from the host system 120, the cache controller 142 may set the priority indication 314 of the received memory command 312 to high priority (H) at operation 706.

FIG. 8 illustrates an example machine of a computer system 800 within which a set of instructions can be executed for causing the machine to perform any one or more of the methodologies discussed herein. In some embodiments, computer system 800 can correspond to a host system (e.g., host system 120 of FIG. 1) that includes, is coupled to, or utilizes a memory subsystem (e.g., memory subsystem 110 of FIG. 1), or it can be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to cache controller 142 and/or DRAM controller 144 of FIG. 1). In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or client machine in a cloud computing infrastructure or environment.

The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

The example computer system 800 includes a processing device 802, a main memory 804 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 806 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system 818, which communicate with each other via a bus 830.

Processing device 802 represents one or more general-purpose processing devices, such as a microprocessor, a central processing unit, or the like.
More specifically, the processing device can be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. Processing device 802 can also be one or more special-purpose processing devices, such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. Processing device 802 is configured to execute instructions 826 for performing the operations and steps discussed herein. Computer system 800 can further include a network interface device 808 to communicate over a network 820.

Data storage system 818 can include a machine-readable storage medium 824 (also known as a computer-readable medium) on which is stored one or more sets of instructions 826 or software embodying any one or more of the methodologies or functions described herein. The instructions 826 can also reside, completely or at least partially, within main memory 804 and/or within processing device 802 during execution thereof by computer system 800, the main memory 804 and the processing device 802 also constituting machine-readable storage media. The machine-readable storage medium 824, data storage system 818, and/or main memory 804 can correspond to memory subsystem 110 of FIG. 1.

In one embodiment, instructions 826 include instructions to implement functionality corresponding to a cache controller and/or a DRAM controller (e.g., cache controller 142 and/or DRAM controller 144 of FIG. 1). While the machine-readable storage medium 824 is shown in an example embodiment to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term "machine-readable storage medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term "machine-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.

Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.
The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems.

The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. For example, a computer system or other data processing system, such as controller 115, can carry out the computer-implemented methods 200 and 700 in response to its processor executing a computer program (e.g., a sequence of instructions) contained in a memory or other non-transitory machine-readable storage medium. Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.

The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the present disclosure as described herein.

The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., computer) readable storage medium such as a read only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory components, etc.

In the foregoing specification, embodiments of the present disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. |
One embodiment provides an apparatus. The apparatus includes a single instruction multiple data (SIMD) hash module configured to apportion at least a first portion of a message of length L to a number (S) of segments, the message including a plurality of sequences of data elements, each sequence including S data elements, a respective data element in each sequence apportioned to a respective segment, each segment including a number N of blocks of data elements, and to hash the S segments in parallel, resulting in S hash digests, the S hash digests based, at least in part, on an initial value, and to store the S hash digests; a padding module configured to pad a remainder, the remainder corresponding to a second portion of the message, the second portion related to the length L of the message, the number of segments and a block size; and a non-SIMD hash module configured to hash the padded remainder, resulting in an additional hash digest, and to store the additional hash digest. |
CLAIMSWhat is claimed is:1. An apparatus, comprising:a single instruction multiple data (SIMD) hash module configured to apportion at least a first portion of a message of length L to a number (S) of segments, the message comprising a plurality of sequences of data elements, each sequence comprising S data elements, a respective data element in each sequence apportioned to a respective segment, each segment comprising a number N of blocks of data elements and to hash the S segments in parallel, resulting in S hash digests, the S hash digests based, at least in part, on an initial value; the SIMD hash module is further configured to store the S hash digests;a padding module configured to pad a remainder, the remainder corresponding to a second portion of the message, the second portion related to the length L of the message, the number of segments and a block size; anda non-SIMD hash module configured to hash the padded remainder, resulting in an additional hash digest and to store the additional hash digest.2. The apparatus of claim 1, wherein the padding comprises a representation of a length parameter and the length parameter corresponds to a length of the remainder or to the length of the message L.3. The apparatus of claim 1 or 2, further comprising:a hash-based message authentication code (HMAC) module, configured to determine at least one initial value based, at least in part, on a first cryptographic key and to generate a message authentication code (MAC) based, at least in part, on at least the S hash digests.4. The apparatus of claim 3, wherein the HMAC module is further configured to determine one initial value and the S hash digests are based, at least in part, on the one initial value or to determine at least S initial values, and each hash digest is based, at least in part, on a respective initial value.5. The apparatus of claim 3, wherein the HMAC module is configured to hash an ordered set comprising at least the S hash digests, the hashing resulting in an intermediate hash digest, the intermediate hash digest based, at least in part, on at least one of the first initial value and a second initial value, the second initial value related to a second cryptographic key.6. The apparatus of claim 5, wherein the HMAC module is further configured to hash the intermediate hash digest, based, at least in part, on a third initial value related to a third cryptographic key.7. 
A computing device, comprising:a processor comprising at least one single instruction multiple data (SIMD) register, each SIMD register configured to hold a plurality of data elements;memory comprising a data buffer, the data buffer configured to store a message of lengthL;an SIMD hash module configured to apportion at least a first portion of the message to a number (S) of segments, the message comprising a plurality of sequences of data elements, each sequence comprising S data elements, a respective data element in each sequence apportioned to a respective segment, each segment comprising a number N of blocks of data elements and to hash the S segments in parallel using the at least one SIMD register, resulting in S hash digests, the S hash digests based, at least in part, on an initial value; the SIMD hash module is further configured to store the S hash digests in memory;a padding module configured to pad a remainder, the remainder corresponding to a second portion of the message, the second portion related to the length L of the message, the number of segments and a block size; anda non-SIMD hash module configured to hash the padded remainder, resulting in an additional hash digest and to store the additional hash digest in memory.8. The computing device of claim 7, wherein the padding comprises a representation of a length parameter and the length parameter corresponds to a length of the remainder or to the length of the message L. 9. The computing device of claim 7 or 8, further comprising:a hash-based message authentication code (HMAC) module, configured to determine at least one initial value based, at least in part, on a first cryptographic key and to generate a message authentication code (MAC) based, at least in part, on at least the S hash digests.10. The computing device of claim 9, wherein the HMAC module is further configured to determine one initial value and the S hash digests are based, at least in part, on the one initial value or to determine at least S initial values, and each hash digest is based, at least in part, on a respective initial value.11. The computing device of claim 9, wherein the HMAC module is configured to hash an ordered set comprising at least the S hash digests, the hashing resulting in an intermediate hash digest, the intermediate hash digest based, at least in part, on at least one of the first initial value and a second initial value, the second initial value related to a second cryptographic key.12. The computing device of claim 11, wherein the HMAC module is further configured to hash the intermediate hash digest, based, at least in part, on a third initial value related to a third cryptographic key. 13. 
A method, comprising:apportioning, by a single instruction multiple data (SIMD) hash module, at least a first portion of a message of length L to a number (S) of segments, the message comprising a plurality of sequences of data elements, each sequence comprising S data elements, a respective data element in each sequence apportioned to a respective segment, each segment comprising a number N of blocks of data elements;hashing, by the SIMD hash module, the S segments in parallel, resulting in S hash digests, the S hash digests based, at least in part, on an initial value;storing, by the SIMD hash module, the S hash digests;padding, by a padding module, a remainder, the remainder corresponding to a second portion of the message, the second portion related to the length L of the message, the number of segments and a block size;hashing, by a non-SIMD hash module, the padded remainder, resulting in an additional hash digest; andstoring, by the non-SIMD hash module, the additional hash digest.14. The method of claim 13, wherein the padding comprises a representation of a length parameter and the length parameter corresponds to a length of the remainder or to the length of the message L.15. The method of claim 13 or 14, further comprising:determining, by a hash-based message authentication code (HMAC) module, at least one initial value based, at least in part, on a first cryptographic key; andgenerating, by the HMAC module, a message authentication code (MAC) based, at least in part, on at least the S hash digests.16. The method of claim 15, wherein determining the at least one initial value comprises determining, by the HMAC module, one initial value and the S hash digests are based, at least in part, on the one initial value or determining, by the HMAC module, at least S initial values, and each hash digest is based, at least in part, on a respective initial value.17. The method of claim 15, wherein generating the MAC comprises hashing, by the HMAC module, an ordered set comprising at least the S hash digests, the hashing resulting in an intermediate hash digest, the intermediate hash digest based, at least in part, on at least one of the first initial value and a second initial value, the second initial value related to a second cryptographic key.18. The method of claim 17, wherein generating the MAC further comprises hashing, by the HMAC module, the intermediate hash digest, based, at least in part, on a third initial value related to a third cryptographic key.19. A system comprising, one or more storage devices having stored thereon, individually or in combination, instructions that when executed by one or more processors result in the following operations comprising:apportioning at least a first portion of a message of length L to a number (S) of segments, the message comprising a plurality of sequences of data elements, each sequence comprising S data elements, a respective data element in each sequence apportioned to a respective segment, each segment comprising a number N of blocks of data elements;hashing the S segments in parallel, resulting in S hash digests, the S hash digests based, at least in part, on an initial value;storing the S hash digests;padding a remainder, the remainder corresponding to a second portion of the message, the second portion related to the length L of the message, the number of segments and a block size; hashing the padded remainder, resulting in an additional hash digest; and storing the additional hash digest.20. 
The system of claim 19, wherein the padding comprises a representation of a length parameter and the length parameter corresponds to a length of the remainder or to the length of the message L.
21. The system of claim 19 or 20, wherein the instructions that when executed by one or more processors result in the following additional operations comprising: determining at least one initial value based, at least in part, on a first cryptographic key; and generating a message authentication code (MAC) based, at least in part, on at least the S hash digests.
22. The system of claim 21, wherein determining the at least one initial value comprises determining one initial value, wherein the S hash digests are based, at least in part, on the one initial value, or determining at least S initial values, wherein each hash digest is based, at least in part, on a respective initial value.
23. The system of claim 21, wherein generating the MAC comprises hashing an ordered set comprising at least the S hash digests, the hashing resulting in an intermediate hash digest, the intermediate hash digest based, at least in part, on at least one of the first initial value and a second initial value, the second initial value related to a second cryptographic key.
24. The system of claim 23, wherein generating the MAC further comprises hashing the intermediate hash digest, based, at least in part, on a third initial value related to a third cryptographic key.
25. A device comprising means to perform the method of any one of claims 13 to 18.
GENERATING MULTIPLE SECURE HASHES FROM A SINGLE DATA BUFFER
FIELD
The present disclosure relates to hashes, and, more particularly, to generating multiple secure hashes from a single data buffer.
BACKGROUND
Cryptographic hashing algorithms typically function in a chained, dependent fashion on a single buffer of data (e.g., a message). The buffer of data is divided into blocks, with the size of the blocks defined by a hashing algorithm, e.g., SHA-1, SHA-256, etc. The blocks are then processed serially according to a standard specification. The output of processing each block is a hash digest that is used as input (i.e., an initial digest) to the processing of the subsequent block of data in the buffer, hence the serial chaining constraint. Blocks are processed in this manner until each block of the data buffer has been processed. The computational intensity associated with executing a hashing algorithm, combined with the serial nature of the processing, results in a relatively long time for generating cryptographic hashes.
Cryptographic hashes may be utilized, for example, for secure loading of files when the files need to be authenticated. Such secure loading may occur, for example, during boot sequences or soon after boot sequences when an operating system may be running, but in a limited form, e.g., a single thread. The serial chaining constraint when determining cryptographic hashes can thus result in an unacceptably long duration associated with authenticating such files during the boot sequences.
Single instruction multiple data (SIMD) techniques may be used to accelerate, e.g., SHA-256 hashing, but such techniques typically utilize multiple independent data buffers that may require a multi-threaded environment. Such a multi-threaded environment may then require producer-consumer queues and/or significant re-architecting of the application and processing flow that is generally infeasible during, for example, the boot sequence.
BRIEF DESCRIPTION OF DRAWINGS
Features and advantages of the claimed subject matter will be apparent from the following detailed description of embodiments consistent therewith, which description should be considered with reference to the accompanying drawings, wherein:
FIG. 1 illustrates a functional block diagram of a computing device consistent with various embodiments of the present disclosure;
FIG. 2 illustrates apportioning a plurality of sequences of data elements to S segments;
FIG. 3 illustrates segment digests that may result from hashing the segments of apportioned data elements, in parallel and according to SHA-256;
FIG. 4 is a flowchart of hashing operations according to various embodiments of the present disclosure;
FIG. 5 is a flowchart of hash-based message authentication code (HMAC) operations according to various embodiments of the present disclosure; and
FIG. 6 is a flowchart of one example of hash-based message authentication code (HMAC) operations according to one embodiment of the present disclosure.
Although the following Detailed Description will proceed with reference being made to illustrative embodiments, many alternatives, modifications, and variations thereof will be apparent to those skilled in the art.
DETAILED DESCRIPTION
Generally, this disclosure relates to hash and message authentication code methods (and systems) configured to determine hash digests and/or hash-based message authentication codes (HMACs) using a single data buffer and exploiting an SIMD (single instruction-multiple data) architecture of a computing device.
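For contrast with the parallel approach described below, the serial chaining constraint of conventional block hashing can be illustrated with a minimal C sketch. The sketch assumes a hypothetical compress() routine standing in for a single-block compression step (e.g., the SHA-256 compression function); it is illustrative only, not any particular library's API.

    #include <stddef.h>
    #include <stdint.h>

    #define BLOCK_SIZE 64  /* e.g., SHA-256 block size in bytes */

    /* Hypothetical single-block compression step (assumed, not shown). */
    void compress(uint32_t digest[8], const uint8_t block[BLOCK_SIZE]);

    /* Conventional serial hashing: each block's output digest is the
       input (initial digest) for the next block, so the blocks cannot
       be processed in parallel. */
    void serial_hash(uint32_t digest[8], const uint8_t *buf, size_t nblocks)
    {
        for (size_t i = 0; i < nblocks; i++)
            compress(digest, buf + i * BLOCK_SIZE);
    }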
As used herein, "hash digest" includes a segment digest, a message digest and/or another hash digest. A segment digest corresponds to a hash digest of a segment. A message digest corresponds to a hash digest of a message. The message digest may be determined based, at least in part, on a plurality of segment digests. A plurality of segment digests may be calculated in parallel. In order to benefit from the reduced processing time associated with calculating the segment digests in parallel, the plurality of segment digests may be stored and/or may be utilized for further processing. The plurality of segment digests may then be utilized to, for example, generate an HMAC on a message stored in the data buffer. As used herein, a "message" is a sequence of bytes of data of arbitrary length that may correspond to one or more of data, commands, text, an application, a received bit stream, etc. A hash digest of a message (i.e., a message digest) is a fixed-length bit string that uniquely represents the message. A message authentication code (MAC) is an authentication tag, generated using the message and a symmetric cryptographic key, configured to allow authentication of a message. The MAC is typically appended to the message, and a recipient may then authenticate the message by determining whether a recipient-generated MAC, determined using the received message and symmetric key, matches the MAC received with the message. An HMAC is generated using the message and one or more cryptographic keys. In an HMAC, the cryptographic key(s) are hashed along with the message, and the resulting message digests may again be hashed with a same or different cryptographic key. Hashing a plurality of times using one or more cryptographic keys is configured to enhance the security of the HMAC.
The systems and methods are configured to apportion at least a portion of a message to a number of segments. The message may include a plurality of sequences of data elements. A data element size, i.e., a number of bits in a data element, may be related to a specific hash algorithm, e.g., the SHA-256 word size, and/or a processor architecture, e.g., the width of an SIMD register. The sequences of data elements are apportioned so that data elements are interleaved with an interleave period related to the number of segments, S. The number of segments may be determined based, at least in part, on the width of SIMD registers associated with the processor and/or processor core that is configured to determine the message digest. The number of segments S may be further based, at least in part, on a word size of a particular hashing (and/or HMAC) algorithm. The segments may then be processed, in parallel, using SIMD functionality, to generate a respective segment digest for each segment.
Hashing algorithms include, but are not limited to, MD5 and/or the SHA family (e.g., SHA-1, SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224 and SHA-512/256). An MD5 algorithm may comply or be compatible with Request for Comments (RFC) 1321, published by the Internet Engineering Task Force (IETF), titled "The MD5 Message-Digest Algorithm", published April 1992, and/or later versions of this specification. An SHA algorithm may comply or be compatible with the Federal Information Processing Standards (FIPS) Publication 180-4, published by the National Institute of Standards and Technology (NIST), Information Technology Laboratory (ITL), titled "Secure Hash Standard (SHS)", published in March, 2012 and/or later versions of this standard.
The HMAC protocol may comply or be compatible with the FIPS Publication 198, published by NIST ITL, titled "The Keyed-Hash Message Authentication Code (HMAC)", published March 6, 2002 and/or later versions of this standard, for example, FIPS Publication 198-1, published July 2008. Of course, in other embodiments, the secure hash protocol and/or HMAC protocol may include custom and/or proprietary secure hash and/or HMAC protocols.
For processing, each segment may be divided into blocks, with each set of blocks processed in parallel. A size of the blocks may be determined based on the particular cryptographic algorithm being used, e.g., SHA-256, block size 64 bytes. Messages may typically be of arbitrary length. Thus, a message length (and associated data buffer size) may or may not be a whole number multiple of the number of segments multiplied by the block size. The message may include a remainder (i.e., R bytes) that includes data bits and/or data elements that do not evenly fit into the defined segment blocks. The remainder may be processed, as described herein, to yield an additional message digest related to the remainder.
Thus, for a message (and associated data buffer) of length L that includes N segment blocks (a segment block corresponding to the number of segments S multiplied by the block size B associated with a hashing algorithm) plus a remainder of R bytes in the data buffer that are not included in the segment blocks, S+1 hash digests may be determined for the message. The S+1 hash digests may then be stored. Generating and storing the S+1 hash digests is configured to trade off storage for speed. In other words, while the S+1 hash digests could themselves be hashed into a single message digest, the additional processing operations may add to the overall processing time, thereby reducing operational speed. While storing the S+1 hash digests may consume more storage, since the hash digests are bit-limited, e.g., 128 bits, the additional storage used may be relatively minor with respect to the savings in processing time achieved by avoiding hashing the S+1 hash digests to produce a single message digest.
FIG. 1 illustrates a functional block diagram of a computing device 100 consistent with various embodiments of the present disclosure. Computing device 100 generally includes a processor 110, a network controller 115 and a memory 120. The computing device 100 may include, but is not limited to, a mobile computing device (e.g., a laptop computer, a notebook computer, a tablet computer, a feature phone, a smartphone), a desktop computer, a server and/or another computing device that may be configured to authenticate a message and/or data. The network controller 115 is configured to manage communication between computing device 100 and a network and/or other computing devices.
The processor 110 includes one or more core(s) 112a,..., 112m, one or more general purpose register(s) 114a,..., 114n, one or more SIMD register(s) 116a,..., 116p, and/or one or more other register(s) 118a,..., 118q. Each core, e.g., core 112a, is configured to perform operations associated with a power up sequence of the computing device 100, padding a message and/or generating one or more hash digest(s) and/or one or more message authentication code(s), as described herein. Each general purpose register, e.g., general purpose register 114a, is configured to support, e.g., basic integer arithmetic and/or bit and byte string operations.
For example, general purpose register 114a may be configured to hold operands for logical and arithmetic operations, operands for address calculations and/or memory pointers. Other register(s) 118a,..., 118q may include, e.g., a flag register and/or a program counter. Each SIMD register, e.g., SIMD register 116a, is configured to support execution of single instruction-multiple data operations on integers and/or floating point values. For example, SIMD registers 116a,..., 116p may include one or more 64-bit MMX registers, one or more 128-bit XMM registers and/or one or more 256-bit YMM registers.
Each SIMD register 116a,..., 116p has a corresponding register size and is configured to hold at least one data element in a respective execution element. A size of an execution element may correspond to a data element type, i.e., a byte, a word, a double word ("dword") or a quad word. For example, a 128-bit register may include sixteen 8-bit execution elements, each corresponding to a data element type of byte; eight 16-bit execution elements, each corresponding to a data type of word; four 32-bit execution elements, each corresponding to a data type of dword; two 64-bit execution elements, corresponding to two data elements of data type quad word; or one 128-bit execution element, corresponding to one data element of data type double quad word. Thus, a single SIMD instruction may be configured to operate on at least one data element held in a respective execution element. For example, for a single SIMD instruction configured to operate on four 32-bit execution elements, a corresponding 128-bit SIMD register may hold four 32-bit dwords (each dword corresponding to an execution element). In this example, the four 32-bit dwords may then be operated on in parallel by the single SIMD instruction.
Memory 120 includes an operating system OS 121 configured to manage operations of computing device 100. OS 121 may be configured to operate in a limited mode upon, e.g., power up of the computing device 100 and/or during resume operations. During power up and/or resume operations, OS 121 may be configured to authenticate one or more modules prior to allowing the module to operate on computing device 100. Memory 120 includes a data buffer 122 configured to store a message 123. A size of the data buffer 122, e.g., L bytes, corresponds to the size of the stored message. Memory 120 further includes an SIMD hash module 124, a nonSIMD hash module 125, an HMAC module 126, a padding module 128 and storage for a plurality of hash digests 130a,..., 130s. Memory 120 may include secure storage 132 configured to store one or more cryptographic key(s) 134a,..., 134r. The cryptographic keys may be associated with authenticating a message (e.g., a module prior to allowing the module to operate).
SIMD hash module 124 is configured to retrieve one or more blocks of message 123 from data buffer 122, to hash the retrieved blocks to generate a plurality of segment digests and to store the plurality of segment digests in memory 120, e.g., hash digests 130a,..., 130s. In some embodiments, the plurality of hash digests 130a,..., 130s may be utilized by HMAC module 126, as described herein. SIMD hash module 124 is configured to hash the retrieved blocks of message 123 according to a selected hash algorithm.
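As a concrete illustration of the execution-element layout described above (a minimal sketch, assuming an x86 target with SSE2 and the standard <immintrin.h> intrinsics; not part of the disclosed modules), a single instruction can operate on four 32-bit dwords held in one 128-bit register:

    #include <immintrin.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* One 128-bit SIMD register holding four 32-bit execution elements. */
        __m128i a = _mm_set_epi32(4, 3, 2, 1);
        __m128i b = _mm_set_epi32(40, 30, 20, 10);

        /* A single SIMD instruction operates on all four dwords in parallel. */
        __m128i sum = _mm_add_epi32(a, b);

        uint32_t out[4];
        _mm_storeu_si128((__m128i *)out, sum);
        printf("%u %u %u %u\n", out[0], out[1], out[2], out[3]); /* 11 22 33 44 */
        return 0;
    }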
Each hash algorithm may include associated predefined hash parameters related to properties and/or operation of the hash algorithm. Predefined hash parameters may include a maximum message size, a block size, B, an algorithmic word size, w, and a hash digest size. For example, for SHA-256, the maximum message size is 2^64 - 1 bits, the block size is 512 bits (i.e., 64 bytes), the algorithmic word size is 32 bits (corresponding to one dword) and the hash digest size is 256 bits (e.g., corresponding to 8 dwords).
SIMD hash module 124 is configured to apportion a plurality of sequences of data elements to a number of segments, S. The number of segments, S, is related to an SIMD register size and the algorithmic word size associated with a selected hashing algorithm. The number of segments, S, may correspond to the SIMD register size divided by the algorithmic word size. The number of segments may also correspond to a number of execution elements in a register, e.g., SIMD register 116a. Similarly, a data element size, dw, corresponds to the algorithmic word size. For example, SHA-256 is configured to process 32-bit (i.e., w = 32) algorithmic words. A 32-bit execution element for an SIMD register corresponds to a dword data type. Thus, a 256-bit SIMD register utilized by SIMD hash module 124 configured to implement SHA-256 may be configured with eight 32-bit execution elements. Message 123, of length L bytes, configured to be hashed by SIMD hash module 124 using the SHA-256 algorithm, may then include a plurality of sequences of 32-bit data elements (dw = 32 bits). The number of segments, corresponding to the number of execution elements, may then be eight.
Each sequence of data elements may then include S data elements. The plurality of sequences of data elements may then be apportioned to the S segments by apportioning a respective data element in each sequence to a respective segment. For example, message 123 may include M sequences of S data elements (e.g., m0(d0, d1, ..., d(S-1)), m1(d0, d1, ..., d(S-1)), ..., m(M-1)(d0, d1, ..., d(S-1))), where mi represents sequence number i and dj represents data element j in an associated sequence, i.e., sequence number i. The S segments may be written as S0, S1, ..., S(S-1). Apportioning may then result in segment S0 including m0(d0), m1(d0), ..., m(M-1)(d0); segment S1 including m0(d1), m1(d1), ..., m(M-1)(d1); and so on until segment S(S-1), which includes m0(d(S-1)), m1(d(S-1)), ..., m(M-1)(d(S-1)). Apportioning in this manner is configured to avoid transposing operations and to thereby improve efficiency and reduce processing time.
FIG. 2 illustrates apportioning a plurality of sequences of data elements to S segments. Each box in FIG. 2 represents a data element and the number in each box represents a segment number. The segment numbers range from zero to seven, corresponding to eight-segment sequences. For example, a first sequence of data elements includes eight data elements, e.g., data element 202, data element 204, and so on to data element 206, and a second sequence of data elements includes eight data elements, e.g., data element 212, data element 214 and so on to data element 216. Thus, data element 202 and data element 212 are associated with segment zero, data element 204 and data element 214 are associated with segment 1, and data element 206 and data element 216 are associated with segment 7. Thus, a respective data element in each sequence may be apportioned to a respective segment.
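This interleaved apportionment can be sketched in C as a strided gather over the message, viewed as an array of 32-bit data elements (the function name and layout are illustrative assumptions, not the module's actual implementation):

    #include <stddef.h>
    #include <stdint.h>

    /* Gather segment s from a message viewed as M sequences of S dwords:
       segment s receives data element d_s of every sequence m_0..m_(M-1),
       i.e., every S-th dword starting at offset s (interleave period S). */
    void apportion_segment(uint32_t *segment, const uint32_t *message,
                           size_t M, size_t S, size_t s)
    {
        for (size_t i = 0; i < M; i++)
            segment[i] = message[i * S + s];
    }

In practice no physical copy is required: because the data elements are already interleaved in the buffer, an S-wide SIMD load picks up one dword per segment directly, which is how the transposing operations mentioned above are avoided.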
The number of data elements that may be apportioned may be based, at least in part, on a segment block size. As used herein, the segment block size SB corresponds to the product of the number of segments, S, and the algorithmic block size B (i.e., S*B). The message 123 length L may be divided into N segment blocks with a remainder of R, where R is greater than or equal to zero and less than SB, and N is a whole number. The number of data elements that may be apportioned is equal to N multiplied by the segment block size (in, e.g., bytes) divided by the data element size (in, e.g., bytes). For example, SHA-256 has a block size of 64 bytes (i.e., 512 bits) and an algorithmic word size of 4 bytes (i.e., 32 bits). Assume the number of segments is eight (i.e., register size divided by algorithmic word size). The segment block size for this example is then 512 bytes (i.e., 8 segments multiplied by the block size of 64 bytes). Up to 512*N/4 = 128*N data elements may be apportioned, where N is the maximum whole number that satisfies 512*N <= L.
Thus, based at least in part on a selected hashing algorithm, an SIMD register size and the message 123 size (and corresponding data buffer 122 size), L, the number, N, of segment blocks and the remainder, R, may be determined. A segment length, SL, may be determined based, at least in part, on the selected hashing algorithm and the number N. The segment length SL may correspond to the block size B multiplied by N. The segment length corresponds to the number of, e.g., bytes of data elements included in each segment after the sequences of data elements have been apportioned, as described herein.
SIMD hash module 124 is configured to process the S segments according to the selected hashing algorithm, in parallel, producing S segment digests. Consistent with the selected hashing algorithm, SIMD hash module 124 may be configured to initialize each segment digest with a respective initial value. For example, SHA-256 specifies eight 32-bit initial values H0-H7. SIMD hash module 124 may then process the S segments in parallel, exploiting SIMD instructions and one or more of the SIMD registers 116a,..., 116p. Each segment may include N blocks of data elements. When the processing completes, SIMD hash module 124 may be configured to store the S segment digests 130a,..., 130s in memory 120. For example, a length of each segment digest 130a,..., 130s may be 256 bits for the SHA-256 hashing algorithm. Thus, rather than processing one block at a time, SIMD hash module 124 is configured to process S blocks in parallel and to provide S segment digests as output. The S segment digests may then be stored, used for authentication and/or processed further.
FIG. 3 illustrates segment digests that may result from the operations of, e.g., SIMD hash module 124, configured to hash the eight segments of apportioned data elements illustrated in FIG. 2, in parallel and according to SHA-256. SHA-256 is configured to produce a 256-bit hash digest that includes eight 32-bit dwords, H0-H7. Thus, a first segment digest 322 is associated with segment zero, a second segment digest 324 is associated with segment 1 and an eighth segment digest 326 is associated with segment 7.
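The block accounting above can be captured in a few lines of C; this is a minimal sketch under the SHA-256, eight-segment assumptions of the example (names are illustrative):

    #include <stddef.h>

    #define B  64u        /* SHA-256 block size in bytes              */
    #define S  8u         /* segments: 256-bit register / 32-bit word */
    #define SB (S * B)    /* segment block size = 512 bytes           */

    /* For a message of L bytes: N whole segment blocks, remainder R bytes
       (0 <= R < SB), and per-segment length SL = B*N bytes. */
    void split_message(size_t L, size_t *N, size_t *R, size_t *SL)
    {
        *N  = L / SB;          /* largest whole number with SB*N <= L */
        *R  = L - (*N * SB);
        *SL = B * (*N);
    }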
Padding module 128 is configured to pad the remainder R with padding bits to achieve a padded length that corresponds to P*block size for the selected hashing algorithm, where P is a whole number. The padding is configured to be the minimum number of padding bits that results in a padded length corresponding to P*block size, the block size corresponding to the selected hashing algorithm. The padding bits may have values (i.e., zero or one) and an order that are defined by the selected hashing algorithm. In some embodiments, the padding bits may include a binary representation of a length parameter, e.g., the message length. For example, for SHA-256, padding includes a one bit (i.e., a bit of value one), followed by k zero bits, followed by a 64-bit block that corresponds to a binary representation of the message length. k may be determined as the smallest non-negative solution to R + 1 + k ≡ 448 mod 512, where R is expressed in bits and 512 corresponds to the block size, in bits, of the SHA-256 algorithm. In an embodiment, the length parameter may correspond to the length of the remainder R, in, e.g., bits. In another embodiment, the length parameter may correspond to the length of the message L, in, e.g., bits.
In another embodiment, padding module 128 may be configured to pad each segment with a number of padding bits corresponding to the block size. In this embodiment, the binary representation of the length parameter may correspond to the segment length SL. SIMD hash module 124 may then be configured to process the padded segments, as described herein.
NonSIMD hash module 125 is configured to process the padded remainder according to the selected hashing algorithm, producing an additional hash digest. Consistent with the selected hashing algorithm, nonSIMD hash module 125 may be configured to initialize the additional hash digest with an additional initial value. When the processing completes, nonSIMD hash module 125 may be configured to store the additional hash digest, e.g., 130s, in memory 120. The additional hash digest may then be used for authentication and/or processed further.
Thus, SIMD hash module 124 and nonSIMD hash module 125 are each configured to process at least a portion of a message and to produce a plurality of hash digests. Processing time may be reduced by apportioning sequences of data elements of the message to S segments and processing the S segments in parallel to produce S hash digests, as described herein. The remainder of the message may then be padded and processed to produce an additional hash digest. The S+1 hash digests may then be stored, used for authentication and/or processed further.
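A byte-granular C sketch of the remainder padding described above (an illustrative assumption that R is a whole number of bytes; the 0x80 byte carries the leading one bit, and len_bits is the length parameter of whichever embodiment is in use):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Pad a remainder of R bytes out to a multiple of the 64-byte block
       size: 0x80, then k zero bits, then the 64-bit big-endian length.
       Returns the padded length P*64. */
    size_t pad_remainder(uint8_t *out, const uint8_t *rem, size_t R,
                         uint64_t len_bits)
    {
        /* Smallest P*64 with room for 0x80 and the 8-byte length field,
           i.e., R + 1 + (k/8) == 56 (mod 64) in bytes. */
        size_t padded = ((R + 1 + 8 + 63) / 64) * 64;

        memcpy(out, rem, R);
        out[R] = 0x80;                           /* the single one bit */
        memset(out + R + 1, 0, padded - R - 9);  /* k zero bits        */
        for (int i = 0; i < 8; i++)              /* big-endian length  */
            out[padded - 1 - i] = (uint8_t)(len_bits >> (8 * i));
        return padded;
    }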
HMAC module 126 is configured to generate a keyed-hash message authentication code (MAC) based, at least in part, on a message, e.g., message 123, and a cryptographic key. For example, HMAC module 126 may retrieve one or more cryptographic keys, e.g., keys 134a,..., 134r, from secure storage 132. HMAC module 126 may be configured to generate one or more initial value(s), based, at least in part, on the cryptographic key(s) 134a,..., 134r, and to provide the initial value(s) to SIMD hash module 124 and/or nonSIMD hash module 125. SIMD hash module 124 and/or nonSIMD hash module 125 may be configured to use the provided initial value(s) during processing, instead of the initial values associated with the selected hashing algorithms. SIMD hash module 124 and/or nonSIMD hash module 125 may then produce the S+1 hash digests, based at least in part on message 123 and the initial value(s), as described herein. In some embodiments, HMAC module 126 is configured to generate the MAC for message 123 based, at least in part, on at least the S segment digests produced by SIMD hash module 124.
In an embodiment, HMAC module 126 may retrieve a first cryptographic key, e.g., key 134a, and may generate a first initial value based, at least in part, on the first cryptographic key 134a. HMAC module 126 may provide the first initial value to SIMD hash module 124. The SIMD hash module 124 may then be configured to utilize the first initial value as the initial value for each of the S segments. HMAC module 126 may similarly provide the first initial value to the nonSIMD hash module 125, which may then utilize the first initial value as the initial value for determination of the additional hash digest, as described herein.
In another embodiment, HMAC module 126 may be configured to divide the first cryptographic key into a plurality of portions, to generate a plurality of initial values and to provide S initial values to SIMD hash module 124. The SIMD hash module 124 may then be configured to utilize a respective initial value as the initial value for each respective segment. HMAC module 126 may similarly provide a respective initial value to the nonSIMD hash module 125, which may then utilize the respective initial value as the initial value for determination of the additional hash digest, as described herein.
In another embodiment, HMAC module 126 may retrieve a second cryptographic key, e.g., key 134b, and may generate a second initial value based, at least in part, on the second cryptographic key 134b. HMAC module 126 may be configured to hash an ordered set of hash digests, utilizing the second initial value. For example, the ordered set of digests may include a concatenation of the S hash digests produced by SIMD hash module 124 based, at least in part, on the first cryptographic key. In another example, the ordered set of digests may include a concatenation of the S+1 hash digests produced by SIMD hash module 124 and nonSIMD hash module 125 based, at least in part, on the first cryptographic key.
In another embodiment, HMAC module 126 may be configured to utilize the second initial value (generated based, at least in part, on the second key 134b) as the initial value for an intermediate hash operation on the S (or S+1) hash digests 130a,..., 130s concatenated with, e.g., a preprocessed first key exclusive-OR'd with an opad, as described herein.
In another embodiment, HMAC module 126 may be configured to retrieve a third cryptographic key, e.g., key 134c, and may generate a third initial value based, at least in part, on the third cryptographic key 134c. HMAC module 126 may be configured to utilize the third initial value (generated based, at least in part, on the third key 134c) as the initial value for an intermediate hash operation on the S (or S+1) hash digests 130a,..., 130s concatenated with, e.g., a preprocessed first key exclusive-OR'd with an opad, as described herein.
Of course, one or more of the embodiments related to HMAC module 126 may be combined. Such combinations may be configured to provide enhanced security through the use of multiple cryptographic keys for generating initial values, as described herein.
The foregoing example embodiments are configured to determine hash digests and/or hash-based message authentication codes using a single data buffer and exploiting the SIMD architecture of a computing device. A plurality of segments may be apportioned from the message stored in the data buffer and a plurality of segment digests may then be determined, in parallel. A remainder of the message may then be padded and hashed, generating an additional hash digest. The HMAC process may include determining the plurality of segment digests using one or more initial values based, at least in part, on one or more cryptographic keys.
Producing and storing the plurality of segment digests (and the additional hash digest) is configured to increase the speed of the hashing operations and thereby reduce the processing time. The additional storage utilized by the plurality of segment digests and the additional hash digest may then be traded off for the increase in speed.
FIG. 4 is a flowchart 400 of hashing operations according to various embodiments of the present disclosure. In particular, the flowchart 400 illustrates generating a plurality of segment digests and an additional hash digest from a message, the segment digests generated in parallel. Operations of this embodiment include apportioning at least a first portion of a message of (unpadded) length L to a number (S) of segments 402. The message includes a plurality of sequences of data elements. The number of segments may be related to a register size, e.g., the size of SIMD register 116a. Each sequence may include S data elements. A respective data element in each sequence may be apportioned to a respective segment. Thus, operations 402 are configured to interleave message data elements into segments, as described herein. In some embodiments, each segment may be padded prior to hashing at operation 404. For example, each segment may be padded with a block of padding. Operation 406 includes hashing the S segments, in parallel, resulting in (i.e., yielding) S segment digests. Each segment may include N blocks of data elements, each block having an associated block size. The S segment digests may be generated based, at least in part, on at least one initial value. In some embodiments, the S segment digests may be stored at operation 408.
A remainder may be padded at operation 410. The remainder corresponds to a second portion of the message (and data buffer). The second portion may be related to the unpadded length L, the number of segments S and the block size. Operation 412 includes hashing the padded remainder, resulting in an additional hash digest. The additional hash digest may be stored at operation 414. Program flow may return at operation 416.
The operations of flowchart 400 are configured to generate a plurality of segment digests, in parallel, from a message and to generate an additional hash digest from a remainder. At least the segment digests may then be utilized to authenticate the message. In some embodiments, the operations of flowchart 400 may be included in HMAC operations configured to authenticate a message.
FIG. 5 is a flowchart of hash-based message authentication code (HMAC) operations 500 according to various embodiments of the present disclosure. In particular, the flowchart 500 illustrates one example of generating a message authentication code (MAC). Operations of this embodiment include determining at least one initial value based, at least in part, on a first cryptographic key 502. Operation 504 includes generating a MAC based, at least in part, on the S hash digests.
The operations of flowchart 500 are configured to generate a hash-based MAC for a message, e.g., message 123. The operations may include apportioning the message into segments that may then be processed (e.g., hashed) in parallel, producing a plurality of segment digests. The hashing may utilize one or more initial values that are determined, based at least in part, on one or more cryptographic keys. At least the segment digests may then be utilized in generating the MAC.
The operations related to generating the MAC may also utilize one or more initial values that are determined, based at least in part, on the one or more cryptographic keys. FIG. 6 is another flowchart of one example 600 of hash-based message authentication code (HMAC) operations according to one embodiment of the present disclosure. In particular, the flowchart 600 illustrates generating a hash-based MAC utilizing one or more initial values related to one or more cryptographic keys. Operations of this embodiment include determining at least one preprocessed cryptographic key K0 602 related to a first cryptographic key K. The preprocessing is configured to result in at least one K0 with a length that corresponds to the algorithmic block size of a selected hashing algorithm. Thus, at least a portion of the first cryptographic key K may be lengthened, e.g., by appending bit(s); may be shortened, e.g., by hashing then appending bit(s); or may be unchanged if the length of the at least a portion of K corresponds to the algorithmic block size, in order to produce the at least one K0. The at least one preprocessed cryptographic key K0 may then be exclusive-OR'd with an inner pad (ipad) at operation 604. For example, the inner pad may correspond to a byte (e.g., 0x36) repeated B (i.e., the block size in bytes) times. The result(s) of operation 604 may then be hashed at operation 606 to produce at least one initial value. For example, the first cryptographic key K may be preprocessed to produce one K0 that may then be exclusive-OR'd with the ipad and hashed to produce one initial value. In another example, the first cryptographic key K may be divided into at least S portions, where S is the number of segments, and each of the at least S portions may be preprocessed, exclusive-OR'd with the ipad and hashed to produce at least S initial values.
Operation 608 includes hashing a message, e.g., message 123. Operation 608 may receive as input the message and one or more initial values produced according to operations 602, 604 and 606. Operation 608 may include one or more of the operations illustrated in flowchart 400. Operation 608 includes generating S segment digests and may include generating an additional hash digest related to a remainder of the message, as described herein. In some embodiments, operation 608 may include hashing an ordered set of at least the S segment digests to produce an intermediate hash digest. For example, operation 608 may include hashing an ordered set of the S segment digests to produce the intermediate hash digest. In another example, operation 608 may include hashing an ordered set of the S segment digests and the additional hash digest related to the remainder of the message. The initial value for hashing the ordered set may correspond to one of the at least one initial values used for input to operation 608 or may be generated based, at least in part, on a second cryptographic key. Utilizing the second cryptographic key is configured to enhance the security of the HMAC process.
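A C sketch of the key preprocessing and inner-pad steps (operations 602-606); sha256() here stands in for a hypothetical one-shot hash over a byte buffer, not a specific library call, and the construction mirrors the standard HMAC inner/outer padding the text describes:

    #include <stddef.h>
    #include <stdint.h>

    #define B 64  /* algorithmic block size in bytes (SHA-256) */

    /* Hypothetical one-shot hash over a byte buffer (assumed, not shown). */
    void sha256(uint8_t digest[32], const uint8_t *msg, size_t len);

    /* Operations 604-606: derive an initial value from a preprocessed,
       B-byte key K0 as hash(K0 XOR ipad). The outer step (operation 610)
       is analogous with the opad byte 0x5c. */
    void inner_initial_value(uint8_t iv[32], const uint8_t K0[B])
    {
        uint8_t block[B];
        for (int i = 0; i < B; i++)
            block[i] = K0[i] ^ 0x36;  /* ipad byte, repeated B times */
        sha256(iv, block, B);
    }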
Operation 610 includes exclusive-ORing one preprocessed cryptographic key K0 with an outer pad (opad) to produce a result. For example, the outer pad may correspond to a byte (e.g., 0x5c) repeated B (i.e., the block size in bytes) times. Operation 612 includes concatenating the result of operation 610 with the result of operation 608. The concatenated results of operations 610 and 608 may be hashed at operation 614. The initial value(s) for operation 614 may correspond to one or more of the at least one initial value used for input to operation 608, may be generated based, at least in part, on a second cryptographic key, or may be generated based, at least in part, on a third cryptographic key. Operation 616 includes selecting a number of leftmost bytes of the result of operation 614 as the message authentication code for the message.
The operations of flowchart 600 are configured to generate a hash-based message authentication code for a message where at least a portion of the message is hashed, in parallel, as described herein. Utilizing a plurality of initial values, generated based on a plurality of cryptographic keys, is configured to enhance the security of the HMAC process illustrated by flowchart 600.
Table 1 includes example pseudocode configured to apportion a message stored in a data buffer into S segments, to process the segments in parallel, to pad the remainder and to hash the padded remainder, as described herein. Parameters may include S (the number of segments), w (width of a segment word, i.e., the algorithmic word and execution element size), B (hash algorithm specified block size) and SB (segment block size = S*B).
TABLE 1
//Function that provides a series of hash digests over the length of data starting at dataPtr
hash(dataPtr, digestPtrSIMD, digestPtrFinal, length) {
    hashSIMDLen = length / SB  //integer value
    hashNonSIMDLen = length - (hashSIMDLen * SB)
    //Initialize each segment's digest according to the selected hashing algorithm specification
    initializeDigestSIMD(digestPtrSIMD)
    //Parallel processing of data buffer
    while (hashSIMDLen > 0) {
        hashSIMD(dataPtr, digestPtrSIMD)
        dataPtr += SB    //increment dataPtr
        hashSIMDLen--    //decrement loop counter
    }
    //Initialize final digest according to specification
    initializeDigest(digestPtrFinal)
    //Hash the remaining portion of the data as specified
    hashnonSIMD(dataPtr, digestPtrFinal, hashNonSIMDLen)
    //digestPtrSIMD contains the S hash digests
    //digestPtrFinal contains a single hash digest
    return
}
For example, for an SHA-256 hashing algorithm (32-bit word size) and a computing device that includes 256-bit SIMD registers, the number of segments S that may be processed in parallel is eight (i.e., register width/algorithmic word size). In this example, the message length and associated data buffer length is L bytes. Thus, the hashSIMD() function of Table 1 may operate on 8 segments.
Continuing with this example, the SHA-256 algorithmic block size B is 64 bytes. Thus, the segment block size SB in bytes is 8*64 = 512 bytes. The length of the remainder is then R = (L mod SB) bytes. The data buffer length may then be L = SB*N + R bytes. The segment length SL in bytes is then B*N = 64*N bytes. Each segment may then be formed by using every 8th dword (32-bit data element) of the message up to 512*N bytes, where N is greater than zero and 512*N is less than or equal to the message length (and corresponding buffer size) L.
Thus, at least a portion of a message of length L and including a plurality of sequences of data elements may be apportioned to a number S of segments, with a respective data element in each sequence apportioned to a respective segment. The segments may then be processed by a hashing algorithm, e.g., SHA-256, in parallel, resulting in the number S of segment digests. The parallel processing is configured to reduce the processing time associated with hashing messages and to thereby reduce the time associated with, for example, authentication. A remainder of the message may be padded and then hashed to produce an additional hash digest, as described herein.
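A hypothetical C invocation of the Table 1 routine for this example (buffer names and sizes are illustrative; hash() is the Table 1 function, assumed to be provided elsewhere):

    #include <stddef.h>
    #include <stdint.h>

    /* Table 1 function (assumed). */
    void hash(const uint8_t *dataPtr, uint32_t *digestPtrSIMD,
              uint32_t *digestPtrFinal, size_t length);

    void authenticate_example(const uint8_t *msg, size_t L)
    {
        uint32_t segment_digests[8][8]; /* S = 8 digests of eight 32-bit words */
        uint32_t final_digest[8];       /* digest of the padded remainder      */

        hash(msg, &segment_digests[0][0], final_digest, L);
        /* The S+1 digests may now be stored, compared for authentication,
           or fed into the HMAC operations described above. */
    }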
While the flowcharts of FIGS. 4, 5 and 6 illustrate operations according to various embodiments, it is to be understood that not all of the operations depicted in FIGS. 4, 5 and/or 6 are necessary for other embodiments. In addition, it is fully contemplated herein that in other embodiments of the present disclosure, the operations depicted in FIGS. 4, 5 and/or 6, and/or other operations described herein, may be combined in a manner not specifically shown in any of the drawings, and such embodiments may include fewer or more operations than are illustrated in FIGS. 4, 5 and/or 6. Thus, claims directed to features and/or operations that are not exactly shown in one drawing are deemed within the scope and content of the present disclosure.
The foregoing provides example system architectures and methodologies; however, modifications to the present disclosure are possible. For example, computing device 100 may also include a host processor, chipset circuitry and system memory. The host processor may include one or more processor cores and may be configured to execute system software. System software may include, for example, operating system code (e.g., OS kernel code) and local area network (LAN) driver code. LAN driver code may be configured to control, at least in part, the operation of the network controller 115. System memory may include I/O memory buffers configured to store one or more data packets that are to be transmitted by, or received by, network controller 115. Chipset circuitry may generally include "North Bridge" circuitry (not shown) to control communication between the processor 110, network controller 115 and system memory.
Computing device 100 may further include an operating system (OS) to manage system resources and control tasks that are run on, e.g., computing device 100. For example, the OS may be implemented using Microsoft Windows, HP-UX, Linux, or UNIX, although other operating systems may be used. In some embodiments, the OS may be replaced by a virtual machine monitor (or hypervisor), which may provide a layer of abstraction for underlying hardware to various operating systems (virtual machines) running on one or more processing units. The operating system and/or virtual machine may implement one or more protocol stacks. A protocol stack may execute one or more programs to process packets. An example of a protocol stack is a TCP/IP (Transport Control Protocol/Internet Protocol) protocol stack comprising one or more programs for handling (e.g., processing or generating) packets to transmit and/or receive over a network. A protocol stack may alternatively be comprised on a dedicated sub-system such as, for example, a TCP offload engine and/or network controller 115. The TCP offload engine circuitry may be configured to provide, for example, packet transport, packet segmentation, packet reassembly, error checking, transmission acknowledgements, transmission retries, etc., without the need for host CPU and/or software involvement.
The system memory may comprise one or more of the following types of memory: semiconductor firmware memory, programmable memory, non-volatile memory, read only memory, electrically programmable memory, random access memory, flash memory, magnetic disk memory, and/or optical disk memory.
Additionally or alternatively, system memory may comprise other and/or later-developed types of computer-readable memory.
Embodiments of the operations described herein may be implemented in a system that includes one or more storage devices having stored thereon, individually or in combination, instructions that when executed by one or more processors perform the methods. The processor may include, for example, a processing unit and/or programmable circuitry. The storage device may include any type of tangible, non-transitory storage device, for example, any type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic and static RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), flash memories, magnetic or optical cards, or any type of storage device suitable for storing electronic instructions.
"Circuitry", as used in any embodiment herein, may comprise, for example, singly or in any combination, hardwired circuitry, programmable circuitry, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. "Module", as used herein, may comprise, singly or in any combination, circuitry and/or code and/or instruction sets (e.g., software, firmware, etc.).
Hashing and message authenticating methods (and systems) consistent with the teachings of the present disclosure are configured to determine hash digests and/or hash-based message authentication codes (HMACs) using a single data buffer and exploiting an SIMD (single instruction-multiple data) architecture of a computing device. A plurality of segment digests may be determined in parallel. An additional hash digest may be produced, based at least in part, on a padded remainder, as described herein. At least the plurality of segment digests may then be utilized to, for example, generate an HMAC on a message stored in the data buffer. Determining a plurality of segment digests, in parallel, is configured to reduce processing time associated with authenticating messages.
Accordingly, the present disclosure provides an example apparatus. The example apparatus includes an SIMD hash module configured to apportion at least a first portion of a message of length L to a number (S) of segments, the message including a plurality of sequences of data elements, each sequence including S data elements, a respective data element in each sequence apportioned to a respective segment, each segment including a number N of blocks of data elements; to hash the S segments in parallel, resulting in S segment digests, the S hash digests based, at least in part, on an initial value; and to store the S hash digests. The example apparatus further includes a padding module configured to pad a remainder, the remainder corresponding to a second portion of the message, the second portion related to the length L of the message, the number of segments and a block size; and a nonSIMD hash module configured to hash the padded remainder, resulting in an additional hash digest, and to store the additional hash digest.
The present disclosure also provides an example computing device.
The example computing device includes a processor including at least one SIMD register, each SIMD register configured to hold a plurality of data elements, and memory including a data buffer, the data buffer configured to store a message of length L. The example computing device further includes an SIMD hash module configured to apportion at least a first portion of the message to a number (S) of segments, the message including a plurality of sequences of data elements, each sequence including S data elements, a respective data element in each sequence apportioned to a respective segment, each segment including a number N of blocks of data elements; to hash the S segments in parallel using the at least one SIMD register, resulting in S segment digests, the S hash digests based, at least in part, on an initial value; and to store the S hash digests in memory. The example computing device further includes a padding module configured to pad a remainder, the remainder corresponding to a second portion of the message, the second portion related to the length L of the message, the number of segments and a block size; and a nonSIMD hash module configured to hash the padded remainder, resulting in an additional hash digest, and to store the additional hash digest in memory.
The present disclosure also provides an example method. The example method includes apportioning, by an SIMD hash module, at least a first portion of a message of length L to a number (S) of segments, the message including a plurality of sequences of data elements, each sequence including S data elements, each segment including a number N of blocks of data elements, a respective data element in each sequence apportioned to a respective segment; hashing, by the SIMD hash module, the S segments in parallel, resulting in S segment digests, the S hash digests based, at least in part, on an initial value; and storing, by the SIMD hash module, the S hash digests.
The example method further includes padding, by a padding module, a remainder, the remainder corresponding to a second portion of the message, the second portion related to the length L of the message, the number of segments and a block size; hashing, by a nonSIMD hash module, the padded remainder, resulting in an additional hash digest; and storing, by the nonSIMD hash module, the additional hash digest.
The present disclosure also provides an example system that includes one or more storage devices having stored thereon, individually or in combination, instructions that when executed by one or more processors result in the following operations including: apportioning at least a first portion of a message of length L to a number (S) of segments, the message including a plurality of sequences of data elements, each sequence including S data elements, a respective data element in each sequence apportioned to a respective segment, each segment including a number N of blocks of data elements; hashing the S segments in parallel, resulting in S segment digests, the S hash digests based, at least in part, on an initial value; storing the S hash digests; padding a remainder, the remainder corresponding to a second portion of the message, the second portion related to the length L of the message, the number of segments and a block size; hashing the padded remainder, resulting in an additional hash digest; and storing the additional hash digest.
The terms and expressions which have been employed herein are used as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications are possible within the scope of the claims. Accordingly, the claims are intended to cover all such equivalents.
Various features, aspects, and embodiments have been described herein. The features, aspects, and embodiments are susceptible to combination with one another as well as to variation and modification, as will be understood by those having skill in the art. The present disclosure should, therefore, be considered to encompass such combinations, variations, and modifications.
A device for interconnecting and multiplexing a plurality of interfaces is disclosed. In the preferred embodiment, three serial communications ports are interfaced such that any two may be interconnected for communications. A novel circuit comprising a plurality of buffers is employed to allow selection of the interconnection scheme by a controller. The design is scalable and is characterized by low cost, low printed circuit board area requirements, and graceful function. In the preferred embodiment, a PDA, wireless transceiver, and cradle interface of a personal communications device are effectively interconnected and multiplexed.
CLAIMS
1. An apparatus for selectively interconnecting a plurality of ports, comprising: a cross-bar switch, having a plurality of bi-directional data ports, and a controller, operable to control said cross-bar switch to interconnect any two of said plurality of bi-directional data ports.
2. The apparatus of Claim 1 wherein said plurality of bi-directional ports are adapted to interconnect RS-232 ports.
3. The apparatus of Claim 1 wherein said cross-bar switch is implemented with a plurality of digital buffers.
4. An apparatus, comprising: first, second, and third interfaces each having an input and an output; an interface controller having first, second, and third control outputs, and operable to enable any one of said outputs individually; a first, second, third, fourth, fifth, and sixth buffer, each having an input, an output, and a control input, and wherein said control inputs enable and disable the coupling of signals through said buffers, and wherein said outputs of said first and second buffers are coupled to said input of said first interface; said outputs of said third and fourth buffers are coupled to said input of said second interface; said outputs of said fifth and sixth buffers are coupled to said input of said third interface; said output of said first interface is coupled to said inputs of said fourth and fifth buffers; said output of said second interface is coupled to said inputs of said first and sixth buffers; said output of said third interface is coupled to said inputs of said second and third buffers; said first control output is coupled to said control inputs of said first and fourth buffers; said second control output is coupled to said control inputs of said third and sixth buffers, and said third control output is coupled to said control inputs of said second and fifth buffers.
5. The apparatus of Claim 4 wherein disabling said control inputs sets said outputs of said buffers to a high impedance state, and wherein said interface controller is operable to disable all of said control outputs.
6. The apparatus of Claim 4 wherein said interfaces are serial port interfaces.
7. The apparatus of Claim 6 wherein said serial port interfaces are RS-232 serial port interfaces.
8. The apparatus of Claim 6 wherein said output of said serial port interface is a transmit data output, and said input of said serial port interface is a receive data input.
9. The apparatus of Claim 7 wherein said output of said serial port interface is a request to send output, and said input of said serial port interface is a clear to send input.
10. The apparatus of Claim 4 wherein said interface controller is incorporated into one of said interfaces.
11.
An apparatus, comprising: a plurality of n interfaces, each having an input and an output; a plurality of n(n-1) buffers, each having an input, an output, and a control input, and wherein said control inputs enable and disable the coupling of signals through said buffers, respectively; an interface controller having a plurality of (nC2) control outputs, and operable to enable any one of said plurality of outputs individually, and wherein said outputs of a unique (n-1) of said plurality of buffers are coupled to said input of each one of said plurality of interfaces; every one of said outputs of said plurality of interfaces is uniquely coupled to said input of one of said (n-1) plurality of buffers that are coupled to said inputs of every other of said plurality of interfaces, such that said output of every interface is coupled to said input of every other interface through a unique one of said plurality of buffers, and each one of said plurality of control outputs is coupled to said control inputs of the two of said plurality of buffers that couple a unique pair of the (nC2) combinations of said interface inputs and outputs.
12. The apparatus of Claim 11 wherein disabling said control inputs sets said outputs of said plurality of buffers to a high impedance state, and wherein said interface controller is operable to disable all of said plurality of control outputs.
13. The apparatus of Claim 11 wherein said plurality of interfaces are serial port interfaces.
14. The apparatus of Claim 13 wherein said serial port interfaces are RS-232 serial port interfaces.
15. The apparatus of Claim 13 wherein said output of said serial port interface is a transmit data output, and said input of said serial port interface is a receive data input.
16. The apparatus of Claim 14 wherein said output of said serial port interface is a request to send output, and said input of said serial port interface is a clear to send input.
17. The apparatus of Claim 11 wherein said interface controller is incorporated into one of said interfaces.
MULTIPLE-INTERFACE PORT MULTIPLEXER
BACKGROUND OF THE INVENTION
Field of the Invention
The present invention relates to digital circuits. More specifically, the present invention relates to systems and methods for multiplexing a plurality of communication resource interfaces.
Description of the Related Art
Modern electronic devices continue to evolve into ever-higher levels of integration. While devices used to implement a given electronic function were once wired together from discrete components, highly integrated 'chips' are now produced to consolidate the discrete components and functions into a single package. This lowers cost, reduces size, and makes products manufactured in this manner more reliable. Examples of such highly integrated devices include wireless telephones, personal digital assistants, radio transceivers, media recorders and players, device controllers and all kinds of other devices and functions.
Today, designers of integrated systems that combine more than one of these integrated devices into a single product are faced with the task of integrating integrated devices. For example, if a designer wishes to integrate a wireless telephone and a personal digital assistant, they must design a circuit for accomplishing such integration or create an entirely new integrated device which incorporates all the components and functions desired for the integrated product. The latter approach may be an expensive and risky investment in an emerging market.
Aside from sharing power supply and ground circuits, highly integrated devices must communicate in some fashion because there is usually a software application dedicated to each integrated device. To yield a gracefully functioning integrated product, it is necessary to integrate both the hardware and the software. This usually implies a parallel or serial communications port interconnecting the two integrated devices. This can be straightforward in the case where two devices are integrated into a product. Serial communications protocols and physical interfaces are often preferred because a smaller quantity of printed circuit board area is required to route the relatively few circuit traces. However, where there are more than two devices integrated into a product, the connection of a plurality of interfaces is more problematic.
Where three or more integrated devices are integrated into a product, the designer must design a communications interface, or multiplexing circuit, that not only interconnects the various devices, but also deals with issues of contention between the devices that inevitably arise. Of course, the design of such circuitry is within the ability of many designers, given that there are not great limitations placed on cost, development time, and printed circuit board area required. However, this is rarely the case. In modern, highly integrated devices that compete in open markets, there is always great pressure to hold size and cost down, while at the same time providing short product development cycles and good reliability. Thus there is a need in the art for a low cost device for interconnecting and multiplexing three or more interfaces among devices.
SUMMARY OF THE INVENTION
The need in the art is addressed by the apparatus and methods of the present invention. In one embodiment, an apparatus for selectively interconnecting a plurality of ports is taught.
It comprises a cross-bar switch, having a plurality of bi-directional data ports, and a controller, operable to control the cross-bar switch to interconnect any two of said plurality of bi-directional data ports. In a refinement of this, the plurality of bi-directional ports are adapted to interconnect RS-232 ports. In a further refinement, the cross-bar switch is implemented with a plurality of digital buffers. In another embodiment, an apparatus for interconnecting three bi-directional interfaces is taught. The apparatus comprises a first, second, and third interface, each having an input and an output, and an interface controller having first, second, and third control outputs, and operable to enable any one of the outputs individually. Also, a first, second, third, fourth, fifth, and sixth buffer, each having an input, an output, and a control input, and wherein the control inputs enable and disable the coupling of signals through the buffers. The outputs of the first and second buffers are coupled to the input of the first interface, the outputs of the third and fourth buffers are coupled to the input of the second interface, and the outputs of the fifth and sixth buffers are coupled to the input of the third interface. Similarly, the output of the first interface is coupled to the inputs of the fourth and fifth buffers, the output of the second interface is coupled to the inputs of the first and sixth buffers, and the output of the third interface is coupled to the inputs of the second and third buffers. Also, the first control output is coupled to the control inputs of the first and fourth buffers, the second control output is coupled to the control inputs of the third and sixth buffers, and the third control output is coupled to the control inputs of the second and fifth buffers. In a refinement to the foregoing, disabling the control inputs sets the outputs of the buffers to a high impedance state, and the interface controller is operable to disable all of the control outputs. In a further refinement, the interfaces are serial port interfaces. And more specifically, the serial port interfaces are RS-232 serial port interfaces. The invention is applicable when the output of the serial port interface is a transmit data output, and the input of the serial port interface is a receive data input. And also, when the output of the serial port interface is a request to send output, and the input of the serial port interface is a clear to send input. It is also taught that the interface controller is incorporated into one of the interfaces. The foregoing embodiment is with regard to the specific case of interconnecting three interfaces. The present invention also teaches a general case that can be applied to any number of interfaces, that is, to a number n of interfaces. This is accomplished with an apparatus comprising a plurality of n interfaces, each having an input and an output, and a plurality of n times (n-1) buffers, each having an input, an output, and a control input, and wherein the control inputs enable and disable the coupling of signals through the buffers, respectively. Also, an interface controller having a plurality of (nC2) control outputs, and operable to enable any one of the plurality of outputs individually. The expression (nC2) is the number of unordered combinations of 2 interfaces taken from a total of n interfaces. Mathematically, it is read as "n choose 2". 
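As a concrete illustration of this counting (this sketch is ours, not part of the original disclosure; the function name is illustrative), the following Python snippet computes the number of control outputs, nC2, and the number of buffers, n(n-1), for a given number of interfaces:

```python
from math import comb  # comb(n, k) computes n! / (k! * (n - k)!)

def multiplexer_sizing(n: int) -> tuple[int, int]:
    """Return (control_lines, buffers) for an n-interface multiplexer.

    Control lines: one per unordered pair of interfaces, i.e. "n choose 2".
    Buffers: one per ordered pair of distinct interfaces, i.e. n * (n - 1).
    """
    return comb(n, 2), n * (n - 1)

for n in (3, 4, 5, 6):
    control_lines, buffers = multiplexer_sizing(n)
    print(f"{n} interfaces: {control_lines} control lines, {buffers} buffers")
```

Running the snippet reproduces the interface/control-line/buffer table given later in this description (3/3/6, 4/6/12, 5/10/20, 6/15/30).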
In this general case, the outputs of a unique (n-1) of the plurality of buffers are coupled to the input of each one of the plurality of interfaces, and every one of the outputs of the plurality of interfaces is uniquely coupled to the input of one of the (n-1) plurality of buffers that are coupled to the inputs of every other of the plurality of interfaces, such that the output of every interface is coupled to the input of every other interface through a unique one of the plurality of buffers. Further, each one of the plurality of control outputs is coupled to the control inputs of the two of the plurality of buffers that couple a unique pair of the (nC2) combinations of the interface inputs and outputs. In a refinement to the general case, it is taught that disabling the control inputs sets the outputs of the buffers to a high impedance state, and the interface controller is operable to disable all of the control outputs. In a further refinement, the interfaces are serial port interfaces. And more specifically, the serial port interfaces are RS-232 serial port interfaces. The invention is applicable when the output of the serial port interface is a transmit data output, and the input of the serial port interface is a receive data input. And also, when the output of the serial port interface is a request to send output, and the input of the serial port interface is a clear to send input. It is also taught that the interface controller is incorporated into one of the interfaces. BRIEF DESCRIPTION OF THE DRAWINGS Fig. 1 is a drawing of an illustrative embodiment portable device implementation of the present invention. Fig. 2 is a drawing of the cradle unit of an illustrative embodiment implementation of the present invention. Fig. 3 is a functional block diagram of an illustrative embodiment of the present invention. Fig. 3A is a functional block diagram of an illustrative embodiment of the present invention. Fig. 4 is a schematic diagram of an illustrative embodiment of the present invention. Fig. 5A is a diagram of a three-interface implementation of an illustrative embodiment of the present invention. Fig. 5B is a diagram of a four-interface implementation of an illustrative embodiment of the present invention. Fig. 5C is a diagram of a five-interface implementation of an illustrative embodiment of the present invention. Fig. 5D is a diagram of a six-interface implementation of an illustrative embodiment of the present invention. DESCRIPTION OF THE INVENTION Illustrative embodiments and exemplary applications will now be described with reference to the accompanying drawings to disclose the advantageous teachings of the present invention. While the present invention is described herein with reference to illustrative embodiments for particular applications, it should be understood that the invention is not limited thereto. Those having ordinary skill in the art and access to the teachings provided herein will recognize additional modifications, applications, and embodiments within the scope thereof and additional fields in which the present invention would be of significant utility. Reference is directed to Figure 1, which is a drawing of a personal wireless communications device 2 in the preferred embodiment of the present invention. This device 2 incorporates a wireless telephone and a personal digital assistant (hereinafter 'PDA'). 
The telephone functions are implemented using the earphone 4 and microphone 6, as is traditionally done in wireless telephones. Also, a keypad 10 is used for dialing telephone numbers, placing calls, and generally operating the wireless telephone functions. In the preferred embodiment, the wireless telephone utilizes a spread spectrum transceiver according to the IS-95 CDMA protocol. The device 2 also incorporates a PDA, which primarily uses a liquid crystal display 8 as the output device, and may include a touch screen input function as well. The keypad 10 is also used to operate some of the PDA functionality. Since a device such as this is capable of storing and manipulating a large amount of data, it is useful to store a back-up copy of such data so that loss or damage to the device does not result in a total loss of the data stored therein. The back-up storage function is accomplished through an interface connector 12. In the preferred embodiment, the interface connector 12 couples to a mating connector in a docking cradle. Figure 2 illustrates a docking cradle 14 in the preferred embodiment. The docking cradle 14 comprises a structure 16 adapted to sit on, or be mounted to, a surface (not shown). The structure 16 is adapted to accept a portion of the wireless device 2 and to generally support the device 2 when it is inserted into the structure 16. Within the structure 16 is a connector 18 that is adapted to interconnect electrical signals with the connector 12 in the wireless device 2. In addition, the cradle 14 is adapted to interface to a computing device, such as a personal computer, so that signals interfaced between the wireless device 2 and the cradle 14 can be further coupled into the computing device (not shown), allowing the data to be stored in the computing device. In the preferred embodiment, the electrical interface between the device 2 and the cradle 14 comprises a serial communications path, in addition to other electrical signals. The serial communications path operates in accordance with the EIA/TIA RS-232 serial communications physical interface and protocols, as is well understood by those of ordinary skill in the art. The functional components inside the wireless communications device 2 include one or more microprocessors or microcontrollers, or simply 'controllers', and a wireless transceiver, as well as a PDA device. In addition, several other functional components are employed to deliver the various required functions. While there exists a very high level of component integration in wireless personal communications devices, total integration of all of the functions needed of the PDA, wireless telephone and related functionality into a single semiconductor device has not yet been achieved. This is typically the case where two or more basic functional systems are combined, as is the case with the preferred embodiment where a wireless telephone and PDA have been integrated. Reference is directed to Figure 3, which is a functional block diagram of some of the components utilized in the preferred embodiment. A PDA device is represented by a communications interface, or port, item 20 in Figure 3. The wireless telephone in the device is represented by the communications port 'B' 22, which couples to a mobile station modem (hereinafter 'MSM'). 
The MSM operates to provide a vast portion of the wireless telephone functionality in the preferred embodiment device and is the primary point of interface between the wireless telephone and other product components. In addition, in Figure 3, the interface to the cradle is represented by block 12. Each of the foregoing functional blocks, the PDA, the MSM, and the cradle, needs to be interconnected during different times of normal operation of the device. By way of example, and not limitation, the PDA interface is coupled to the cradle interface when operations to back-up or restore the PDA memory are desired. The MSM interface 22 is coupled to the cradle interface 12 when the wireless device is installed in the cradle 14 to extend the functions of the wireless telephone, such as in hands-free operation or programming. Also, the PDA 20 is coupled to the MSM 22 when data are being transferred to and from the PDA 20 via the wireless telephone, through the MSM 22, or when PDA 20 data is being used to control the MSM 22. In Figure 3, the interconnection between the PDA port 20, the MSM port 22, and the cradle port 12 is accomplished by use of the omni-directional interface multiplexer 24 of the present invention. This device is characterized by efficient use of the minimum components needed to accomplish the required function, low parts cost, low development costs, and compact size. The multiplexer 24 controls the interconnection of bi-directional signals, in the form of serial communications signals, between the three aforementioned interfaces. The multiplexer 24 is a digital circuit and is under the control of the PDA controller (not shown) in the preferred embodiment. The flow of digital data between the interfaces is illustrated by the three double-headed arrows in Figure 3. The control of the multiplexer 24 is illustrated by arrow 25 connecting the PDA interface 20 and the multiplexer 24. While the preferred embodiment multiplexer deals with three serial communications ports in a portable wireless device, those of ordinary skill will appreciate that any reasonable number of interfaces, which carry bi-directional signals, may employ the teachings of the present invention. The present invention is readily scalable, as will be more fully discussed hereinafter. In Figure 3A, a more generalized diagram of the illustrative embodiment from Figure 3 is shown. In Figure 3A, the interconnection is accomplished by a cross-bar switch 29. The bi-directional inputs/outputs of cross-bar switch 29 are coupled to port A 27, port B 31, and port C 33. In this illustrative embodiment, the ports are configured to interconnect transmit and receive data, such as used in an RS-232 port, for example. A controller 35 controls the cross-bar switch 29, as is necessary for the system using the invention. In the preferred embodiment, this would be a portable communications device, as was described hereinbefore. Reference is directed to Figure 4, which is a schematic diagram of the preferred embodiment omni-directional interface multiplexer. The three port interfaces are illustrated and include the PDA port 20, the MSM port 22, and the cradle port ('CRDL') 12. Since the preferred embodiment utilizes RS-232 serial communications between devices, each of the three port interfaces includes the typical RS-232 signals, which are: receive data 'RXD', transmit data 'TXD', data terminal ready 'DTR', request to send 'RTS', and clear to send 'CTS'. 
The function and purpose of these signal lines are well understood by those of ordinary skill in the art. In addition, the MSM port interface comprises an RS-232 data carrier detect 'DCD' output which is coupled to the cradle port interface 12 so that the external computing device can be informed as to when the MSM is receiving data carrier signals. The other signal interfaces will be discussed further hereinafter. The structure of the preferred embodiment of the present invention includes the twelve non-inverting buffers identified in Figure 4 as items 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, and 48. The buffers implement two instances of the present invention. The TXD and RXD RS-232 signal lines are a first output/input pair, and the RTS and CTS signal lines are a second transmit/receive signal pair. Thus, six buffers are required to implement each instance of the present invention in the preferred embodiment. The buffers each have an input and an output. An output signal, either TXD or RTS, is coupled to the input side of a buffer, and an input signal, either RXD or CTS, is coupled to the output side of a buffer. Each buffer also has a control input. The control input can be enabled or disabled. In the enabled state, the signal level at the input of a buffer is coupled to the output of the buffer. In the disabled state, the input of the buffer is not coupled to the output. In the preferred embodiment, the output of the buffer is set to a high impedance state when the control input is disabled. This provides the basic advantage that two or more outputs can be coupled to a single input such that any one of them can drive the input without being loaded by one of the other outputs, so long as each of the other outputs is disabled to the high impedance state. In addition, the interface multiplexer can be set to a state where all of the buffers are set to the high impedance state, and no signals are coupled from any interface to any other interface. In the preferred embodiment illustrated in Figure 4, there are three interface ports, the PDA 20, the MSM 22, and the cradle interface 12. Thus, there are three interfaces, which may be bi-directionally coupled, any two at a time. In mathematical terms, the interconnection possibilities are the number of ways, or combinations, of picking two unordered outcomes from three possibilities, also stated as 'three choose two'. Obviously, there are three such possibilities. To control these three possibilities, there are three control output signals from a general purpose input/output (hereinafter 'GPIO') device 50. The GPIO is interfaced to a PDA microcontroller (not shown) in the preferred embodiment, which determines when it is appropriate to make the needed bi-directional interface interconnections. Naturally, the controller could be a separate entity, apart from any one of the interfaces being multiplexed. The three control outputs of GPIO 50 are labeled 'A', 'B', and 'C'. Each of these outputs is a conventional CMOS or TTL level signal output line from a microcontroller, the PDA microcontroller, in the preferred embodiment. By enabling any one of these outputs, a particular one of the three interconnection possibilities is enabled. Also, when all three of the outputs of GPIO 50 are disabled, then none of the buffers are enabled, all of the buffer outputs are disabled to the high impedance state, and there is no interconnection between the three interfaces. Stated otherwise, the multiplexer is turned off. 
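The enable/high-impedance behavior just described can be modeled in a few lines of Python (an illustrative sketch only, not part of the patent; the port names follow Figure 4, but the data structures and function name are ours):

```python
# Tri-state crossbar model: each control line enables the two buffers that
# couple one unordered pair of ports, one buffer per direction.
PAIRS = {"A": ("PDA", "CRDL"), "B": ("PDA", "MSM"), "C": ("MSM", "CRDL")}

def route(enabled, txd_levels):
    """Given at most one enabled control line ("A", "B", "C" or None) and the
    TXD level driven by each port, return the RXD level seen at each port.
    None means undriven, i.e. every buffer feeding that input is tri-stated."""
    rxd = {port: None for port in txd_levels}
    if enabled is not None:
        left, right = PAIRS[enabled]
        rxd[left] = txd_levels[right]   # one buffer couples right TXD -> left RXD
        rxd[right] = txd_levels[left]   # its partner couples left TXD -> right RXD
    return rxd

levels = {"PDA": 1, "MSM": 0, "CRDL": 1}
assert route("B", levels) == {"PDA": 0, "MSM": 1, "CRDL": None}
assert route(None, levels) == {"PDA": None, "MSM": None, "CRDL": None}  # multiplexer off
```

A real implementation would also account for the pull-up resistors discussed below, which pull an otherwise undriven input to a valid logic level rather than leaving it floating.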
In Figure 4, as stated earlier, there are two instances of the present invention that operate in parallel. This is necessary because the TXD/RXD and RTS/CTS output/input signal line pairs operate in unison in the RS-232 protocol specification. Therefore, three control outputs from GPIO 50 can control both instances of the present invention in the preferred embodiment. The specifics of the interconnection and operation of the preferred embodiment follow. Considering first the multiplexing of the TXD and RXD signals among the PDA 20, the MSM 22, and the cradle 12 interfaces, each RXD signal line has the outputs of two buffers coupled to it. In the PDA 20, the outputs of buffers 26 and 28 are coupled to RXD. In the MSM 22, the outputs of buffers 34 and 36 are coupled to RXD. In the cradle 12, the outputs of buffers 42 and 44 are coupled to RXD. For each interface, the TXD signal is coupled to the input side of one of the buffers coupled to each of the other two interfaces. Specifically, the TXD signal from the PDA is coupled to the input of buffer 34, which couples to RXD on MSM 22, and to the input of buffer 42, which couples to RXD on cradle 12. Similarly, the TXD signal from the MSM is coupled to the input of buffer 28, which couples to RXD on PDA 20, and to the input of buffer 44, which couples to RXD on cradle 12. And, the TXD signal from the cradle 12 is coupled to the input of buffer 26, which couples to RXD on PDA 20, and to the input of buffer 36, which couples to RXD on MSM 22. The control output labeled 'A' on GPIO 50 is coupled to the control inputs of buffers 26 and 42. Therefore, when the signal on control output A is enabled, so are buffers 26 and 42. Buffer 26 couples the TXD on cradle 12 to the RXD on PDA 20, and buffer 42 couples the TXD on PDA 20 to the RXD on cradle 12. Thus, enabling control output A establishes a bi-directional communications path between PDA 20 and cradle 12. The control output labeled 'B' on GPIO 50 is coupled to the control inputs of buffers 28 and 34. Therefore, when the signal on control output B is enabled, so are buffers 28 and 34. Buffer 28 couples the TXD on MSM 22 to the RXD on PDA 20, and buffer 34 couples the TXD on PDA 20 to the RXD on MSM 22. Thus, enabling control output B establishes a bi-directional communications path between PDA 20 and MSM 22. The control output labeled 'C' on GPIO 50 is coupled to the control inputs of buffers 36 and 44. Therefore, when the signal on control output C is enabled, so are buffers 36 and 44. Buffer 36 couples the TXD on cradle 12 to the RXD on MSM 22, and buffer 44 couples the TXD on MSM 22 to the RXD on cradle 12. Thus, enabling control output C establishes a bi-directional communications path between cradle 12 and MSM 22. The interconnection of the RTS and CTS signals on PDA 20, MSM 22, and cradle 12 through buffers 30, 32, 38, 40, 46, and 48, with control signals A, B, and C from GPIO 50, is functionally the same as was just described respecting the TXD and RXD signals, so the details will not be reduced to words here. For a thorough understanding, please refer to Figure 4, which details the interconnections. Those of ordinary skill in the art will appreciate that the circuitry illustrated in Figure 4 will benefit from the use of pull-up resistors (not shown) at each of the inputs of the multiplexer components. These serve two beneficial purposes. First, they establish valid logical signal levels at the multiplexer inputs when not otherwise driven by one of the other ports. 
Second, they establish valid logical signal levels when all of the multiplexer buffers are set to the high-impedance state (or "tri-stated"). Respecting the remaining circuitry detailed in Figure 4, these components are used primarily to deal with the differing RS-232 voltage levels. Those of ordinary skill in the art understand that the RS-232 interface specification does not specify an exact operating voltage. The cradle 12 operates with both positive (plus twelve volts) and negative (minus 3 to minus 12 volts) signals. Resistors 62 and 60 establish a voltage divider so that the plus twelve volt charge signal output on the 'CHRG' line of cradle 12 does not create an over-voltage situation at the 'CHRG' input of MSM 22. Buffer 56 and resistor 58 serve to isolate the 'DTR' input of MSM 22 from the rest of the circuit when MSM 22 is powered-off in a standby state. The -V SENSE 52 and +V SENSE 54 blocks convert the higher voltages output from cradle 12 to the CMOS voltages required in the portable device through GPIO 50. Transistor 64 and its related components serve to isolate the MSM interface from the other circuitry when the MSM is turned off. The 'RNG' signal output from MSM 22 indicates that a call is coming into the device. The 'C' signal output from the GPIO 50 sets the multiplexer into an MSM 22 to cradle 12 bi-directional communications mode. When RNG goes active (low), transistor 64 turns on, so the RNG signal is disconnected from the cradle 12, unless C is active, in which case the RNG signal is coupled through to the cradle 12. Resistors 74, 76, 68, and 70 are used to bias and isolate the transistor, as is understood by those of ordinary skill in the art. Diode 72 serves to clamp negative-going signals to ground when the RS-232 signals fall below a safe level for the portable device. The foregoing preferred embodiment implements three-interface multiplexing for selected bi-directional communications with two communications line pairs (TXD/RXD and CTS/RTS). However, it is to be understood that the present invention is readily scalable depending on the number of interfaces as well as the number of communication line pairs. The general case is for 'n' interface ports interconnected two at a time. Where the number of communications line pairs is more than one, the circuitry is multiplied and the control signals are shared, as was described respecting the preferred embodiment. The general mathematical expression for a combination of k sub-elements taken from a set of n elements is as follows: nCk = n! / (k! (n-k)!), read 'n choose k', where n is the number of interfaces and k is the number of interfaces interconnected at one time. The number of possible combinations is the number of control outputs required to operate the multiplexer. If there are n interfaces, then each input of each interface naturally requires that (n-1) buffers be coupled to it, so that each other interface can be coupled thereto. Also, the total number of buffers required for the multiplexer will be n multiplied by (n-1). Taking all this into consideration, and solving for various values of n with k equal to two, we have the following:

Number of Interfaces   Number of Interconnections/Control Lines   Number of Buffers
3                      3                                          6
4                      6                                          12
5                      10                                         20
6                      15                                         30

The foregoing is graphically represented in Figures 5A, 5B, 5C, and 5D. Figure 5A depicts three interfaces 90, 92, and 94. 
There are three possible connections 91, 93, and 95. Therefore, the multiplexer requires three control lines to select the three possibilities. Each interface requires two buffers (n-1) to couple the outputs of the other two interfaces to the input of the selected interface. In Figure 5B, there are four interfaces, 100, 102, 104, and 106. These can be interconnected in six combinations as shown by lines 101, 103, 105, 107, 108, and 109. Each interface has three interconnection lines coupled to it, so three buffers are required for each. Thus, six control lines and twelve buffers are required in all. In Figure 5C, there are five interfaces, 110, 112, 114, 116, and 118. Each interface has four lines 111 coupled to it, so four buffers are required for each interface. There are ten possible interconnection pairs 113. Thus, ten control lines and twenty buffers are required to implement the present invention. In Figure 5D, there are six interfaces, 120, 121, 122, 123, 124, and 125. Each interface has five interconnections 126 coupled to it, and there are a total of fifteen interconnection possibilities. Therefore, fifteen control outputs and thirty buffers are required. While the numbers of control lines and buffers grow as the number of interface ports increases, the cost to implement the present invention remains low. In the preferred embodiment the buffers used are packaged as six buffers to a package (Toshiba TC74CHC367 hex-buffers are used). The control lines are available either directly from a microcontroller or may be demultiplexed from a smaller number of microcontroller control lines. Thus, in the case of three or four interfaces, two buffer packages are required. In the case of five interfaces, four buffer packages are required. In the case of six interfaces, five buffer packages are required. Because the cost is so low, and the printed circuit board area required for these packages is so small, it is very economical to implement an omni-directional interface multiplexer of the present invention. Thus, the present invention has been described herein with reference to a particular embodiment for a particular application. It is therefore intended by the appended claims to cover any and all such applications, modifications and embodiments within the scope of the present invention. WHAT IS CLAIMED IS: |
The invention discloses a device, a system and a method for efficiently updating a secure arbitration mode (SEAM) module. Techniques and mechanisms are described for efficiently providing features of SEAM by a processor. In one embodiment, a core of a processor supports an instruction set that includes instructions to invoke a SEAM. One such core installs an authenticated code module (ACM) that is executed to load a persistent SEAM loader module (P-SEAMLDR) in a reserved area of system memory. In turn, the P-SEAMLDR loads a SEAM module into the reserved area, which facilitates trust domain extension (TDX) protection for a given trust domain. In another embodiment, the instruction set supports a SEAM call instruction with which either of a P-SEAMLDR or a SEAM module is accessed in the reserved area. |
1. A processor comprising: a decoder, the decoder comprising circuitry for decoding an instruction set-based secure arbitration mode (SEAM) call (SEAMCALL) instruction, the SEAMCALL instruction comprising: a first field to provide an opcode to indicate that a logical processor is to transition from traditional virtual machine extensions (VMX) root operation; and a second field to provide an operation object to specify one of the following: a SEAM loader module to be loaded in a reserved range of system memory coupled to the processor, wherein a range register of the processor stores information identifying the reserved range; or a SEAM module to be loaded in the reserved range by the SEAM loader module, the SEAM module to initiate SEAM of the processor; and an execution circuit coupled with the decoder for executing the SEAMCALL instruction, wherein the execution circuit determines whether to access one of the SEAM loader module or the SEAM module based on the operation object. 2. The processor of claim 1, wherein the execution circuit determining whether to access one of the SEAM loader module or the SEAM module comprises the execution circuit determining, based on a first variable identifying an availability of the SEAM module, whether to signal a failure of the SEAMCALL instruction when the operation object specifies the SEAM module, wherein the first variable is different from a second variable that identifies whether the SEAM loader module is available. 3. The processor of claim 1, wherein the execution circuit determining whether to access one of the SEAM loader module or the SEAM module comprises the execution circuit determining whether a mutex lock has been acquired, wherein the mutex is shared among multiple logical processors. 4. The processor of any one of claims 1 to 3, wherein the execution of the SEAMCALL instruction by the execution circuit further comprises the execution circuit performing the following operations: determining that the operation object specifies the SEAM loader module; and setting, based on the operation object, a first variable to indicate that, of the SEAM loader module and the SEAM module, the SEAM loader module is the more recently invoked by the logical processor, wherein, among the multiple logical processors provided by the processor, the first variable corresponds only to the logical processor. 5. 
The processor of claim 4, further comprising a translation lookaside buffer (TLB), wherein the execution circuit is further to execute a SEAM retirement instruction of the instruction set, comprising the execution circuit determining, based on the first variable, whether to flush the TLB. 6. The processor of any one of claims 1 to 3, further comprising a measurement result register, wherein the execution of the SEAMCALL instruction by the execution circuit includes the execution circuit invoking execution of the SEAM loader module to write a measurement result of the SEAM module to the measurement result register. 7. A system comprising: a memory; and a processor coupled to the memory, the processor comprising: a decoder, the decoder comprising circuitry for decoding an instruction set-based secure arbitration mode (SEAM) call (SEAMCALL) instruction, the SEAMCALL instruction comprising: a first field to provide an opcode to indicate that a logical processor is to transition from traditional virtual machine extensions (VMX) root operation; and a second field to provide an operation object to specify one of the following: a SEAM loader module to be loaded in a reserved range of the memory, wherein a range register of the processor stores information identifying the reserved range; or a SEAM module to be loaded in the reserved range by the SEAM loader module, the SEAM module to initiate SEAM of the processor; and an execution circuit coupled with the decoder for executing the SEAMCALL instruction, wherein the execution circuit determines whether to access one of the SEAM loader module or the SEAM module based on the operation object. 8. The system of claim 7, wherein the execution circuit determining whether to access one of the SEAM loader module or the SEAM module comprises the execution circuit determining, based on a first variable identifying an availability of the SEAM module, whether to signal a failure of the SEAMCALL instruction when the operation object specifies the SEAM module, wherein the first variable is different from a second variable that identifies whether the SEAM loader module is available. 9. The system of claim 7, wherein the execution circuit determining whether to access one of the SEAM loader module or the SEAM module comprises the execution circuit determining whether a mutex has been acquired, wherein the mutex is shared among multiple logical processors. 10. The system of any one of claims 7 to 9, wherein the execution circuit executing the SEAMCALL instruction further comprises the execution circuit performing the following operations: determining that the operation object specifies the SEAM loader module; and setting, based on the operation object, a first variable to indicate that, of the SEAM loader module and the SEAM module, the SEAM loader module is the more recently invoked by the logical processor, wherein, among the multiple logical processors provided by the processor, the first variable corresponds only to the logical processor. 11. 
The system of claim 10, the processor further comprising a translation lookaside buffer (TLB), wherein the execution circuit is further to execute a SEAM retirement instruction of the instruction set, including the execution circuit determining, based on the first variable, whether the TLB is to be flushed. 12. The system of any one of claims 7 to 9, the processor further comprising a measurement result register, wherein the execution circuit executing the SEAMCALL instruction comprises the execution circuit invoking execution of the SEAM loader module to write a measurement result of the SEAM module to the measurement result register. 13. A method for a processor, the method comprising: decoding an instruction set-based secure arbitration mode (SEAM) call (SEAMCALL) instruction, the SEAMCALL instruction comprising: a first field to provide an opcode to indicate that a logical processor is to transition from traditional virtual machine extensions (VMX) root operation; and a second field to provide an operation object to specify one of the following: a SEAM loader module loaded in a reserved range of system memory coupled to the processor, wherein a range register of the processor stores information identifying the reserved range; or a SEAM module loaded in the reserved range by the SEAM loader module, the SEAM module to initiate SEAM of the processor; and executing the SEAMCALL instruction, including determining whether to access one of the SEAM loader module or the SEAM module based on the operation object. 14. The method of claim 13, further comprising: initiating an authenticated code module (ACM) at the processor; and loading, using the ACM, the SEAM loader module in the reserved range. 15. The method of claim 14, further comprising: invoking execution of the SEAM loader module to load the SEAM module in the reserved range. 16. The method of claim 13, wherein determining whether to access one of the SEAM loader module or the SEAM module comprises determining, based on a first variable identifying an availability of the SEAM module, whether to signal a failure of the SEAMCALL instruction when the operation object specifies the SEAM module, the first variable being different from a second variable identifying whether the SEAM loader module is available. 17. The method of claim 13, wherein determining whether to access one of the SEAM loader module or the SEAM module comprises determining whether a mutex has been acquired, wherein the mutex is shared among a plurality of logical processors. 18. The method of claim 13, 16 or 17, wherein executing the SEAMCALL instruction further comprises: determining that the operation object specifies the SEAM loader module; and setting, based on the operation object, a first variable to indicate that, of the SEAM loader module and the SEAM module, the SEAM loader module is the more recently invoked by the logical processor, wherein, among the multiple logical processors provided by the processor, the first variable corresponds only to the logical processor. 19. The method of claim 18, wherein the processor includes a translation lookaside buffer (TLB), the method further comprising: executing a SEAM retirement instruction of the instruction set, including determining whether to flush the TLB based on the first variable. 20. 
The method of claim 13, 16 or 17, wherein the processor further includes a measurement result register, wherein executing the SEAMCALL instruction includes invoking execution of the SEAM loader module to write a measurement result of the SEAM module to the measurement result register. 21. A machine-readable medium comprising code which, when executed, causes a machine to perform the method of any one of claims 13 to 20. |
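Read together, claims 1 through 5 recite a dispatch that can be summarized with the following Python sketch. This is purely illustrative: the variable names, return strings and the non-blocking mutex acquisition are our assumptions about one plausible rendering of the claimed behavior, not the claimed hardware itself.

```python
from threading import Lock

# State shared by all logical processors (LPs).
p_seamldr_ready = False    # "second variable": is the SEAM loader module available?
seam_module_ready = False  # "first variable" of claim 2: is the SEAM module available?
p_seamldr_mutex = Lock()   # shared among the LPs, per claim 3
last_invoked = {}          # per-LP record of claim 4: "P_SEAMLDR" or "SEAM_MODULE"

def seamcall(lp, operand):
    """Dispatch a SEAMCALL whose operation object names P_SEAMLDR or the SEAM module."""
    if operand == "P_SEAMLDR":
        # The loader path takes a mutex so that only one LP is in P-SEAMLDR at a time.
        if not p_seamldr_ready or not p_seamldr_mutex.acquire(blocking=False):
            return "fail"
        last_invoked[lp] = "P_SEAMLDR"
        return "enter P-SEAMLDR (SEAM VMX root mode)"
    # The SEAM-module path checks its own readiness flag, independent of the loader's.
    if not seam_module_ready:
        return "fail"
    last_invoked[lp] = "SEAM_MODULE"
    return "enter SEAM module (SEAM VMX root mode)"
```

On the matching SEAM retirement instruction (claims 5, 11 and 19), the per-LP last_invoked record would indicate whether a TLB flush is needed, and the mutex taken on the loader path would be released.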
Apparatus, system and method for efficiently updating secure arbitration mode modulestechnical fieldThe present disclosure relates to computer systems, and more particularly—but not exclusively—to a secure arbitration mode for computing devices to construct and operate within a trust domain extension.Background techniqueModern processing devices employ disk encryption to protect data at rest. However, the data in memory is in clear text and is vulnerable to attack. Attackers can use a variety of techniques, including software-based and hardware-based bus scanning, memory scanning, hardware probing, and the like, to retrieve data from memory. This data from memory may include sensitive data such as privacy sensitive data, IP sensitive data, and also keys used for file encryption or communication.The data exposure is further exacerbated with the current trend of moving data and enterprise workloads to the cloud using virtualization-based hosting services provided by cloud service providers (CSPs). CSP customers (herein referred to as tenants) are increasingly demanding better security and isolation solutions for their workloads. Specifically, the tenant seeks a solution to enable the operation of the CSP-provided software external to the tenant's software's trusted computing base (TCB). The TCB of a system refers to a set of hardware, firmware and/or software components capable of influencing trust in the overall operation of the system.To provide these protections, some CSP systems remove virtual machine monitors (VMMs), also known as hypervisors, and other untrusted ones from the TCBs of virtual machines (VMs) managed by the VMM. Firmware, Software and Devices. VMs are the workloads of the individual tenants of the CSP. From both CSP and cloud tenant perspectives, both want confidentiality for VM workloads. To achieve this secure VM execution, the VM's memory and runtime processor state are kept private, integrity protected, and recovery protected against data exfiltration or tamper-based attacks. 
As CSPs continue to grow in number, size, and capability, it is expected that increasing emphasis will be placed on improving the efficiency of solutions that provide a secure execution environment.SUMMARY OF THE INVENTIONAccording to a first aspect of the present disclosure, there is provided a processor comprising: a decoder, the decoder comprising circuitry for decoding an instruction set based secure arbitration mode (SEAM) call (SEAMCALL) instruction, the The SEAMCALL instruction includes: a first field for providing an opcode to indicate that the logical processor is to transition from a traditional virtual machine extended root operation; and a second field for providing an operation object to specify one of the following: A SEAM loader module loaded in a reserved range of system memory coupled to the processor, wherein the processor's range register stores information identifying the reserved range; or to be executed by the SEAM loader module in the a SEAM module loaded in a reserved scope, the SEAM module initiating SEAM of the processor; and an execution circuit coupled with the decoder for executing the SEAMCALL instruction, wherein the execution circuit is based on the operation object to determine whether to access the SEAM loader module or one of the SEAM modules.According to a second aspect of the present disclosure, there is provided a system comprising: a memory; and a processor coupled to the memory, the processor comprising: a decoder comprising circuitry for performing an instruction-set-based The Secure Arbitration Mode (SEAM) call (SEAMCALL) instruction is decoded, and the SEAMCALL instruction includes: a first field for providing an opcode to indicate that the logical processor is to transition from a traditional virtual machine extension root operation; and a second A field that provides an operation object to specify one of the following: a SEAM loader module to be loaded in a reserved range of the memory where the processor's range register stores information identifying the reserved range; or a SEAM module to be loaded in the reserved scope by the SEAM loader module, the SEAM module initiating SEAM of the processor; and an execution circuit coupled to the decoder for executing the SEAMCALL instruction, Wherein, the execution circuit determines whether to access the SEAM loader module or one of the SEAM modules based on the operation object.According to a third aspect of the present disclosure, a method for a processor, the method comprising: decoding an instruction set-based secure arbitration mode (SEAM) call (SEAMCALL) instruction, the SEAMCALL instruction comprising: a first field , for providing an opcode to indicate that a logical processor is to transition from a traditional virtual machine extended root operation; and a second field for providing an operation object to specify one of the following: in a system coupled to the processor A SEAM loader module loaded in a reserved range of memory, wherein a range register of the processor stores information identifying the reserved range; or a SEAM module loaded in the reserved range by the SEAM loader module, so the SEAM module initiates SEAM of the processor; and executing the SEAMCALL instruction includes determining whether to access the SEAM loader module or one of the SEAM modules based on the operation object.Description of drawingsVarious embodiments of the present invention are illustrated by way of example and not by way of limitation in the accompanying drawings, in which:1A is a block diagram 
illustrating features of a computing system including a processor that supports Secure Arbitration Mode (SEAM) extensions to the Instruction Set Architecture (ISA), according to one embodiment.Figure IB is a block diagram illustrating features of a processor core of a processor that supports SEAM extensions to ISA, according to one embodiment.2 is a flow diagram illustrating features of a method of providing SEAM functionality, according to one embodiment.3 is a data diagram illustrating the operation of a persistent loader that provides a SEAM module, according to one embodiment.4 is a block diagram illustrating features of a virtual machine monitor (VMM) managed computing system that provides a trust control boundary, according to one embodiment.5 is a block diagram illustrating features of a processor-implemented trust domain extension (TDX) that facilitates operation of a SEAM module, according to one embodiment.6 is a flow diagram illustrating features of a state machine including Virtual Machine Extensions (VMX) and SEAM-based TDX transitions, according to one embodiment.7A-7E are sequence diagrams, each showing various pseudocode illustrating operations to facilitate provisioning of SEAM functions, according to respective embodiments.8A-8B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof, according to one embodiment.9A-9D are block diagrams illustrating exemplary specific vector friendly instruction formats, according to one embodiment.Figure 10 is a block diagram of a register architecture according to one embodiment of the invention.11A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline, according to one embodiment.11B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor, according to one embodiment.12A-12B illustrate block diagrams of a more specific exemplary in-order core architecture, which would be one of several logic blocks (including other cores of the same type and/or different types) in the chip.13 is a block diagram of a processor that can have more than one core, can have an integrated memory controller, and can have integrated graphics, according to one embodiment;14-17 are block diagrams of exemplary computer architectures.18 is a block diagram compared to using a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set, according to one embodiment.Detailed waysA processor architecture that utilizes trust domains (TDs) to provide isolation in virtualized systems is described. The techniques described herein may be implemented in one or more electronic devices. Non-limiting examples of electronic devices that may utilize the techniques described herein include mobile and/or stationary devices of any kind, such as cameras, cell phones, computer terminals, desktop computers, e-readers, fax machines, kiosks, laptops, etc. 
Top computers, netbook computers, notebook computers, Internet appliances, payment terminals, personal digital assistants, media players and/or recorders, servers (eg, blade servers, rack mount servers, combinations thereof, etc.), Set-top boxes, smart phones, tablet personal computers, ultra-portable personal computers, wired telephones, combinations thereof, and the like. More generally, the techniques described herein may be used in any of a variety of electronic devices that include processor circuitry and/or computer-readable instructions that provide security arbitration functionality.In the following description, numerous details are discussed in order to provide a more thorough description of embodiments of the present disclosure. However, it will be apparent to those skilled in the art that embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the embodiments of the present disclosure.Note that in the respective figures of the embodiments, signals are represented by lines. Some lines may be thicker to indicate a greater number of constituent signal paths, and/or have arrows at one or more ends to indicate the direction of information flow. This indication is not intended to be limiting. Rather, these lines are used in conjunction with one or more exemplary embodiments to help facilitate easier understanding of a circuit or logic unit. Any represented signal, dictated by design needs or preferences, may actually include one or more signals that may travel in either direction and that may be implemented using any suitable type of signal scheme.Throughout the specification, and in the claims, the term "connected" means a direct connection, such as an electrical, mechanical, or magnetic connection between connected things, without any intervening devices. The term "coupled" means a direct or indirect connection, such as a direct electrical, mechanical, or magnetic connection between connected things, or an indirect connection through one or more passive or active intermediate devices. The term "circuit" or "module" may refer to one or more passive and/or active components arranged to cooperate with each other to provide desired functionality. The term "signal" may refer to at least one current signal, voltage signal, magnetic signal, or data/clock signal. The meanings of "a" and "the" include plural referents. The meaning of "in" includes "in" and "on".The term "apparatus" may generally refer to a device depending on the context in which the term is used. For example, a device may refer to a stack of layers or structures, a single structure or layer, a connection of various structures with active and/or passive components, and the like. In general, a device is a three-dimensional structure with a plane along the x-y direction of an x-y-z Cartesian coordinate system and a height along the z direction. The plane of a device may also be the plane of a device that includes the device.The term "scaling" generally refers to converting a design (schematic and layout) from one process technology to another and subsequently reducing the layout area. The term "scaling" also generally refers to reducing the size of layouts and devices within the same technology node. 
The term "scaling" may also refer to adjusting the signal frequency (eg, slowing down or speeding up - ie, scaling down or scaling up, respectively) relative to another parameter (eg, power supply level).The terms "substantially", "approximately", "approximately", "approximately" and "approximately" generally mean within +/- 10% of the target value. For example, the terms "substantially equal to", "approximately equal to" and "approximately equal to" mean that there is no difference more than incidental between the things so described, unless otherwise indicated in the explicit context of their use. In the art, this difference typically does not exceed +/- 10% of the predetermined target value.It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in other orientations than those illustrated or otherwise described herein.Unless otherwise indicated, the use of the ordinal adjectives "first," "second," and "third," etc. to describe a common object merely indicates that different instances of similar objects are cited, and is not intended to imply that the objects so described must be in be in a given sequence in time, space, rank, or in any other way.The terms "left", "right", "front", "rear", "top", "bottom", "top", "bottom", etc. (if any) in the specification and in the claims are used for for descriptive purposes and not necessarily for describing permanent relative positions. For example, the terms "above", "below", "front side", "rear side", "top", "bottom", "above", "under" as used herein " and "on" refer to the relative position of a component, structure or material relative to other referenced components, structures or materials within the device, where such physical relationship is notable. These terms are used herein for descriptive purposes only, and are primarily used in the context of the device's z-axis, and thus may be relative to the device's orientation. Thus, a first material that is "above" a second material in the context of the figures provided herein may also be "below" the second material if the device is oriented reversed relative to the context of the figures provided. In the context of materials, one material disposed above or below another material may be in direct contact or may have one or more intervening materials. Additionally, a material disposed between two materials may be in direct contact with the two layers or may have one or more intervening layers. In contrast, the first material "on" the second material is in direct contact with the second material. A similar distinction is made in the context of component assembly.The term "between" may be used in the context of the z-axis, x-axis, or y-axis of a device. A material between two other materials may be in contact with one or both of these materials, or it may be separated from both of the other two materials by one or more intervening materials. A material "between" two other materials may thus be in contact with either of the other two materials, or it may be coupled to the other two materials through an intervening material. 
A device between two other devices may be directly connected to one or both of these devices, or it may be separated from both other devices by one or more intervening devices. As used throughout this specification and in the claims, a list of items joined by the terms "at least one of" or "one or more of" can mean any combination of the listed terms. For example, the phrase "at least one of A, B, or C" can mean A; B; C; A and B; A and C; B and C; or A, B and C. It is noted that those elements of the figures that have the same reference numerals (or names) as elements in any other figure may operate or function in any manner similar to that described, but are not limited thereto. In addition, the various elements of combinatorial and sequential logic discussed in this disclosure may relate both to physical structures (eg, AND, OR, or XOR gates) and to logical structures that are a synthesized or otherwise optimized collection of devices implementing a Boolean equivalent of the logic in question. In various embodiments, the CSP system deploys one or more trust domain extensions (TDX) to meet security goals, eg, via the use of a multi-key total memory encryption (MK-TME) engine of the memory controller that provides multi-key memory encryption and integrity. MK-TME technology refers to the ability to provide the operating system or VMM with different unique encryption keys to encrypt pages of physical memory associated with different workloads, such as different tenants, different applications, different devices, etc. To support TDX, some embodiments provide or otherwise operate on an MK-TME (or other) engine that employs specific keys that can only be used for TDX. In some embodiments, TDX includes technology for extending virtual machine extensions (VMX) with a class of virtual machine guests referred to herein as trust domains (TDs). The TD operates in a processor mode that protects the confidentiality of its memory contents and its processor state relative to other software, including the hosting VMM, unless explicitly shared by the TD itself. To coordinate the protections described above, a trust domain resource manager (TDRM), a VMM software extension, is deployed for the management and support of TDX operations. A VMM working as a TDRM launches and manages both TDs and "traditional" VMs. Therefore, a VMM working as a TDRM is a full VMM from the perspective of a traditional VM. In some embodiments, as will be explained, a TDRM is restricted only with regard to TDs managed by the TDRM. Secure arbitration mode (SEAM) is an instruction set architecture (ISA) extension that implements TDX. This mode of the processor (ie, SEAM) hosts resource arbitration software (the "SEAM module") that acts as a trust arbiter between the TDRM and the TD. The SEAM module, which invokes SEAM-specific libraries to perform SEAM, manages resource assignment to the TD. The SEAM module has access to certain privileged instructions that SEAM can use to build the TDX from which the TD is started. The SEAM module also manages the creation, deletion, and entry/exit of TDs, and the secure use of resources assigned to TDs, such as memory or input/output (I/O) devices. Thus, by trusting SEAM rather than the VMM (or TDRM), the TD is secured and protected. 
For example, the TD determines whether a part of a program is valid and runs inside SEAM, not outside SEAM. In one embodiment, the processor deploys a SEAM module to enter SEAM operations, from which TDs are initiated to enable secure execution of tenant workloads. In some embodiments, the SEAM module invokes SEAM-specific libraries to support resource management for the TD and to become a trusted arbiter between the TDRM/VMM and the TD. The processor includes hardware registers to store information identifying reserved ranges of memory. A reserved area of memory stores code and data for SEAM modules, including SEAM-specific libraries. The processor also includes a processor core coupled with the hardware registers. To facilitate efficient implementation of SEAM functionality, some embodiments variously store a SEAM loader module in a reserved range of such memory, the execution of which loads the SEAM module into the reserved range. In turn, the SEAM loader module itself is initially loaded into the reserved range of system memory by an authenticated code module (ACM); eg, such loading occurs during system startup. To avoid confusion, the SEAM loader module in system memory is referred to herein as the "persistent SEAM loader" (or "P-SEAMLDR"), while the ACM, which is executed to load this P-SEAMLDR in system memory, is referred to herein as the non-persistent SEAM loader (or "NP-SEAMLDR"). By providing P-SEAMLDR in system memory, in conjunction with certain state variables and adaptations of the SEAM instructions of the instruction set, some embodiments variously avoid the requirement that the processors be in their respective quiescent (eg, waiting for SIPI) states in order for a processor core to update the SEAM module. In an example embodiment, NP-SEAMLDR is initiated, eg, at startup, to load P-SEAMLDR in system memory, where P-SEAMLDR will persist, eg, during execution of any of various VMM and/or other software processes. In some embodiments, the processor maps the image of the NP-SEAMLDR ACM into physical memory, and executes a get secure (GETSEC) leaf function, referred to herein as the GETSEC[ENTERACCS] instruction, to initiate NP-SEAMLDR. Upon execution of the GETSEC[ENTERACCS] instruction, the processor unlocks the hardware registers on the logical processor from which the NP-SEAMLDR ACM was launched, which unlocks the reserved range of memory in which the P-SEAMLDR module is loaded. An ACM is a processor-authenticated firmware module that executes from a protected environment created in the processor core cache. In some embodiments, the NP-SEAMLDR ACM stores the P-SEAMLDR module and a manifest in the reserved range of memory. The manifest, which may be located in the header of the NP-SEAMLDR ACM, may be generated via a hash algorithm operating on specific information associated with the P-SEAMLDR module, such as a combination of the P-SEAMLDR module, the security version number (SVN) of the P-SEAMLDR module, and the P-SEAMLDR module identifier. Subsequently, the P-SEAMLDR module installs the SEAM module in the reserved range of system memory, as further detailed herein. In some embodiments, P-SEAMLDR creates a SEAM virtual machine control structure (VMCS) in the reserved range of memory to enable the state of the VMM to be stored in the SEAM VMCS when the logical processor transitions to SEAM mode. Additionally or alternatively, the SEAM VMCS may be used to store SEAM state, which may be provided to be loaded into a logical processor for execution in SEAM. 
In one such embodiment, the logical processor can use the data in such a SEAM VMCS to restore the VMM state into the processor core when SEAM is exited.

In some embodiments, NP-SEAMLDR executes in authenticated code (AC) mode and is authenticated against a manifest signature. The key used to verify the manifest signature is embedded in the hardware of the processor core. P-SEAMLDR also uses manifest signatures to authenticate SEAM modules loaded into the reserved range of memory. P-SEAMLDR then records the measurement results and identity of the SEAM module into a set of hardware measurement result registers. In some embodiments, some or all of these measurement result registers can only be written to by P-SEAMLDR, thereby creating a measured environment to ensure tamper-free execution. Once SEAM has been deployed and set up within the reserved range of memory, the processor core re-locks the reserved range of memory by restoring the lock on the hardware registers.

Once SEAM has been deployed by the loading process just discussed, the SEAM module enters the SEAM VMX root mode from which TDX operates. The SEAM module calls SEAM-specific libraries to execute certain privileged instructions for building the TDX environment from which TDs are launched. In this way, the SEAM module creates a TD virtual machine (or "TD" for simplicity). In some embodiments, for each TD created by the SEAM module, the SEAM module programs various information into various fields of the TD VMCS that the SEAM module creates for that TD. By way of illustration and not limitation, such information includes a TD host key identifier (TD-HKID) and a secure extended page table (EPT) pointer (or SEC_EPTP), in addition to the pointer to the EPT herein referred to as the shared EPT. In some embodiments, outside of SEAM, VM entry does not consult some or all of this information, which is, e.g., reserved specifically for TDX and TD creation, so the TDRM/VMM is unaware of this additional information.

When the SEAM module executes the VM entry, the processor uses some or all of this information to enter the TD. For example, the processor (e.g., the processor's memory controller) utilizes the EPT pointed to by SEC_EPTP to translate a guest physical address of the first trust domain to a host physical address of the memory. Once the SEAM module is loaded in the reserved range of memory and operational as SEAM, the processor transfers virtual root mode operational control to SEAM, as a virtual machine exit, in response to VMM (or TDRM) execution of the SEAMCALL instruction. In other words, the legacy VMX root mode passes control to the SEAM VMX root mode. In SEAM VMX root mode, the SEAM module manages entering and exiting the TD.

Through the TD VMCS, the SEAM module can request the processor to cause the TD's VM to exit when certain instructions are executed or when certain events and conditions occur. If an event triggers an unconditional exit, the VM exit transfers control from SEAM VMX non-root mode to SEAM VMX root mode. In some cases, such as in response to a system interrupt, a VM exit also triggers a SEAM exit, so control is further transferred to traditional VMX root mode.

There are many advantages to using SEAM modules and associated supporting hardware technologies to build and operate TDX from SEAM. For example, a CSP (or a processor vendor in some embodiments) implements SEAM and differentiates software functions in SEAM that are built and evolved at the speed of business requirements.
In addition, CSPs can generate open source code for review, obtain certifications, implement SEAM in software languages of their choice, and more. The use of SEAM also enables new usage models, such as the use of secure enclaves outside the TD and/or the use of VMMs within the TD, which would require several additional ISA instructions without SEAM.

In addition to operating in SEAM VMX root mode, a SEAM module loaded in the reserved range of memory is further hardened using software and hardware protection mechanisms provided by the processor. These mechanisms include, for example, execute disable (XD), virtual memory (e.g., paging), control-flow enforcement technology (CET), protection keys (PK), and the like. Similarly, TDs managed and invoked by the SEAM module from the SEAM VMX root mode can also use these hardware protection techniques. The SEAM module ensures that the VMM/TDRM cannot hide, virtualize, or in any other way prevent a TD's use of these technologies.

In various embodiments, the SEAM function enables the platform to move further away from hard partitioning of resources in favor of flexible sharing of platform resources. Additionally or alternatively, the SEAM function supports sizing and/or partitioning of resources based on scaling requirements, e.g., maximum number of TDs, maximum size of TDs, and the like. Additionally or alternatively, implementing SEAM-based TDX as software reduces the complexity of the ISA compared to building SEAM functionality into processor microcode (where the hardware also evolves at a slower rate).

Some embodiments provide a persistent SEAM loader module in a protected area of system memory which, in combination with adapted SEAM instructions of the instruction set, facilitates improved SEAM functionality. For example, such embodiments enable the SEAM module to be updated without requiring each of the multiple logical processors to be in a respective sleep (e.g., wait-for-SIPI) power state. Certain features of various embodiments are described with reference to implementation details for SEAM in accordance with the Intel™ processor architecture and/or instruction set. However, some embodiments, which are not limited in this regard, additionally or alternatively provide corresponding improvements to SEAM functionality according to any of various other architectures and/or instruction sets.

FIG. 1A is a block diagram illustrating an example computing system 100 that includes a processor 112 supporting the secure arbitration mode (SEAM) extension of the instruction set architecture (ISA) to facilitate trust domain extensions (TDX) operation. System 100 is an example of an embodiment in which a persistent SEAM loader (P-SEAMLDR) module loaded in a reserved range of system memory enables loading and, e.g., updating of a SEAM module in the reserved range. For example, such a P-SEAMLDR module in system memory facilitates efficient updating of SEAM modules.

Computing system 100 provides hardware (and, in some embodiments, executable instructions) that support operation in SEAM. SEAM in turn provides functionality to support TDX operation on virtualization server 110, which serves, for example, one or more client devices, such as the illustrative client devices 102A, 102B, and 102C shown. As shown in FIG. 1A, computing system 100 includes network interface 104 and shared hardware devices 160A and 160B. Virtualization server 110 includes, but is not limited to, processor 112 and memory device 130.
The processor 112 executes a virtual machine monitor (VMM) 140, which is extended with a TD resource manager (TDRM) 142. VMM 140 controls one or more virtual machines (VMs) 155. TDRM 142 provides resource assignments to VMs 155 and, via SEAM, to one or more TDs 150A and 150B.

The memory device 130 stores, among other data and information, a guest page table 132, an extended page table (EPT) 134, a VMCS 138A associated with one or more VMs 155, and a TD VMCS 138B. The memory device 130 also includes a reserved range 136 into which the SEAM loader module P-SEAMLDR 135 is loaded, which in turn loads the SEAM module 137 into the reserved range 136, as discussed herein. In one embodiment, P-SEAMLDR 135 and SEAM module 137 each include one or more of SEAM-specific libraries, manifests, and other code and data associated with SEAM for constructing and operating TDs. The one or more range registers 116 include a SEAM range register (SEAMRR) that is configured with the reserved range 136 of the memory device 130, e.g., with a base address and mask, or with start and end addresses of the reserved range 136. Memory device 130 includes dynamic random access memory (DRAM), synchronous DRAM (SDRAM), static memory (e.g., static random access memory (SRAM)), flash memory, data storage devices, or other types of memory devices. For brevity, memory device 130 is also referred to herein simply as "memory."

In various embodiments, processor 112 includes one or more processor cores 114, one or more range registers 116, measurement result registers 117, cache 118, security version number (SVN) registers 121, memory controller 120, write machine specific register (WRMSR) microcode 160, and memory check (MCHECK) firmware 162. The memory controller 120 also includes an MK-TME engine 126 (or other memory encryption engine) and a translation lookaside buffer (TLB) 128 to store address translation information and/or other state of the VMM or of the secure arbitration mode.

In some embodiments, MK-TME engine 126 encrypts data stored to memory device 130 and decrypts data retrieved from memory device 130 using an appropriate encryption key, e.g., a unique key assigned to the VM or TD that stores the data to memory device 130. Internally, the MK-TME engine 126 maintains an internal table that holds the key and encryption mode associated with each key ID (e.g., the key designated as KeyID 0 (TME), or a key ID designated as not encrypted). Entries in this table can be programmed using the processor configuration (PCONFIG) instruction; a toy model of such a table appears below. In various embodiments, the SEAM module 137, once operating in SEAM VMX root mode, configures TD-specific encryption keys that the MK-TME engine 126 then uses for secure memory operations from the SEAM-operated TDs. Thus, while the MK-TME engine 126 has access to TD-specific encryption keys, once created, they are not accessible to the TDRM 142/VMM 140 in non-SEAM operation.

In some embodiments, the MK-TME engine 126 also provides integrity and replay protection. The strength of integrity protection, and whether memory or processor state can be replay protected, is processor implementation dependent. Additionally, to support TDX, the MK-TME technology provides specific keys that are only used for TDs. Alternatively or additionally, MK-TME technology provides a mechanism to partition keys such that a subset of keys is reserved for use only by TDX technology. The physical pages of memory 130 are encrypted with one of the encryption keys managed by MK-TME engine 126.
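A toy model of the per-key-ID table mentioned above is shown here. This is a speculative sketch for illustration only: the entry layout, the mode names, and the pconfig_program() stand-in are hypothetical, and real PCONFIG operand encodings and table management are defined by the architecture, not by C code.

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical encryption modes for a key ID. */
    typedef enum {
        KEY_MODE_TME,        /* platform default key (e.g., KeyID 0) */
        KEY_MODE_MKTME,      /* per-workload key                     */
        KEY_MODE_NO_ENCRYPT  /* key ID configured as not encrypted   */
    } key_mode_t;

    typedef struct {
        uint8_t    key[32];  /* encryption key for this key ID */
        key_mode_t mode;
        bool       valid;
    } key_table_entry_t;

    #define NUM_KEY_IDS 64   /* illustrative; platform-dependent */
    static key_table_entry_t key_table[NUM_KEY_IDS];

    /* Toy stand-in for PCONFIG: program the key and mode for one
     * key ID. Returns false for an out-of-range key ID. */
    static bool pconfig_program(unsigned key_id,
                                const uint8_t key[32], key_mode_t mode)
    {
        if (key_id >= NUM_KEY_IDS)
            return false;
        for (int i = 0; i < 32; i++)
            key_table[key_id].key[i] = key[i];
        key_table[key_id].mode  = mode;
        key_table[key_id].valid = true;
        return true;
    }

As the surrounding text notes, in a real system the analogous programming of TD-private keys is only permitted from SEAM, which is exactly the property this simple table model cannot capture on its own.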
In one embodiment, some or all of these encryption keys are each associated with a respective key identifier (ID) that is appended to the physical memory address of a physical page of memory, such as the physical storage of a host server. In the case where the key ID is appended to the physical memory address, a software-requested memory transaction is invalidated unless the memory transaction request (e.g., a request for a memory read or write) includes both the physical memory address of the page and the correct key ID of the encryption key used to encrypt/decrypt physical pages of memory. A sketch of this address tagging appears after the present passage.

Each client device is, for example, one of a remote desktop computer, a tablet device, a smartphone, another server, a thin/lean client, and so on. In various embodiments, each of some or all of such client devices executes a respective one or more applications on virtualization server 110, in one or more of TDs 150A and 150B and/or in one or more of VMs 155, where the VMs run outside the trusted computing base (TCB) of each TD. In one such embodiment, software other than SEAM module 137 also runs outside of a TD's TCB. VMM 140 executes a virtual machine environment that utilizes the hardware capabilities of the host, and executes one or more guest operating systems that support client applications running from client devices 102A, 102B, and 102C, respectively.

In some embodiments, a single TD, such as TD 150A, provides a secure execution environment to a single client 102A and supports a single guest OS. In other embodiments, a TD supports multiple tenants, each running in a separate virtual machine and facilitated by a tenant VMM running inside the TD. TDRM 142 in turn controls the TD's use of system resources, such as memory 130, processor 112, and shared hardware device 160B. TDRM 142 acts as the host and has control over processor 112 and other platform hardware. TDRM 142 assigns logical processor(s) to software in a TD (e.g., TD 150A), but does not access the execution state of the TD on the assigned logical processor(s). Similarly, TDRM 142 assigns physical memory and I/O resources to the TD, but cannot access or spoof the TD's memory state, due to separate encryption keys and other integrity/replay controls on the memory.

TD 150A represents a software environment that supports, for example, a software stack including one or more VMMs, a guest operating system, and/or various application software hosted by the guest OS(s). TD 150A operates independently of other TDs and uses the logical processor(s), memory, and I/O assigned by TDRM 142 and verified by SEAM module 137 for SEAM. Software executing in TD 150A operates with reduced privileges, thereby allowing TDRM 142 to maintain control of platform resources. On the other hand, TDRM 142 cannot access the data associated with a TD, affect the confidentiality or integrity of the TD in some other way, or replay data into the TD.

More specifically, TDRM 142 (which includes VMM 140) manages the key IDs associated with encryption keys. TDRM 142 assigns key IDs, while SEAM module 137 assigns keys to TDs and programs the associated key IDs of those keys into the secure VMCS. A key ID that can be allocated for use by a TD is called a private key ID. The processor hardware enforces that keys for private key IDs are not configured by the VMM 140. In various embodiments, TDRM 142 hosts the TDs and has full control over the cores and other platform hardware. TDRM 142 assigns logical processor(s) to software in the TD.
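The key-ID tagging described above can be made concrete with the following minimal sketch. It is hypothetical in every particular: the bit positions, the number of key-ID bits, and the transaction check are placeholders chosen for illustration, since the actual placement of key-ID bits in the physical address is platform-specific.

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical layout: key ID carried in the upper bits of the
     * physical address. Real bit positions are platform-specific. */
    #define KEYID_SHIFT 46
    #define KEYID_BITS  6
    #define KEYID_MASK  (((1ULL << KEYID_BITS) - 1) << KEYID_SHIFT)

    static uint64_t tag_phys_addr(uint64_t pa, unsigned key_id)
    {
        return (pa & ~KEYID_MASK) | ((uint64_t)key_id << KEYID_SHIFT);
    }

    static unsigned keyid_of(uint64_t tagged_pa)
    {
        return (unsigned)((tagged_pa & KEYID_MASK) >> KEYID_SHIFT);
    }

    /* A memory transaction is honored only if the request carries
     * the key ID with which the page was configured; otherwise the
     * transaction is invalidated, as described in the text. */
    static bool transaction_ok(uint64_t tagged_pa, unsigned page_key_id)
    {
        return keyid_of(tagged_pa) == page_key_id;
    }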
However, the TDRM 142 has no access to the execution state of a TD on the assigned logical processor(s). Similarly, TDRM 142 assigns physical memory and I/O resources to TDs, but cannot access a TD's memory state, due to the use of a unique private encryption key configured by SEAM module 137 for each TD. Software executing in the TD operates with reduced privileges so that the TDRM 142 maintains control of platform resources. However, as TDRM 142 allocates resources, SEAM module 137 ensures that policies associated with TDX execution are enforced, and in this manner acts as a policy enforcer.

The VMM 140 also assigns logical processors, physical memory, encryption key IDs, I/O devices, and the like to the TDs, but does not access the execution state of a TD and/or data stored in the physical memory assigned to a TD. For example, when a write is executed, the MK-TME engine 126 encrypts the data and generates an integrity check value before moving it from one or more range registers 116 or cache 118 to memory 130. Some embodiments also include anti-replay measures as part of generating the integrity check value. Conversely, when data is moved from memory 130 to processor 112 following a read or write command, MK-TME engine 126 decrypts the data and verifies its integrity with the associated integrity check value. Some embodiments also check anti-replay measures in the integrity check value.

Some embodiments provide a processor core (e.g., one of the cores 114) whose circuitry executes one or more instructions based on an instruction set supporting SEAM functionality. For example, such an embodiment adapts the SEAM call (SEAMCALL) instruction to transition the logical processor into the secure arbitration mode. Alternatively or additionally, such embodiments extend and/or otherwise adapt the SEAM exit (SEAMEXIT) instruction to transition the logical processor out of the secure arbitration mode, e.g., back to a traditional VMM mode.

By way of illustration and not limitation, execution of the SEAMCALL instruction in one embodiment determines whether to access a particular one of P-SEAMLDR 135 or SEAM module 137 in reserved range 136. In one such embodiment, the SEAMCALL instruction includes an operand (referred to herein as an LDR-TDX operand) that identifies a particular one, and only one, of P-SEAMLDR 135 or SEAM module 137 as the target of the SEAMCALL instruction.

In various embodiments, execution of the SEAMCALL instruction or the SEAMEXIT instruction is conditioned on, or otherwise performed with reference to, the variable SEAM_READY 181, which identifies whether a given function of the SEAM module 137 is currently available. Alternatively or additionally, such execution is conditioned on, or otherwise performed with reference to, another variable, P_SEAMLDR_READY 182, which identifies whether a given function of P-SEAMLDR 135 is currently available. Alternatively or additionally, access to P-SEAMLDR 135 on behalf of a given logical processor is contingent on acquiring a mutex lock, represented, for example, by the variable P_SEAMLDR_MUTEX 183, by which access to P-SEAMLDR 135 is conditionally shared among multiple logical processors. By way of illustration and not limitation, the value of P_SEAMLDR_MUTEX 183 at a given time indicates whether any next access to P-SEAMLDR 135 is to be blocked, at least until the current access (by a different logical processor) to P-SEAMLDR 135 is complete.
In one such embodiment, P_SEAMLDR_MUTEX 183 stores a binary flag indicating the current (un)availability of P-SEAMLDR 135 or, alternatively, an identifier of the logical processor, if any, on whose behalf P-SEAMLDR 135 is currently being accessed.

In some embodiments, the most recent access to a particular one of P-SEAMLDR 135 or SEAM module 137 by a given logical processor is indicated by a variable (e.g., a binary flag value) specific to that logical processor. By way of illustration and not limitation, each of flags 184 corresponds to a respective different logical processor provided by cores 114, wherein, for a given logical processor, the corresponding "inP_SEAMLDR" flag of flags 184 identifies whether P-SEAMLDR 135 is currently being accessed on behalf of that logical processor. In the example embodiment shown, SEAM_READY 181, P_SEAMLDR_READY 182, P_SEAMLDR_MUTEX 183, and flags 184 are maintained in memory 130. In alternative embodiments, some or all of such variables are instead maintained, e.g., in any of various appropriate registers of processor 112. A toy model of this locking protocol is sketched after the description of FIG. 1B below.

FIG. 1B is a block diagram illustrating an example processor core of a processor of the computing system of FIG. 1A, according to one embodiment. In the embodiment shown in FIG. 1B, each processor core 114 includes a cache 118A (e.g., one or more levels of cache), a page miss handler (PMH) 122, a PMH control register 123, hardware virtualization support circuit 180, and hardware registers 115. Hardware registers 115 include, for example, several model specific registers 115A (or MSRs) and control registers 115B (e.g., CR1, CR2, CR3, etc.). In some embodiments, when reference is made herein to cache 118 and range registers 116, the reference is understood to additionally or alternatively include cache 118A and the hardware registers of one or more processor cores 114.

In various embodiments, cache 118A is loaded, via execution of the GETSEC[ENTERACCS] instruction, with the authenticated code module NP-SEAMLDR ACM 170, which will load P-SEAMLDR 135. This NP-SEAMLDR ACM 170 is in effect a non-persistent SEAM loader that directs the load of P-SEAMLDR 135 and associated data into the reserved range 136 of the memory device 130 (e.g., memory). In turn, P-SEAMLDR 135 then loads SEAM module 137 (and associated data) into reserved range 136. In other embodiments, the NP-SEAMLDR ACM 170 is, for example, security logic of the processor core 114, such as logic embedded in hardware or microcode, or a security controller embedded in the processor 112.

In some embodiments, processor core 114 executes instructions to run several hardware threads, also referred to as logical processors, including first logical processor 119A, second logical processor 119B, and so on, up to Nth logical processor 119N. In one embodiment, the first logical processor 119A executes the VMM 140. In various embodiments, several VMs 155 are executed and controlled by VMM 140.
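The following user-space model illustrates the serialization described above, with P_SEAMLDR_MUTEX and the per-logical-processor inP_SEAMLDR flags. It is a toy illustration rather than the hardware protocol: the lock encoding (LOCK_FREE), the number of logical processors, and the function names are all hypothetical.

    #include <stdatomic.h>
    #include <stdbool.h>

    #define NUM_LPS   8      /* illustrative number of logical processors */
    #define LOCK_FREE (-1)   /* hypothetical "mutex not held" encoding    */

    /* P_SEAMLDR_MUTEX modeled as the ID of the holding LP, or LOCK_FREE. */
    static atomic_int p_seamldr_mutex = LOCK_FREE;

    /* Per-LP inP_SEAMLDR flags: set while that LP is inside P-SEAMLDR. */
    static atomic_bool in_p_seamldr[NUM_LPS];

    /* Try to enter P-SEAMLDR on behalf of logical processor lp. */
    static bool p_seamldr_try_enter(int lp)
    {
        int expected = LOCK_FREE;
        if (!atomic_compare_exchange_strong(&p_seamldr_mutex, &expected, lp))
            return false;                      /* another LP holds the mutex */
        atomic_store(&in_p_seamldr[lp], true); /* record most-recent target  */
        return true;
    }

    /* Leave P-SEAMLDR: clear this LP's flag, then release the mutex. */
    static void p_seamldr_leave(int lp)
    {
        atomic_store(&in_p_seamldr[lp], false);
        atomic_store(&p_seamldr_mutex, LOCK_FREE);
    }

The compare-and-swap makes the two outcomes of the text visible: a caller either finds the mutex free and proceeds, or observes that a different logical processor is mid-access and is blocked until that access completes.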
In some embodiments, TDRM 142 schedules TDs for execution on a logical processor of one of processor cores 114. In addition to the TDX-based client virtual machines, the virtualization server 110 executes one or more VMs 155 external to the TDs for one or more client devices 102A-C. Whereas software external to a TD's trusted computing base, such as TDRM 142 and VMM 140, cannot access the physical memory pages allocated to the TD and/or the execution state of the TD, a VM operating outside a TD receives no such protection from VMM 140.

In some embodiments, MK-TME engine 126 prevents such access by encrypting data moving between processor 112 and memory 130 with one or more shared encryption keys. The term "shared" refers to a key that is accessible to VMM 140, as distinct from a key for a private key ID configured by SEAM module 137 for assignment to a TD. In some embodiments, PMH 122 imposes restrictions on the use of private key IDs by the VMM/TDRM or VMs on cores 114. For example, PMH 122 enforces that private key IDs can be associated with read and write requests sent to MK-TME engine 126 only when the logical processor is executing in SEAM mode (root or non-root mode). If such restricted key IDs are used outside of SEAM mode, they cause a fault, and the read or write transaction is aborted. A TD cannot choose which private key ID it uses; the key ID is configured by the SEAM module in the VMCS, and the hardware uses the programmed TD-HKID when generating accesses to the TD's private memory. The processor 112 also restricts the PCONFIG instruction so that a private key ID can only be programmed with a key when operating from the SEAM module 137.

Additionally or alternatively, in various embodiments, one or more of the unrestricted keys are shared. A shared key can be accessed by two or more entities, such as TDs and VMs running outside of the TDX environment. The shared key is used to access one or more shared structures, such as shared hardware devices 160A and 160B, which are, for example, printers, keyboards, mice, monitors, network adapters, routers, and the like. In some embodiments, MK-TME engine 126 encrypts data stored to memory using the shared key associated with the shared key ID. The shared key ID is used by system software, including software in SEAM, and by devices for direct memory access (DMA) to memory. Therefore, a TD can use the shared key ID to communicate with the VMM or with other VMs or devices.

In some cases, a TD operates to protect the confidentiality of data transferred to a device, such as data stored on a hard drive. Since data stored to shared memory is accessible to all software, the TD software first encrypts such data with a specific key (e.g., a disk encryption key) before storing the data in memory with the shared key ID. Thus, when the VMM reads this data, it is decrypted with the shared key; however, what is recovered is content encrypted by the disk encryption key, so the VMM cannot access the actual data. The TD also associates an integrity check value with this encrypted data so that subsequent attempts to tamper with the data can be detected. In one embodiment, shared hardware device 160A is connected to virtualization server 110 via network interface 104. In another embodiment, the shared hardware device is local to virtualization server 110, e.g., as shown by shared hardware device 160B.

Hardware virtualization support circuitry 180 supports virtualized execution of operating systems, applications, and other software by computing system 100. Hardware virtualization support circuitry 180 includes virtual machine extension (VMX) support by providing two execution modes: VMX root mode and VMX non-root mode.
VMX root mode allows executing software to have extensive control over the computing device 100 and its hardware resources. Accordingly, the VMM 140 or host operating system (OS) executes in VMX root mode. VMX non-root mode restricts access to certain hardware instructions while still implementing the normal ring/privilege system of the processor core 114. One or more guest OSes (e.g., of VMs) execute in VMX non-root mode. These guest OSes execute in ring zero, similar to executing without virtualization. Hardware virtualization support circuitry 180 also supports EPT 134, which is implemented as hardware-assisted second-level page address translation. The hardware virtualization support circuit 180 is implemented, for example, with VT-x technology. In some embodiments, as will be discussed with reference to FIG. 6, the SEAM VMX root mode is designed to support TDX operation, which is entered and exited for a given TD using the SEAMCALL and SEAMEXIT instructions.

Some embodiments are not limited to computer systems. Alternative embodiments of the present disclosure may be used in other devices, such as handheld devices and embedded applications. Some examples of handheld devices include cellular telephones, Internet Protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications may include microcontrollers, digital signal processing devices (DSPs), systems on chips, network computers (NetPCs), set-top boxes, network hubs, wide area network (WAN) switches, or any other system capable of executing one or more instructions of an embodiment.

One embodiment is described in the context of a single-processing-device desktop or server system, but alternative embodiments are included in a multi-processing-device system. Computing system 100 is an example of a "hub" system architecture. Computing system 100 includes a processor 112 to process data signals. The processor 112 includes, as illustrative examples, a complex instruction set computer (CISC) microprocessor device, a reduced instruction set computing (RISC) microprocessor device, a very long instruction word (VLIW) microprocessor device, a processing device implementing a combination of instruction sets, or any other processing device, such as a digital signal processing device. The processor 112 is coupled to a processing device bus that communicates data signals between the processor 112 and other components in the computing system 100, such as memory device 130 and/or secondary storage 118, which store instructions, data, or any combination thereof. Other components of computing system 100 include graphics accelerators, memory controller hubs, I/O controller hubs, wireless transceivers, a flash basic input/output system (BIOS), network controllers, audio controllers, serial expansion ports, I/O controllers, and more.

To facilitate efficient SEAM functionality, e.g., including supporting updates to SEAM module 137 while one or more logical processors remain in their respective power states, some embodiments provide an adapted flow for executing a SEAMCALL instruction to access P-SEAMLDR 135 in reserved range 136. For example, decoder 195 of processor core 114 includes circuitry to decode the SEAMCALL instruction, which is based on instruction set 191.
Execution unit 190 of processor core 114 includes circuitry to variously execute one or more decoded instructions based on (e.g., according to or otherwise compatible with) instruction set 191, according to one embodiment. By way of illustration and not limitation, instruction set 191 supports the SEAMCALL instruction to access a specified one of a persistent SEAM loader module or a SEAM module (e.g., either P-SEAMLDR 135 or SEAM module 137), each of which is, for example, loaded in a reserved range of system memory. Alternatively or additionally, the instruction set 191 supports the SEAMEXIT instruction to exit the logical processor from the secure arbitration mode. In one example embodiment, instruction set 191 includes one or more instructions whose execution is dependent on, or otherwise based on, access to SEAM_READY 181, P_SEAMLDR_READY 182, P_SEAMLDR_MUTEX 183, and/or flags 184.

FIG. 2 illustrates features of a method 200 of utilizing a SEAM loader module to provide secure arbitration functionality in system memory, according to one embodiment. Method 200 is performed, for example, with circuitry that provides some or all of the functionality of virtualization server 110.

As shown in FIG. 2, method 200 includes (at 210) starting a non-persistent SEAM loader (NP-SEAMLDR) ACM at a processor, e.g., during startup of system 100. With the NP-SEAMLDR ACM activated at 210, method 200 loads (at 212) the persistent SEAM loader module P-SEAMLDR in a reserved range of memory coupled to the processor. In one embodiment, a range register of the processor (e.g., the SEAMRR register of one or more range registers 116) stores information identifying the reserved range. The method 200 also includes (at 214) the P-SEAMLDR module loading a SEAM module (e.g., SEAM module 137) in the reserved range of memory, the SEAM module providing functionality to implement a secure arbitration mode of the logical processor.

Following the loads at 212 and 214, method 200 includes (at 216) the core of the processor executing a SEAMCALL instruction that is based on the instruction set. In one such embodiment, the SEAMCALL instruction (which is formatted, for example, according to a SEAMCALL instruction type of instruction set 191) includes an opcode that instructs the logical processor which provides the SEAMCALL instruction to transition from traditional VMX root operation to SEAM VMX root operation. The SEAMCALL instruction also includes an operand, referred to herein as an LDR-TDX operand, which specifies one (and only one) of the SEAM loader module or the SEAM module. As an illustration, the LDR-TDX operand is conveyed in bit 63 of general register RAX; for example, RAX[63] equal to one ("1") indicates that the SEAMCALL instruction will target P-SEAMLDR, and RAX[63] equal to zero ("0") indicates that the SEAMCALL instruction will target the SEAM module.

Executing the SEAMCALL instruction at 216 includes determining whether to access the one of the SEAM loader module or the SEAM module specified by the LDR-TDX operand. In one such embodiment, this determination includes determining, with the executed SEAMCALL flow, whether to signal failure of the SEAMCALL instruction based on, e.g., SEAM_READY 181, P_SEAMLDR_READY 182, and/or one or more other such variables. In one such embodiment, the SEAMCALL flow determines whether the LDR-TDX operand specifies the SEAM module and, if so, whether SEAM_READY 181 identifies the SEAM module as available; a sketch of these checks follows.
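The operand decode and readiness checks just described can be sketched as follows. This is a hedged illustration, not the architectural flow: the enum, the variable names, and the mapping of RAX[63] onto a C function are stand-ins for the hardware behavior described in the text.

    #include <stdint.h>
    #include <stdbool.h>

    /* Memory-resident readiness variables, modeling SEAM_READY 181
     * and P_SEAMLDR_READY 182 from FIG. 1A. */
    static bool seam_ready;
    static bool p_seamldr_ready;

    /* LDR-TDX operand in RAX[63]: 1 targets P-SEAMLDR, 0 targets
     * the SEAM module. */
    static bool targets_p_seamldr(uint64_t rax)
    {
        return (rax >> 63) & 1;
    }

    typedef enum { SEAMCALL_OK, SEAMCALL_FAIL } seamcall_status_t;

    /* Models the failure determination at step 216: the call fails
     * when the module selected by the operand is not ready. */
    static seamcall_status_t seamcall_check(uint64_t rax)
    {
        if (targets_p_seamldr(rax))
            return p_seamldr_ready ? SEAMCALL_OK : SEAMCALL_FAIL;
        return seam_ready ? SEAMCALL_OK : SEAMCALL_FAIL;
    }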
Additionally or alternatively, executing the SEAMCALL instruction at 216 includes, for example, invoking execution of P-SEAMLDR to write the measurement results for the SEAM module to the measurement result registers.

In various embodiments, both the P-SEAMLDR and SEAM modules execute in SEAM mode and are variously invoked by respective SEAMCALL instructions. However, for a logical processor (LP) providing a given SEAMCALL instruction, setting the inP_SEAMLDR flag corresponding to that logical processor to "1" (or another value indicating a call to P-SEAMLDR) unlocks one or more instructions for P-SEAMLDR to use only on that logical processor. In one embodiment, the one or more unlocked instructions enable P-SEAMLDR to write to one or more measurement registers (e.g., measurement result registers 117), e.g., to store measurements of the SEAM module. Accordingly, some embodiments enable selective access to measurement result registers on an LP-specific basis, wherein the SEAMCALL instruction is provided by a first LP of a plurality of LPs, wherein a first inP_SEAMLDR flag corresponds only to the first LP of the plurality of LPs, wherein the first flag indicates that, of P-SEAMLDR and the SEAM module, P-SEAMLDR was most recently called by the first LP, and wherein the ability of P-SEAMLDR to access the measurement result registers is based on the first inP_SEAMLDR flag.

Additionally or alternatively, determining whether to access the one of the SEAM loader module or the SEAM module includes executing the SEAMCALL flow to determine whether a mutex lock (e.g., the lock indicated by P_SEAMLDR_MUTEX 183) has been acquired. In some embodiments, the SEAMCALL instruction is provided by a first logical processor of a plurality of logical processors, each of which corresponds to a respective different inP_SEAMLDR flag of flags 184. In one such embodiment, executing the SEAMCALL flow sets the specific inP_SEAMLDR flag corresponding to the first logical processor, the setting being based on determining that the LDR-TDX operand specifies P-SEAMLDR, to indicate that, of P-SEAMLDR and the SEAM module, the SEAM loader module was more recently called by the first logical processor. Although some embodiments are not limited in this regard, the method 200 also includes, for example, the core executing a SEAM return (SEAMRET) instruction that is based on the instruction set. In one embodiment, execution of the SEAMRET flow determines, e.g., based on the corresponding inP_SEAMLDR flags, whether to flush data from TLB 128, VMCS 138A, TD VMCS 138B, and/or the like.

FIG. 3 illustrates features of a system 300 that provides a persistent SEAM loader in a reserved area of system memory, according to one embodiment. In one embodiment, system 300 includes features of system 100, e.g., where some or all of the operations of method 200 are performed with system 300.

As shown in FIG. 3, a reserved area of system memory (e.g., reserved range 136) is defined to extend from the base address SEAMRR.Base to another address SEAMRR.Limit, where SEAMRR.Base and SEAMRR.Limit are identified, for example, in the SEAMRR range register provided by one or more range registers 116. In one such embodiment, the range register SEAMRR specifies or otherwise indicates two sub-ranges of the reserved memory range, MODULE RANGE and P_SEAMLDR_RANGE, where MODULE RANGE is used to store SEAM module 310 and P_SEAMLDR_RANGE is used to store the persistent SEAM loader module P-SEAMLDR 320; a toy sketch of this split appears below.
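Before the specific addresses are detailed, the following toy sketch shows one way the single SEAMRR-described range could be split into the two sub-ranges. All names, the struct layout, and the placement of P_SEAMLDR_RANGE at the top of the reserved range are assumptions for illustration, made to match the description of FIG. 3.

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical register image of a range; real SEAMRR contents
     * are platform- and BIOS-dependent. */
    typedef struct {
        uint64_t base;   /* e.g., SEAMRR.Base  */
        uint64_t limit;  /* e.g., SEAMRR.Limit */
    } range_t;

    /* Split the reserved range into MODULE RANGE followed by
     * P_SEAMLDR_RANGE, with P_SEAMLDR_RR.Limit == SEAMRR.Limit. */
    static void split_seamrr(range_t seamrr, uint64_t p_seamldr_size,
                             range_t *module_range, range_t *p_seamldr_range)
    {
        p_seamldr_range->limit = seamrr.limit;
        p_seamldr_range->base  = seamrr.limit - p_seamldr_size;
        module_range->base     = seamrr.base;
        module_range->limit    = p_seamldr_range->base;
    }

    /* True when a physical address falls inside a range. */
    static bool in_range(range_t r, uint64_t pa)
    {
        return pa >= r.base && pa < r.limit;
    }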
For example, P_SEAMLDR_RANGE is defined to extend from the base address P_SEAMLDR_RR.Base to another address P_SEAMLDR_RR.Limit, e.g., where P_SEAMLDR_RR.Limit is the same as SEAMRR.Limit.

In one such embodiment, during platform startup, the processor copies the NP-SEAMLDR ACM 330 into physical memory and, for example, executes a GETSEC[ENTERACCS] instruction to start the NP-SEAMLDR ACM 330. Execution of the NP-SEAMLDR ACM 330 may retrieve or otherwise access the P-SEAMLDR image 332, which is then installed in the P_SEAMLDR_RANGE. After such installation in P_SEAMLDR_RANGE, P-SEAMLDR 320 is then executed (e.g., during startup or, alternatively, during runtime) to install SEAM module 310 in MODULE RANGE. As a result, SEAM module 310 and P-SEAMLDR 320 are each available in a reserved memory area to be selectively accessed by execution of a SEAMCALL instruction according to one embodiment.

FIG. 4 is a block diagram illustrating an example computing system 400 implementing virtual machine monitor (VMM) management of trust boundaries with TDX access control, according to some embodiments. In various embodiments, TDRM 442 executing on computing system 400 (and including VMM 140, for example) supports legacy VMs 410, such as CSP VM 455A, first tenant VM 455B, and second tenant VM 455C. These legacy VMs still utilize memory encryption via TME or MK-TME in this model. A VMM-managed TCB 402 is provided for the CSP VM 455A and the tenant VMs 455B and 455C.

As validated and enforced by SEAM module 437 (e.g., SEAM module 137), TDRM 442 further supports two TDs, TD1 420 and TD2 430, both of which enforce confidentiality for a tenant in the case where the CSP (e.g., of virtualization server 110) is untrusted. Thus, TD1 420 and TD2 430 rely on the execution of SEAM from a reserved range of memory (e.g., reserved range 136) to implement TDX, which provides the confidentiality and protection of the TDs. TD1 420 is shown with a virtualization mode (e.g., VMX) utilized by a tenant VMM (non-root) 422 running in TD1 420 to manage tenant VMs 450A and 450B. TD2 430 does not include software using the virtualization mode, but instead runs an enlightened OS 450C directly in TD2 430. TD1 420 and TD2 430 are tenant TDs with SEAM-managed TCBs with TDX access control 404 as described herein. In one embodiment, TD1 420 or TD2 430 is the same as any of TDs 150A or 150B described with reference to FIG. 1A.

TDRM 442 and SEAM module 437 manage the lifecycle of VMs and TDs, including the allocation of resources. However, TDRM 442 is not in the TCB for TD types TD1 420 and TD2 430. The processor (e.g., processor 112) does not impose any architectural limitation on the number or mix of TDs active on the system. However, software and certain hardware limitations in certain implementations limit the number of TDs running simultaneously on the system, due to other constraints.

FIG. 5 is a block diagram illustrating a system 500 that provides components of TDX implemented by a processor, supported by a SEAM module 537 (e.g., SEAM module 137), according to one embodiment. In this embodiment, VMM 540 enforces access control among VMs 555A, 555B, and 555C. In order to enter the secure arbitration mode (SEAM) to implement TDX, the SEAM module 537 and other supporting data and information are stored (e.g., loaded) in a reserved range 536 of memory.
Before loading the SEAM module 537 into the reserved range 536 of memory, the processor sets up memory encryption for the reserved range using the platform-reserved encryption key used to encrypt the SEAM reserved memory range. The memory controller (e.g., memory controller 120) encrypts the SEAM module with the platform-reserved encryption key before the SEAM module 537 is stored in the reserved range 536 of memory. The memory controller also uses the platform-reserved encryption key to encrypt and integrity protect other data associated with SEAM that is stored in and retrieved from the reserved range 536 of memory, for example page tables, a VMCS per logical processor, and the like.

In some embodiments, SEAM module 537 facilitates the implementation of TDX to initiate and control access to one or more TDs 550A, 550B, and 550C. The SEAM module 537 instantiates as many TDs as the resources of the TDRM and SEAM module support. VMM 540 invokes the SEAMCALL instruction to request entry into SEAM. SEAM module 537 later invokes the SEAMEXIT instruction to exit SEAM and transfer root mode operational control back to VMM 540. The SEAMCALL and SEAMEXIT instructions will be described in more detail with reference to FIGS. 7A-7E.

FIG. 6 is a flow diagram illustrating a virtual machine extension (VMX) and SEAM-based TDX transition 600, according to some embodiments. As mentioned earlier, SEAM is an extension to the virtual machine extension architecture that defines a new VMX root mode, called the SEAM VMX root mode to differentiate it from the traditional VMX root mode. This SEAM VMX root mode is used to host a processor-certified module (e.g., SEAM module 137) to create virtual machine (VM) guests called TDs. More specifically, a VM launched or resumed from SEAM VMX root mode is a TD, while a VM launched or resumed from traditional VMX root mode is a traditional VM. Launch or resumption of a VM or TD is performed using VM entry, and exiting from a VM or TD is performed using VM exit. One of the reasons to exit a TD to SEAM VMX root mode is the detection of a system management interrupt (SMI). The isolation between TDs (see FIG. 5) is implemented by SEAM using VMX hardware extensions such as EPT.

In some embodiments, a TD runs in the processor's SEAM VMX non-root mode, which protects the memory contents and processor state of the TD against other software, including the managing VMM (but excluding SEAM module 137 executing from the reserved range 136 of memory), unless explicitly shared by the TD itself. Software executing in SEAM VMX root mode provides arbitration of resources between the TDs and the VMM/TDRM. In many embodiments, the code size of the software in the SEAM VMX root mode (the SEAM library) is substantially smaller than that of the untrusted VMM.

In one embodiment, with continued reference to FIGS. 1A and 1B, the SEAM module 137 executes from a reserved range 136 of memory specified with one of the range registers 116, such as the SEAM range register (SEAMRR) configured by the CSP. The reserved range 136 is programmed by the BIOS (not shown in FIG. 1A) and verified by the MCHECK firmware 162. Since the BIOS is not trusted to properly configure SEAMRR, in an embodiment, the processor 112 provides a processor-certified firmware module called MCHECK. In an embodiment, the BIOS calls MCHECK firmware 162 to activate the SEAMRR range that it has configured into the SEAMRR range register; the style of check MCHECK performs is sketched below.
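The following is a minimal sketch of the style of validation attributed to MCHECK in this description: identical programming across cores, no overlap with special memory ranges, and MK-TME integrity enabled as a prerequisite. The function and its parameters are hypothetical; the actual MCHECK behavior is processor-internal firmware, not a C routine.

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct { uint64_t base, limit; } mem_range_t;

    /* Two ranges overlap when neither ends before the other begins. */
    static bool overlaps(mem_range_t a, mem_range_t b)
    {
        return a.base < b.limit && b.base < a.limit;
    }

    /* Toy MCHECK-style check: SEAMRR is marked valid only if it is
     * identically programmed on every core, overlaps no special
     * range (e.g., TXT memory), and MK-TME integrity is enabled. */
    static bool mcheck_validate_seamrr(const mem_range_t *per_core_seamrr,
                                       int num_cores,
                                       const mem_range_t *special_ranges,
                                       int num_special,
                                       bool mktme_integrity_enabled)
    {
        if (!mktme_integrity_enabled)
            return false;                       /* stated prerequisite */
        for (int c = 1; c < num_cores; c++)     /* same on every core  */
            if (per_core_seamrr[c].base  != per_core_seamrr[0].base ||
                per_core_seamrr[c].limit != per_core_seamrr[0].limit)
                return false;
        for (int s = 0; s < num_special; s++)   /* no overlap allowed  */
            if (overlaps(per_core_seamrr[0], special_ranges[s]))
                return false;
        return true;                            /* mark SEAMRR valid   */
    }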
The processor executes the MCHECK firmware 162 from a protected environment created in the cache 118A of the processor core(s) 114, so that the MCHECK execution cannot be tampered with by untrusted software and other devices in the platform. The MCHECK function is extended to cover this verification, ensuring that the range registers 116 have been identically programmed on the processor cores 114 and that the reserved range 136 values stored in the range registers 116 are not configured to overlap memory reserved for specific devices or other special memory, such as trusted execution technology (TXT) memory ranges (because such special memory ranges are not protected by MK-TME). The MCHECK firmware 162 also configures the platform-reserved encryption key of the MK-TME engine 126, which is used for encryption of data stored to the reserved range of memory and for integrity and replay protection.

SEAM module 137 is software that is stored into the reserved range 136 programmed with range register 116. In one embodiment, NP-SEAMLDR ACM 170 (FIG. 1B) or other security logic is executed to load P-SEAMLDR 135 into the reserved range 136 of memory (where P-SEAMLDR 135 in turn loads SEAM module 137 into the reserved range 136 of memory). Thus, the NP-SEAMLDR ACM 170 acts as, and is referred to as, a non-persistent SEAM loader; it does not persist, e.g., after the P-SEAMLDR 135 has been loaded into the reserved range 136 (from which P-SEAMLDR 135 remains available). An ACM is a processor-certified firmware module that executes, for example, from a protected environment created in cache 118A of processor core(s) 114. ACM technology was introduced as part of trusted execution technology (TXT). For example, NP-SEAMLDR ACM 170 is initiated using the GETSEC[ENTERACCS] instruction. The MCHECK firmware 162 informs the hardware that the reserved range 136 of memory has been verified and can be used by the P-SEAMLDR 135 and SEAM module 137. In one embodiment, the P-SEAMLDR 135 copies the SEAM module 137 and manifest into the reserved range 136 of memory. P-SEAMLDR 135 then verifies the manifest associated with the SEAM module (e.g., message digests, security version numbers (SVNs), and other such information for SEAM modules and loadable components).

In various embodiments, the processor transitions from legacy VMX root mode to SEAM VMX root mode in response to a SEAMCALL instruction invoked by the untrusted VMM (or TDRM). This transition is similar to a parallel VM exit to a peer monitor in response to a VMCALL from the VMM. The processor transitions from SEAM VMX root mode to legacy VMX root mode in response to the SEAMEXIT instruction. This transition is similar to a parallel VM entry from a peer monitor to traditional VMX root mode in response to a VMRESUME from the peer monitor. The peer monitor is called the SMM transfer monitor (STM) and is part of Intel™ VT-x.

With additional reference to FIG. 6, keeping the execution within the traditional VMX root mode separate from the execution within the SEAM VMX root mode ensures that sensitive data and measurements generated in SEAM operation are invisible and inaccessible to the VMM or other legacy VMs. The system management mode (SMM) of the processor 112 allows choices, such as opt-in and opt-out options with respect to the VMX architecture, and has access to the hardware registers of the processor 112.

In one embodiment, assume that a first logical processor is operating in SEAM VMX non-root mode in a first TD.
Assume that the first TD detects a system management interrupt (SMI). In this case, the first TD executes a VM exit to SEAM VMX root mode. SEAM VMX root mode then securely stores the secrets and sensitive data of the first TD from the hardware registers of the processor 112 back to the memory device 130, e.g., in encrypted form using the host key ID (HKID). The actual encryption and storage to memory is performed by the MK-TME engine 126. The SEAM module 137 then clears the secrets thus saved from the processor register state so that no TD state leaks out. SEAM VMX root mode then executes the SEAMEXIT instruction to exit SEAM VMX root mode and transfer virtual root operational control of the logical processor (e.g., VMX root mode control) back to traditional VMX root mode, e.g., for VM 155.

In some embodiments, SMIs are masked when in SEAM VMX root mode, so that even if the VM exit was caused by an SMI pending in SEAM VMX non-root mode, the SMI itself remains pending because it is masked in SEAM VMX root mode. Once in legacy VMX root mode, the SMI can actually be handled and cause a transition to system management mode (SMM) or an SMI VM exit to SMM. Once in SMM, the SMM can read the processor's register contents. However, the SMM cannot see any TD or SEAM module secrets, since such secrets have been removed by the SEAM module before executing SEAMEXIT to traditional VMX root mode. Therefore, the SMM sees the processor state that exists in the traditional VMX root mode.

With continued reference to FIGS. 1A, 1B, and 6, the reserved range 136 of the memory device 130 for the SEAM library is allocated by the BIOS and programmed into the range register 116 (e.g., SEAMRR) using an MSR. An access to the reserved range 136 of memory when not in SEAM is redirected to an abort page. When in SEAM VMX root mode, the reserved range 136 is accessed with a write-back (WB) memory type if register CR0.CD=0, and with an uncacheable (UC) memory type if register CR0.CD=1. Since the memory type of the SEAM reserved range cannot be tampered with by the VMM, this protects the SEAM module 137 from VMM attacks that would configure the range with an unexpected memory type, such as configuring the range as write-combining.

In various embodiments, the WRMSR microcode 160 enforces that the reserved range 136 of memory is configured as a contiguous range and is not programmed to overlap memory ranges reserved for specific uses or specific devices, such as the system management range register (SMRR), SMRR2, the processor reserved memory range register (PRMRR), or IA32_APIC_BASE. An attempt to write a reserved range base address or mask that would cause such an overlap results in a general protection fault (#GP(0)). Similarly, attempting to program PRMRR, SMRR, SMRR2, or IA32_APIC_BASE to overlap the reserved range 136 causes a general protection fault. The protected range is defined by the base address plus a mask applied to the base address; the reserved range 136 of memory may also be specified by start and end addresses.

In some embodiments, the BIOS assigns the base address and mask that define the reserved range 136 of memory and sets a lock bit on the range register 116 of each processor core 114 associated with this reserved range 136 of memory.
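The WRMSR-time overlap enforcement described above can be sketched as follows. This is an illustrative model only: the function name, the representation of ranges, and the boolean return standing in for a #GP(0) fault are all assumptions.

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct { uint64_t base, limit; } mem_range_t;

    static bool ranges_overlap(mem_range_t a, mem_range_t b)
    {
        return a.base < b.limit && b.base < a.limit;
    }

    /* Toy model of the WRMSR check: a new SEAMRR value that overlaps
     * SMRR, SMRR2, PRMRR, or IA32_APIC_BASE is rejected; returning
     * false stands in for raising #GP(0) on real hardware. */
    static bool wrmsr_seamrr(mem_range_t new_seamrr,
                             const mem_range_t *protected_ranges, int n,
                             mem_range_t *seamrr_out)
    {
        for (int i = 0; i < n; i++)
            if (ranges_overlap(new_seamrr, protected_ranges[i]))
                return false;       /* would fault on real hardware */
        *seamrr_out = new_seamrr;
        return true;
    }

The same predicate works in both directions, which mirrors the text: writing SEAMRR over a protected range faults, and writing a protected range over SEAMRR faults as well.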
A non-core copy of the range register 116 is maintained, and it is updated by the WRMSR microcode 160.

In various embodiments, the MCHECK firmware 162 is a trusted module that is embedded in the microcode patch and initiated by the microcode patch load to verify processor-protected range registers and their configuration. This module is used to verify the software guard extensions (SGX) memory configuration, and the MCHECK firmware 162 is extended to also verify the SEAM range register 116. The MCHECK firmware 162 verifies the configuration of the reserved range 136 stored in the SEAM range register 116 (e.g., SEAMRR) against the same rules it enforces for the PRMRR configuration (such as no overlap with memory-mapped I/O (MMIO)), and so on. The MCHECK firmware 162 also requires the MK-TME engine 126 on the platform to be configured with integrity enabled as a prerequisite for marking SEAMRR as valid, and requires PRMRR to be valid as a further prerequisite for marking SEAMRR as valid.

In one embodiment, the physical memory range programmed into the SEAM range register 116 (e.g., SEAMRR) has a key ID of zero ("0"), which is enforced by the MCHECK firmware 162. The ephemeral key used for SEAMRR accesses is not the same as the key accessed by the VMM for legacy VMs via key ID zero. Instead, access to the reserved range 136 of memory is encrypted and integrity protected using a platform-reserved encryption key that is also used for encryption and integrity protection of the reserved range stored in the PRMRR. This platform-reserved encryption key is programmed into the MK-TME engine 126 by the MCHECK firmware 162. The platform key is randomly regenerated on every boot; thus, even if an attacker were to capture the encrypted memory of computing system 100, the attacker would not be able to inject it into the range on a subsequent power-up.

FIGS. 7A-7E show various examples of pseudo-code, each of which illustrates algorithms that facilitate SEAM functionality in accordance with respective embodiments. In various embodiments, some or all of such algorithms are provided, used, or otherwise operated based on P-SEAMLDR, which is installed in system memory by NP-SEAMLDR. Additionally or alternatively, some or all of such algorithms are implemented based on one or more of the variables SEAM_READY, P_SEAMLDR_READY, and inP_SEAMLDR described herein. In some embodiments, some or all of these algorithms use the mutex P_SEAMLDR_MUTEX, which is shared to control access to P-SEAMLDR by any of the various logical processors. Based on such algorithms (as well as supporting circuits, data structures, etc.), some embodiments implement an indirection in the provisioning of SEAM loader functions. This indirection facilitates updating of the SEAM module without requiring all logical processors to which the SEAM module is conditionally available to be in a dormant state (e.g., a wait-for-SIPI state).

For example, FIG. 7A illustrates features of pseudocode 700, representing executed instructions and/or any of various other suitable hardware logic and/or software logic to install (or update) the persistent SEAM loader module P-SEAMLDR, according to one embodiment.
Algorithms such as those shown in pseudo-code 700 provide, and/or are performed with, the functionality of NP-SEAMLDR ACM 170, for example, where one or more operations of method 200 include or are otherwise based on the described algorithm.

As shown in FIG. 7A, P-SEAMLDR is installed, for example at boot time, as part of the execution of an operating system (OS) and (in some embodiments) a virtual machine monitor (VMM) supported by the OS. In some embodiments, the P-SEAMLDR installation is conditional on one or more application processors (and/or other processors) each being in a wait-for-SIPI (WFS) state.

As shown in line 2 of pseudocode 700, the image of the non-persistent SEAM loader (NP-SEAMLDR) ACM is copied to physical memory. Subsequently (see line 3 of pseudocode 700), the NP-SEAMLDR ACM is initiated, e.g., using the GETSEC[ENTERACCS] instruction, to unlock the reserved range of memory and install the persistent SEAM loader module P-SEAMLDR in the reserved range. In the case of a successful installation (see line 4 of pseudocode 700), the variable P_SEAMLDR_READY is set to a value (e.g., 1) indicating the availability of P-SEAMLDR to be accessed (e.g., by execution of a SEAMCALL instruction). As shown in line 5 of pseudocode 700, NP-SEAMLDR returns, in one or more registers (e.g., the illustrative register R9 shown), information describing the result of the GETSEC[ENTERACCS] instruction.

FIG. 7B illustrates pseudocode 710, representing the functionality of software instructions, execution engines, and/or any of various other suitable hardware and/or software logic to install (or update) the SEAM module, according to one embodiment. An algorithm such as that shown in pseudo-code 710 provides, and/or is performed with, the functionality of P-SEAMLDR 135, e.g., where one or more operations of method 200 include or are otherwise based on the algorithm.

As shown in line 1 of pseudocode 710, installing the SEAM module in one embodiment includes setting up a SEAMLDR_PARAM structure that points to the signature structure (or enclave certificate) SIGSTRUCT of the SEAM module and associated data. P-SEAMLDR is then called with a SEAMCALL instruction providing the address of the SEAMLDR_PARAM structure (see line 2 of pseudocode 710). The invoked P-SEAMLDR installs (or updates) the SEAM module in the reserved area of system memory.

In some embodiments, updating the SEAM module is performed by multiple logical processors serially calling respective SEAMCALL instructions, e.g., where the mutex P_SEAMLDR_MUTEX restricts access to P-SEAMLDR to only one logical processor at a time. In one such embodiment, the first of the serial calls sets the variable SEAM_READY to indicate the unavailability of the SEAM module. Additionally or alternatively, the last of the serial calls, i.e., the call that actually performs the update to the SEAM module, sets the variable SEAM_READY to indicate the availability of the (now updated) SEAM module.

FIG. 7C illustrates pseudocode 720, representing software instructions, an execution engine, and/or any of various other suitable hardware and/or software logic to shut down the persistent SEAM loader module P-SEAMLDR, according to one embodiment.
An algorithm such as that shown in pseudocode 720 is performed, for example, with circuitry such as the circuitry of execution unit 190, e.g., where one or more operations of method 200 include or are otherwise based on the algorithm.

As shown in line 1 of pseudocode 720, P-SEAMLDR is invoked with the SEAMCALL instruction, which provides an operand indicating that P-SEAMLDR is to be shut down. If the variable P_SEAMLDR_READY indicates that P-SEAMLDR is not available, the SEAMCALL instruction fails. In some embodiments (see line 2 of pseudocode 720), shutting down P-SEAMLDR is performed by multiple logical processors serially calling respective SEAMCALL instructions, e.g., where the mutex P_SEAMLDR_MUTEX restricts access to P-SEAMLDR to only one logical processor at a time. As shown in line 2(i) of pseudocode 720, the first of the serial calls sets the variable SEAM_READY to indicate the unavailability of the SEAM module. Additionally or alternatively (see line 2(ii) of pseudocode 720), the last of the serial calls, by the processor that loads the NP-SEAMLDR ACM to install P-SEAMLDR (e.g., the boot processor), sets the variable P_SEAMLDR_READY to indicate the unavailability of P-SEAMLDR.

FIG. 7D illustrates pseudo-code 730, representing the functionality of an execution engine and/or any of various other suitable hardware and/or software logic to execute the SEAMCALL instruction, according to one embodiment. Algorithms such as those shown in pseudocode 730 are executed, for example, with execution unit 190.

As shown in FIG. 7D, the determination of whether a SEAMCALL instruction will fail is based on an evaluation (in line 2 of pseudocode 730) of whether the availability of the SEAM module, as indicated by the value of the SEAM_READY variable, is consistent with the LDR-TDX operand (in RAX[63]) specifying the SEAM module. Additionally or alternatively, this determination is based on an evaluation of whether the LDR-TDX operand specifies P-SEAMLDR (see line 2 of pseudocode 730) and, if so, whether the mutex P_SEAMLDR_MUTEX for P-SEAMLDR can be acquired (see lines 6 and 7 of pseudocode 730). If it is determined that the SEAMCALL execution is to access P-SEAMLDR, the inP_SEAMLDR flag is set (for the logical processor that provided the SEAMCALL instruction), and one or more TLBs, VMCSs, and/or other data structures are flushed.

FIG. 7E illustrates pseudo-code 740, representing the functionality of an execution engine and/or any of various other suitable hardware logic and/or software logic to execute the SEAMEXIT instruction, according to one embodiment. Algorithms such as those shown in pseudo-code 740 are executed, for example, with execution unit 190.

As shown in FIG. 7E (see line 1 of pseudocode 740), the execution of the SEAMEXIT instruction is conditioned on the setting of the inP_SEAMLDR flag corresponding to the logical processor that provided the relevant SEAMEXIT instruction. The SEAMEXIT execution flow (see lines 2 and 3 of pseudocode 740) flushes various TLB, VMCS, and/or other data structures to return the processor state from the state of the secure arbitration mode. Subsequently, the mutex is released, and the corresponding inP_SEAMLDR flag is cleared to indicate that the logical processor is no longer accessing P-SEAMLDR.

The figures described herein detail exemplary architectures and systems that implement the above-described embodiments.
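Before turning to those figures, the checks of pseudocode 730 and 740 can be pulled together in one sketch. This is a speculative composite of the behavior described above, with all state, encodings, and the flush stand-in being hypothetical simplifications.

    #include <stdint.h>
    #include <stdbool.h>

    #define NUM_LPS 8

    /* State mirroring the variables used in pseudocode 730/740. */
    static bool seam_ready, p_seamldr_ready, p_seamldr_mutex_held;
    static bool in_p_seamldr[NUM_LPS];  /* one flag per logical processor */

    static void flush_seam_state(void)
    {
        /* stand-in for the TLB/VMCS/other-structure flushes */
    }

    /* SEAMCALL gate (cf. pseudocode 730): fail unless the module
     * targeted by RAX[63] is ready; for P-SEAMLDR, additionally
     * require acquisition of the mutex. */
    static bool seamcall(int lp, uint64_t rax)
    {
        bool target_ldr = (rax >> 63) & 1;  /* LDR-TDX operand */
        if (target_ldr) {
            if (!p_seamldr_ready || p_seamldr_mutex_held)
                return false;
            p_seamldr_mutex_held = true;
            in_p_seamldr[lp] = true;
        } else if (!seam_ready) {
            return false;
        }
        flush_seam_state();
        return true;  /* proceed into SEAM VMX root mode */
    }

    /* SEAMEXIT flow (cf. pseudocode 740): flush state; if this LP
     * had entered P-SEAMLDR, release the mutex and clear its flag. */
    static void seamexit(int lp)
    {
        flush_seam_state();
        if (in_p_seamldr[lp]) {
            in_p_seamldr[lp] = false;
            p_seamldr_mutex_held = false;
        }
    }

Note that this single-threaded model ignores the atomicity that the real mutex requires; the earlier compare-and-swap sketch shows how the acquisition would have to be made atomic.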
The figures described herein detail exemplary architectures and systems to implement the embodiments above. In some embodiments, one or more of the hardware components and/or instructions described herein are emulated, as detailed below, or implemented as software modules.

Embodiments of the instruction(s) detailed above may be implemented in the "generic vector friendly instruction format" detailed herein. In other embodiments, such a format is not utilized and another instruction format is used; however, the descriptions herein of the write mask registers, the various data transformations (e.g., swizzle, broadcast), the addressing, etc. generally apply to the description of embodiments of the instruction(s) above. Additionally, exemplary systems, architectures, and pipelines are detailed herein. Embodiments of the instruction(s) above may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

An instruction set may include one or more instruction formats. A given instruction format may define various fields (e.g., number of bits, location of bits) to specify, among other things, the operation to be performed (e.g., the opcode) and the operand(s) on which that operation is to be performed and/or other data field(s) (e.g., masks). Some instruction formats are further broken down through the definition of instruction templates (or sub-formats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because fewer fields are included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands. For example, an exemplary ADD instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source1/destination and source2); and an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands. A set of SIMD extensions referred to as Advanced Vector Extensions (AVX) (AVX1 and AVX2), using the Vector Extensions (VEX) coding scheme, has been released and/or published (see, e.g., the Intel 64 and IA-32 Architectures Software Developer's Manual, September 2014; and the Advanced Vector Extensions Programming Reference, October 2014).

Exemplary Instruction Format

Embodiments of the instruction(s) described herein may be implemented in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed herein. Embodiments of the instruction(s) may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

Generic Vector Friendly Instruction Format

A vector friendly instruction format is an instruction format that is suited for vector instructions (e.g., there are certain fields specific to vector operations). While embodiments are described in which both vector and scalar operations are supported through the vector friendly instruction format, alternative embodiments use only vector operations through the vector friendly instruction format.

Figures 8A-8B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments of the present invention.
Figure 8A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to an embodiment of the invention; and Figure 8B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to an embodiment of the invention. Specifically, class A and class B instruction templates are defined for the generic vector friendly instruction format 800, both of which include no memory access 805 instruction templates and memory access 820 instruction templates. The term "generic" in the context of the vector friendly instruction format means that the instruction format is not tied to any specific instruction set.

Embodiments of the invention will be described in which the vector friendly instruction format supports the following: a 64-byte vector operand length (or size) with 32-bit (4-byte) or 64-bit (8-byte) data element widths (or sizes) (so a 64-byte vector consists of either 16 doubleword-size elements or 8 quadword-size elements); a 64-byte vector operand length (or size) with 16-bit (2-byte) or 8-bit (1-byte) data element widths (or sizes); a 32-byte vector operand length (or size) with 32-bit (4-byte), 64-bit (8-byte), 16-bit (2-byte), or 8-bit (1-byte) data element widths (or sizes); and a 16-byte vector operand length (or size) with 32-bit (4-byte), 64-bit (8-byte), 16-bit (2-byte), or 8-bit (1-byte) data element widths (or sizes). Alternative embodiments, however, may support more, fewer, and/or different vector operand sizes (e.g., 256-byte vector operands) with more, fewer, or different data element widths (e.g., 128-bit (16-byte) data element widths).

The class A instruction templates in Figure 8A include: 1) within the no memory access 805 instruction templates, a no memory access, full round control type operation 810 instruction template and a no memory access, data transform type operation 815 instruction template are shown; and 2) within the memory access 820 instruction templates, a memory access, transient 825 instruction template and a memory access, non-transient 830 instruction template are shown. The class B instruction templates in Figure 8B include: 1) within the no memory access 805 instruction templates, a no memory access, write mask control, partial round control type operation 812 instruction template and a no memory access, write mask control, VSIZE type operation 817 instruction template are shown; and 2) within the memory access 820 instruction templates, a memory access, write mask control 827 instruction template is shown.

The generic vector friendly instruction format 800 includes the following fields listed below in the order illustrated in Figures 8A-8B.

Format field 840—a specific value in this field (an instruction format identifier value) uniquely identifies the vector friendly instruction format, and thus identifies occurrences of instructions in the vector friendly instruction format in instruction streams. As such, this field is optional in the sense that it is not needed for an instruction set that has only the generic vector friendly instruction format.

Base operation field 842—its content distinguishes different base operations.

Register index field 844—its content, directly or through address generation, specifies the locations of the source and destination operands, be they in registers or in memory.
These include a sufficient number of bits to select N registers from a PxQ (e.g., 32x512, 16x128, 32x1024, 64x1024) register file. While in one embodiment N may be up to three source registers and one destination register, alternative embodiments may support more or fewer source and destination registers (e.g., may support up to two sources, where one of these sources also acts as the destination; may support up to three sources, where one of these sources also acts as the destination; may support up to two sources and one destination).

Modifier field 846—its content distinguishes occurrences of instructions in the generic vector instruction format that specify memory accesses from those that do not; that is, it distinguishes between no memory access 805 instruction templates and memory access 820 instruction templates (e.g., no memory access 846A and memory access 846B for the modifier field 846, shown respectively in Figures 8A-8B). Memory access operations read and/or write to the memory hierarchy (in some cases specifying the source and/or destination addresses using values in registers), while non-memory access operations do not (e.g., the source and destination are registers). While in one embodiment this field also selects between three different ways to perform memory address calculations, alternative embodiments may support more, fewer, or different ways to perform memory address calculations.

Enhanced operation field 850—its content distinguishes which one of a variety of different operations is to be performed in addition to the base operation. This field is context specific. In one embodiment of the invention, this field is divided into a class field 868, an alpha field 852, and a beta field 854. The enhanced operation field 850 allows common groups of operations to be performed in a single instruction rather than in 2, 3, or 4 instructions.

Scale field 860—its content allows for the scaling of the index field's content for memory address generation (e.g., for address generation that uses 2^scale * index + base).

Displacement field 862A—its content is used as part of memory address generation (e.g., for address generation that uses 2^scale * index + base + displacement).

Displacement factor field 862B (note that the juxtaposition of the displacement field 862A directly over the displacement factor field 862B indicates that one or the other is used)—its content is used as part of address generation; it specifies a displacement factor that is to be scaled by the size of a memory access (N), where N is the number of bytes in the memory access (e.g., for address generation that uses 2^scale * index + base + scaled displacement). Redundant low-order bits are ignored, and hence the displacement factor field's content is multiplied by the memory operand's total size (N) in order to generate the final displacement to be used in calculating the effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field 874 (described later herein) and the data manipulation field 854C. The displacement field 862A and the displacement factor field 862B are optional in the sense that they are not used for the no memory access 805 instruction templates, and/or different embodiments may implement only one or neither of the two.
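The address-generation formula above is simple enough to state directly in code. The following is a small, self-contained C sketch computing 2^scale * index + base + displacement; the function name and the example operand values are illustrative, not drawn from the description.

```c
#include <stdint.h>
#include <stdio.h>

/* Effective-address generation sketch: 2^scale * index + base + displacement,
   per the scale field 860 and displacement field 862A described above. */
static uint64_t effective_address(uint64_t base, uint64_t index,
                                  unsigned scale /* 0..3 */, int64_t disp)
{
    return (index << scale) + base + (uint64_t)disp;
}

int main(void)
{
    /* e.g., base=0x1000, index=4, scale=3 (factor 8), disp=0x20 -> 0x1040 */
    printf("0x%llx\n",
           (unsigned long long)effective_address(0x1000, 4, 3, 0x20));
    return 0;
}
```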
Data element width field 864—its content distinguishes which one of a number of data element widths is to be used (in some embodiments, for all instructions; in other embodiments, for only some of the instructions). This field is optional in the sense that it is not needed if only one data element width is supported and/or if data element widths are supported using some aspect of the opcodes.

Write mask field 870—its content controls, on a per data element position basis, whether that data element position in the destination vector operand reflects the result of the base operation and the enhanced operation. Class A instruction templates support merging-writemasking, while class B instruction templates support both merging-writemasking and zeroing-writemasking. When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the enhanced operation); in another embodiment, the old value of each element of the destination where the corresponding mask bit has a 0 is preserved. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the enhanced operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first one to the last one); however, it is not necessary that the elements being modified be consecutive. Thus, the write mask field 870 allows for partial vector operations, including loads, stores, arithmetic, logical, etc. While embodiments of the invention are described in which the content of the write mask field 870 selects the one of a number of write mask registers that contains the write mask to be used (such that the content of the write mask field 870 indirectly identifies the masking to be performed), alternative embodiments instead or additionally allow the content of the write mask field 870 to directly specify the masking to be performed.
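The merging/zeroing distinction above can be made concrete with a short C sketch. This models the masking semantics on a plain array and is not tied to any particular instruction encoding; the function and its parameters are illustrative.

```c
#include <stdint.h>
#include <stddef.h>

/* Masked vector add: dst[i] = a[i] + b[i] where mask bit i is 1.
   Where the mask bit is 0, merging keeps the old dst element, while
   zeroing sets it to 0 - the two behaviors described above. */
static void masked_add(int32_t *dst, const int32_t *a, const int32_t *b,
                       size_t n, uint64_t mask, int zeroing)
{
    for (size_t i = 0; i < n; i++) {
        if ((mask >> i) & 1)
            dst[i] = a[i] + b[i];   /* element participates in the operation */
        else if (zeroing)
            dst[i] = 0;             /* zeroing-writemasking */
        /* else: merging-writemasking leaves dst[i] unchanged */
    }
}
```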
Immediate field 872—its content allows for the specification of an immediate. This field is optional in the sense that it is not present in an implementation of the generic vector friendly format that does not support immediates, and it is not present in instructions that do not use an immediate.

Class field 868—its content distinguishes between different classes of instructions. With reference to Figures 8A-8B, the content of this field selects between class A and class B instructions. In Figures 8A-8B, rounded corner squares are used to indicate that a specific value is present in a field (e.g., class A 868A and class B 868B for the class field 868, respectively, in Figures 8A-8B).

Instruction Templates of Class A

In the case of the class A non-memory access 805 instruction templates, the alpha field 852 is interpreted as an RS field 852A, whose content distinguishes which one of the different enhanced operation types is to be performed (e.g., round 852A.1 and data transform 852A.2 are respectively specified for the no memory access, round type operation 810 and the no memory access, data transform type operation 815 instruction templates), while the beta field 854 distinguishes which of the operations of the specified type is to be performed. In the no memory access 805 instruction templates, the scale field 860, the displacement field 862A, and the displacement scale field 862B are not present.

No Memory Access Instruction Templates—Full Round Control Type Operation

In the no memory access full round control type operation 810 instruction template, the beta field 854 is interpreted as a round control field 854A, whose content(s) provide static rounding. While in the described embodiments of the invention the round control field 854A includes a suppress all floating-point exceptions (SAE) field 856 and a round operation control field 858, alternative embodiments may encode both of these concepts into the same field, or may have only one or the other of these concepts/fields (e.g., may have only the round operation control field 858).

SAE field 856—its content distinguishes whether or not to disable exception event reporting; when the content of the SAE field 856 indicates that suppression is enabled, a given instruction does not report any kind of floating-point exception flag and does not raise any floating-point exception handler.

Round operation control field 858—its content distinguishes which one of a group of rounding operations to perform (e.g., round-up, round-down, round-towards-zero, and round-to-nearest). Thus, the round operation control field 858 allows the rounding mode to be changed on a per-instruction basis. In one embodiment of the invention where the processor includes a control register for specifying rounding modes, the content of the round operation control field 858 overrides that register value.

No Memory Access Instruction Templates—Data Transform Type Operation

In the no memory access data transform type operation 815 instruction template, the beta field 854 is interpreted as a data transform field 854B, whose content distinguishes which one of a number of data transforms is to be performed (e.g., no data transform, swizzle, broadcast).

In the case of a class A memory access 820 instruction template, the alpha field 852 is interpreted as an eviction hint field 852B, whose content distinguishes which one of the eviction hints is to be used (in Figure 8A, transient 852B.1 and non-transient 852B.2 are respectively specified for the memory access, transient 825 instruction template and the memory access, non-transient 830 instruction template), while the beta field 854 is interpreted as a data manipulation field 854C, whose content distinguishes which one of a number of data manipulation operations (also known as primitives) is to be performed (e.g., no manipulation; broadcast; up conversion of a source; and down conversion of a destination). The memory access 820 instruction templates include the scale field 860 and, optionally, the displacement field 862A or the displacement scale field 862B.

Vector memory instructions perform vector loads from and vector stores to memory, with conversion support. As with regular vector instructions, vector memory instructions transfer data to/from memory in a data-element-wise fashion, with the elements that are actually transferred dictated by the content of the vector mask that is selected as the write mask.

Memory Access Instruction Templates—Transient

Transient data is data likely to be reused soon enough to benefit from caching.
However, this is a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Memory Access Instruction Templates—Non-Transient

Non-transient data is data unlikely to be reused soon enough to benefit from caching in the level 1 cache, and it should be given priority for eviction. However, this too is a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Instruction Templates of Class B

In the case of the class B instruction templates, the alpha field 852 is interpreted as a write mask control (Z) field 852C, whose content distinguishes whether the write masking controlled by the write mask field 870 should be merging or zeroing.

In the case of the class B non-memory access 805 instruction templates, part of the beta field 854 is interpreted as an RL field 857A, whose content distinguishes which one of the different enhanced operation types is to be performed (e.g., round 857A.1 and vector length (VSIZE) 857A.2 are respectively specified for the no memory access, write mask control, partial round control type operation 812 instruction template and the no memory access, write mask control, VSIZE type operation 817 instruction template), while the rest of the beta field 854 distinguishes which of the operations of the specified type is to be performed. In the no memory access 805 instruction templates, the scale field 860, the displacement field 862A, and the displacement scale field 862B are not present.

In the no memory access, write mask control, partial round control type operation 812 instruction template, the rest of the beta field 854 is interpreted as a round operation field 859A, and exception event reporting is disabled (a given instruction does not report any kind of floating-point exception flag and does not raise any floating-point exception handler).

Round operation control field 859A—just as the round operation control field 858, its content distinguishes which one of a group of rounding operations to perform (e.g., round-up, round-down, round-towards-zero, and round-to-nearest). Thus, the round operation control field 859A allows the rounding mode to be changed on a per-instruction basis. In one embodiment of the invention where the processor includes a control register for specifying rounding modes, the content of the round operation control field 858 overrides that register value. (A standard-C illustration of these four rounding modes appears after the class B template descriptions below.)

In the no memory access, write mask control, VSIZE type operation 817 instruction template, the rest of the beta field 854 is interpreted as a vector length field 859B, whose content distinguishes which one of a number of data vector lengths is to be operated on (e.g., 128, 256, or 512 bits).

In the case of a class B memory access 820 instruction template, part of the beta field 854 is interpreted as a broadcast field 857B, whose content distinguishes whether or not a broadcast type data manipulation operation is to be performed, while the rest of the beta field 854 is interpreted as the vector length field 859B. The memory access 820 instruction templates include the scale field 860 and, optionally, the displacement field 862A or the displacement scale field 862B.
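As noted above, the following standard C snippet demonstrates the four named rounding modes using <fenv.h>. It illustrates the modes themselves through the C library's global rounding control, which corresponds to the control-register approach that the per-instruction round operation fields are described as overriding; it makes no assumption about any particular instruction encoding.

```c
#include <fenv.h>
#include <math.h>
#include <stdio.h>

#pragma STDC FENV_ACCESS ON

int main(void)
{
    /* The four rounding modes named above: round-up, round-down,
       round-towards-zero, and round-to-nearest. */
    const int modes[] = { FE_UPWARD, FE_DOWNWARD, FE_TOWARDZERO, FE_TONEAREST };
    const char *names[] = { "up", "down", "toward-zero", "nearest" };

    for (int i = 0; i < 4; i++) {
        fesetround(modes[i]);   /* global mode: affects rint() below */
        printf("%-12s rint(2.5) = %g, rint(-2.5) = %g\n",
               names[i], rint(2.5), rint(-2.5));
    }
    return 0;
}
```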
With respect to the generic vector friendly instruction format 800, a full opcode field 874 is shown including the format field 840, the base operation field 842, and the data element width field 864. While one embodiment is shown in which the full opcode field 874 includes all of these fields, in embodiments that do not support all of them, the full opcode field 874 includes less than all of these fields. The full opcode field 874 provides the operation code (opcode).

The enhanced operation field 850, the data element width field 864, and the write mask field 870 allow these features to be specified on a per-instruction basis in the generic vector friendly instruction format. The combination of the write mask field and the data element width field creates typed instructions, in that they allow the mask to be applied based on different data element widths.

The various instruction templates found within class A and class B are beneficial in different situations. In some embodiments of the invention, different processors, or different cores within a processor, may support only class A, only class B, or both classes. For instance, a high performance general purpose out-of-order core intended for general-purpose computing may support only class B, a core intended primarily for graphics and/or scientific (throughput) computing may support only class A, and a core intended for both may support both (of course, a core that has some mix of templates and instructions from both classes, but not all templates and instructions from both classes, is within the purview of the invention). Also, a single processor may include multiple cores, all of which support the same class or in which different cores support different classes. For instance, in a processor with separate graphics and general purpose cores, one of the graphics cores intended primarily for graphics and/or scientific computing may support only class A, while one or more of the general purpose cores may be high performance general purpose cores with out-of-order execution and register renaming, intended for general-purpose computing, that support only class B. Another processor that does not have a separate graphics core may include one or more general purpose in-order or out-of-order cores that support both class A and class B. Of course, in different embodiments of the invention, features from one class may also be implemented in the other class. Programs written in a high level language would be put (e.g., just-in-time compiled or statically compiled) into a variety of different executable forms, including: 1) a form having only instructions of the class(es) supported by the target processor for execution; or 2) a form having alternative routines written using various combinations of the instructions of all classes and having control flow code that selects the routines to execute based on the instructions supported by the processor that is currently executing the code.

Exemplary Specific Vector Friendly Instruction Format

Figure 9 is a block diagram illustrating an exemplary specific vector friendly instruction format according to embodiments of the invention. Figure 9 shows a specific vector friendly instruction format 900 that is specific in the sense that it specifies the location, size, interpretation, and order of the fields, as well as the values of some of those fields. The specific vector friendly instruction format 900 may be used to extend the x86 instruction set, and thus some of the fields are similar or identical to those used in the existing x86 instruction set and extensions thereof (e.g., AVX). This format remains consistent with the prefix encoding field, the real opcode byte field, the MOD R/M field, the SIB field, the displacement field, and the immediate fields of the existing x86 instruction set with extensions. The fields from Figure 8 into which the fields from Figure 9 map are illustrated.
It should be understood that, although embodiments of the invention are described with reference to the specific vector friendly instruction format 900 in the context of the generic vector friendly instruction format 800 for illustrative purposes, the invention is not limited to the specific vector friendly instruction format 900, unless stated otherwise. For example, the generic vector friendly instruction format 800 contemplates a variety of possible sizes for the various fields, while the specific vector friendly instruction format 900 is shown as having fields of specific sizes. By way of specific example, although the data element width field 864 is illustrated as a one-bit field in the specific vector friendly instruction format 900, the invention is not so limited (that is, the generic vector friendly instruction format 800 contemplates other sizes for the data element width field 864).

The specific vector friendly instruction format 900 includes the following fields listed below in the order illustrated in Figure 9A.

EVEX prefix (bytes 0-3) 902—is encoded in a four-byte form.

Format field 840 (EVEX byte 0, bits [7:0])—the first byte (EVEX byte 0) is the format field 840, and it contains 0x62 (the unique value used for distinguishing the vector friendly instruction format in one embodiment of the invention).

The second through fourth bytes (EVEX bytes 1-3) include a number of bit fields providing specific capability.

REX field 905 (EVEX byte 1, bits [7-5])—consists of an EVEX.R bit field (EVEX byte 1, bit [7]-R), an EVEX.X bit field (EVEX byte 1, bit [6]-X), and an EVEX.B bit field (EVEX byte 1, bit [5]-B). The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the corresponding VEX bit fields and are encoded using 1s complement form, i.e., ZMM0 is encoded as 1111B and ZMM15 is encoded as 0000B. Other fields of the instructions encode the lower three bits of the register indexes (rrr, xxx, and bbb), as is known in the art, so that Rrrr, Xxxx, and Bbbb may be formed by adding EVEX.R, EVEX.X, and EVEX.B.

REX' field 910—this is the first part of the REX' field 910 and is the EVEX.R' bit field (EVEX byte 1, bit [4]-R') used to encode either the upper 16 or the lower 16 of the extended 32-register set. In one embodiment of the invention, this bit, along with others as indicated herein, is stored in bit-inverted format to distinguish it (in the well-known x86 32-bit mode) from the BOUND instruction, whose real opcode byte is 62 but which does not accept the value of 11 in the MOD field of the MOD R/M field (described herein); alternative embodiments of the invention do not store this bit and the other bits indicated herein in the inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R'Rrrr is formed by combining EVEX.R', EVEX.R, and the other RRR from other fields.

Opcode map field 915 (EVEX byte 1, bits [3:0]-mmmm)—its content encodes an implied leading opcode byte (0F, 0F 38, or 0F 3).

Data element width field 864 (EVEX byte 2, bit [7]-W)—is represented by the notation EVEX.W. EVEX.W is used to define the granularity (size) of the datatype (either 32-bit data elements or 64-bit data elements).

EVEX.vvvv 920 (EVEX byte 2, bits [6:3]-vvvv)—the role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first source register operand, specified in inverted (1s complement) form, and is valid for instructions with two or more source operands; 2) EVEX.vvvv encodes the destination register operand, specified in 1s complement form, for certain vector shifts; or 3) EVEX.vvvv does not encode any operand, in which case the field is reserved and should contain 1111b.
Thus, the EVEX.vvvv field 920 encodes the 4 low-order bits of the first source register specifier, stored in inverted (1s complement) form. Depending on the instruction, an extra different EVEX bit field is used to extend the specifier size to 32 registers.

EVEX.U 868 class field (EVEX byte 2, bit [2]-U)—if EVEX.U = 0, it indicates class A or EVEX.U0; if EVEX.U = 1, it indicates class B or EVEX.U1.

Prefix encoding field 925 (EVEX byte 2, bits [1:0]-pp)—provides additional bits for the base operation field. In addition to providing support for the legacy SSE instructions in the EVEX prefix format, this also has the benefit of compacting the SIMD prefix (rather than requiring a byte to express the SIMD prefix, the EVEX prefix requires only 2 bits). In one embodiment, to support legacy SSE instructions that use a SIMD prefix (66H, F2H, F3H) in both the legacy format and the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field and, at runtime, are expanded into the legacy SIMD prefix prior to being provided to the decoder's PLA (so the PLA can execute both the legacy and EVEX formats of these legacy instructions without modification). Although newer instructions could use the content of the EVEX prefix encoding field directly as an opcode extension, certain embodiments expand in a similar fashion for consistency but allow these legacy SIMD prefixes to specify different meanings. An alternative embodiment may redesign the PLA to support the 2-bit SIMD prefix encodings, and thus not require the expansion.

Alpha field 852 (EVEX byte 3, bit [7]-EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.write mask control, and EVEX.N; also illustrated with alpha)—as previously described, this field is context specific.

Beta field 854 (EVEX byte 3, bits [6:4]-SSS; also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also illustrated with βββ)—as previously described, this field is context specific.

REX' field 910—this is the remainder of the REX' field and is the EVEX.V' bit field (EVEX byte 3, bit [3]-V') that may be used to encode either the upper 16 or the lower 16 of the extended 32-register set. This bit is stored in bit-inverted format. A value of 1 is used to encode the lower 16 registers. In other words, V'VVVV is formed by combining EVEX.V' and EVEX.vvvv.

Write mask field 870 (EVEX byte 3, bits [2:0]-kkk)—its content specifies the index of a register in the write mask registers, as previously described. In one embodiment of the invention, the specific value EVEX.kkk = 000 has a special behavior implying that no write mask is used for the particular instruction (this may be implemented in a variety of ways, including the use of a write mask hardwired to all ones or hardware that bypasses the masking hardware).
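The byte and bit positions enumerated above lend themselves to a compact decoding sketch. The C function below extracts the EVEX prefix bit fields from a raw four-byte prefix, following the layout just described (format value 0x62 in byte 0; R/X/B/R'/mmmm in byte 1; W/vvvv/U/pp in byte 2; alpha/beta/V'/kkk in byte 3); the struct and function names are illustrative.

```c
#include <stdint.h>

/* Decoded EVEX prefix fields, per the byte/bit layout described above. */
struct evex_fields {
    uint8_t r, x, b, r_prime;  /* byte 1: bits 7, 6, 5, 4 (stored inverted) */
    uint8_t mmmm;              /* byte 1: bits 3:0, opcode map              */
    uint8_t w, vvvv, u, pp;    /* byte 2: bit 7, bits 6:3, bit 2, bits 1:0  */
    uint8_t alpha, beta, v_prime, kkk; /* byte 3: bit 7, 6:4, 3, 2:0        */
};

/* Returns 0 on success, -1 if byte 0 is not the 0x62 format value. */
int decode_evex(const uint8_t p[4], struct evex_fields *f)
{
    if (p[0] != 0x62) return -1;          /* format field 840 */
    f->r       = (p[1] >> 7) & 1;
    f->x       = (p[1] >> 6) & 1;
    f->b       = (p[1] >> 5) & 1;
    f->r_prime = (p[1] >> 4) & 1;
    f->mmmm    =  p[1]       & 0x0F;
    f->w       = (p[2] >> 7) & 1;
    f->vvvv    = (p[2] >> 3) & 0x0F;      /* inverted (1s complement) form */
    f->u       = (p[2] >> 2) & 1;         /* class field: 0 = A, 1 = B     */
    f->pp      =  p[2]       & 0x03;
    f->alpha   = (p[3] >> 7) & 1;
    f->beta    = (p[3] >> 4) & 0x07;
    f->v_prime = (p[3] >> 3) & 1;
    f->kkk     =  p[3]       & 0x07;
    return 0;
}
```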
Real opcode field 930 (byte 4) is also known as the opcode byte. A part of the opcode is specified in this field.

MOD R/M field 940 (byte 5) includes the MOD field 942, the Reg field 944, and the R/M field 946. As previously described, the content of the MOD field 942 distinguishes between memory access and non-memory access operations. The role of the Reg field 944 can be summarized into two situations: encoding either the destination register operand or a source register operand, or being treated as an opcode extension and not being used to encode any instruction operand. The role of the R/M field 946 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.

Scale, Index, Base (SIB) byte 950 (byte 6)—as previously described, the content of the scale field 860 is used for memory address generation. SIB.SS 952, SIB.xxx 954, and SIB.bbb 956—the contents of these fields have previously been referred to with regard to the register indexes Xxxx and Bbbb.

Displacement field 862A (bytes 7-10)—when the MOD field 942 contains 10, bytes 7-10 are the displacement field 862A, and it works the same as the legacy 32-bit displacement (disp32), operating at byte granularity.

Displacement factor field 862B (byte 7)—when the MOD field 942 contains 01, byte 7 is the displacement factor field 862B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity. Since disp8 is sign extended, it can only address between -128 and 127 byte offsets; in terms of 64-byte cache lines, disp8 uses 8 bits that can be set to only four really useful values, -128, -64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field 862B is a reinterpretation of disp8; when the displacement factor field 862B is used, the actual displacement is determined by multiplying the content of the displacement factor field by the size of the memory operand access (N). This type of displacement is referred to as disp8*N. This reduces the average instruction length (a single byte is used for the displacement, but with a much greater range). Such a compressed displacement is based on the assumption that the effective displacement is a multiple of the granularity of the memory access, and hence the redundant low-order bits of the address offset do not need to be encoded. In other words, the displacement factor field 862B substitutes for the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 862B is encoded the same way as the x86 instruction set 8-bit displacement (so there are no changes in the ModRM/SIB encoding rules), with the only exception that disp8 is overloaded to disp8*N. In other words, there are no changes in the encoding rules or encoding lengths, but only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset). The immediate field 872 operates as previously described.
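The disp8*N reinterpretation just described is easy to demonstrate numerically. The following minimal C illustration sign-extends the stored byte and scales it by the memory operand size N; the function name and the example value N = 64 are illustrative.

```c
#include <stdint.h>
#include <stdio.h>

/* disp8*N: the stored displacement byte, sign extended and scaled by the
   memory operand size N, yields the actual byte-wise address offset. */
static int32_t disp8n(int8_t stored, int32_t n)
{
    return (int32_t)stored * n;
}

int main(void)
{
    /* With N = 64 (e.g., a 64-byte memory access), one stored byte spans
       -8192..8128 in 64-byte steps instead of -128..127. */
    printf("%d %d\n", disp8n(-128, 64), disp8n(127, 64)); /* -8192 8128 */
    return 0;
}
```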
Full Opcode Field

Figure 9B is a block diagram illustrating the fields of the specific vector friendly instruction format 900 that make up the full opcode field 874, according to one embodiment of the invention. Specifically, the full opcode field 874 includes the format field 840, the base operation field 842, and the data element width (W) field 864. The base operation field 842 includes the prefix encoding field 925, the opcode map field 915, and the real opcode field 930.

Register Index Field

Figure 9C is a block diagram illustrating the fields of the specific vector friendly instruction format 900 that make up the register index field 844, according to one embodiment of the invention. Specifically, the register index field 844 includes the REX field 905, the REX' field 910, the MODR/M.reg field 944, the MODR/M.r/m field 946, the VVVV field 920, the xxx field 954, and the bbb field 956.

Enhanced Operation Field

Figure 9D is a block diagram illustrating the fields of the specific vector friendly instruction format 900 that make up the enhanced operation field 850, according to one embodiment of the invention. When the class (U) field 868 contains 0, it signifies EVEX.U0 (class A 868A); when it contains 1, it signifies EVEX.U1 (class B 868B). When U = 0 and the MOD field 942 contains 11 (signifying a no memory access operation), the alpha field 852 (EVEX byte 3, bit [7]-EH) is interpreted as the rs field 852A. When the rs field 852A contains a 1 (round 852A.1), the beta field 854 (EVEX byte 3, bits [6:4]-SSS) is interpreted as the round control field 854A. The round control field 854A includes a one-bit SAE field 856 and a two-bit round operation field 858. When the rs field 852A contains a 0 (data transform 852A.2), the beta field 854 (EVEX byte 3, bits [6:4]-SSS) is interpreted as a three-bit data transform field 854B. When U = 0 and the MOD field 942 contains 00, 01, or 10 (signifying a memory access operation), the alpha field 852 (EVEX byte 3, bit [7]-EH) is interpreted as the eviction hint (EH) field 852B, and the beta field 854 (EVEX byte 3, bits [6:4]-SSS) is interpreted as a three-bit data manipulation field 854C.

When U = 1, the alpha field 852 (EVEX byte 3, bit [7]-EH) is interpreted as the write mask control (Z) field 852C. When U = 1 and the MOD field 942 contains 11 (signifying a no memory access operation), part of the beta field 854 (EVEX byte 3, bit [4]-S0) is interpreted as the RL field 857A; when it contains a 1 (round 857A.1), the rest of the beta field 854 (EVEX byte 3, bits [6-5]-S2-1) is interpreted as the round operation field 859A, while when the RL field 857A contains a 0 (VSIZE 857A.2), the rest of the beta field 854 (EVEX byte 3, bits [6-5]-S2-1) is interpreted as the vector length field 859B (EVEX byte 3, bits [6-5]-L1-0). When U = 1 and the MOD field 942 contains 00, 01, or 10 (signifying a memory access operation), the beta field 854 (EVEX byte 3, bits [6:4]-SSS) is interpreted as the vector length field 859B (EVEX byte 3, bits [6-5]-L1-0) and the broadcast field 857B (EVEX byte 3, bit [4]-B).

Exemplary Register Architecture

Figure 10 is a block diagram of a register architecture 1000 according to one embodiment of the invention. In the embodiment illustrated, there are 32 vector registers 1010 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15. The specific vector friendly instruction format 900 operates on this overlaid register file, as illustrated in the table below.

In other words, the vector length field 859B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length; and instruction templates without the vector length field 859B operate on the maximum vector length. Further, in one embodiment, the class B instruction templates of the specific vector friendly instruction format 900 operate on packed or scalar single/double-precision floating point data and packed or scalar integer data.
Scalar operations are operations performed on the lowest order data element position in a zmm/ymm/xmm register; the higher order data element positions are either left the same as they were prior to the instruction or zeroed, depending on the embodiment.

Write mask registers 1015—in the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size. In an alternative embodiment, the write mask registers 1015 are 16 bits in size. As previously described, in one embodiment of the invention, the vector mask register k0 cannot be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of 0xFFFF, effectively disabling write masking for that instruction.

General purpose registers 1025—in the embodiment illustrated, there are sixteen 64-bit general purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.

Scalar floating point stack register file (x87 stack) 1045, on which the MMX packed integer flat register file 1050 is aliased—in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on floating point data with the x87 instruction set extension, while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.

Alternative embodiments of the invention may use wider or narrower registers. Additionally, alternative embodiments of the invention may use more, fewer, or different register files and registers.
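The register aliasing described above can be sketched in C by treating a 512-bit zmm register as a byte array whose low 256 and 128 bits correspond to the ymm and xmm views. This is a memory-layout illustration only, under the assumption that the overlay behaves like a union of views; it is not how a hardware register file is implemented, and the type name is illustrative.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* One 512-bit zmm register; the ymm view is its low 256 bits and the
   xmm view its low 128 bits, mirroring the overlay described above. */
typedef union {
    uint8_t zmm[64];   /* full 512-bit register */
    uint8_t ymm[32];   /* low 256 bits */
    uint8_t xmm[16];   /* low 128 bits */
} vreg_t;

int main(void)
{
    vreg_t r;
    memset(r.zmm, 0, sizeof r.zmm);
    r.xmm[0] = 0xAB;                   /* write through the xmm view...      */
    printf("%02X\n", r.zmm[0]);        /* ...is visible in the zmm view: AB  */
    return 0;
}
```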
Exemplary Core Architectures, Processors, and Computer Architectures

Processor cores may be implemented in different ways, in different processors, and for different purposes. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; and 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or science (throughput). Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as the CPU; 3) the coprocessor on the same die as the CPU (in which case such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include, on the same die, the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above-described coprocessor, and additional functionality.

An exemplary core architecture is described next, followed by descriptions of exemplary processors and computer architectures.

Exemplary Core Architectures

In-Order and Out-of-Order Core Block Diagram

Figure 11A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. Figure 11B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in Figures 11A-11B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In Figure 11A, a processor pipeline 1100 includes a fetch stage 1102, a length decode stage 1104, a decode stage 1106, an allocation stage 1108, a renaming stage 1110, a scheduling (also known as dispatch or issue) stage 1112, a register read/memory read stage 1114, an execute stage 1116, a write back/memory write stage 1118, an exception handling stage 1122, and a commit stage 1124.

Figure 11B shows a processor core 1190 including a front end unit 1130 coupled to an execution engine unit 1150, both of which are coupled to a memory unit 1170. The core 1190 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 1190 may be a special purpose core, such as, for example, a network or communication core, a compression engine, a coprocessor core, a general purpose computing graphics processing unit (GPGPU) core, a graphics core, or the like.

The front end unit 1130 includes a branch prediction unit 1132 coupled to an instruction cache unit 1134, the instruction cache unit 1134 is coupled to an instruction translation lookaside buffer (TLB) 1136, the instruction TLB 1136 is coupled to an instruction fetch unit 1138, and the instruction fetch unit 1138 is coupled to a decode unit 1140. The decode unit 1140 (or decoder) may decode instructions and generate as an output one or more micro-operations, microcode entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 1140 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 1190 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in the decode unit 1140 or otherwise within the front end unit 1130). The decode unit 1140 is coupled to a rename/allocator unit 1152 in the execution engine unit 1150.

The execution engine unit 1150 includes the rename/allocator unit 1152 coupled to a retirement unit 1154 and a set of one or more scheduler unit(s) 1156. The scheduler unit(s) 1156 represents any number of different schedulers, including reservation stations, a central instruction window, etc. The scheduler unit(s) 1156 is coupled to the physical register file unit(s) 1158.
Each of the physical register file unit(s) 1158 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, or status (e.g., an instruction pointer that is the address of the next instruction to be executed). In one embodiment, the physical register file unit 1158 comprises a vector register unit, a write mask register unit, and a scalar register unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file unit(s) 1158 is overlapped by the retirement unit 1154 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using reorder buffer(s) and retirement register file(s); using future file(s), history buffer(s), and retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 1154 and the physical register file unit(s) 1158 are coupled to the execution cluster(s) 1160. The execution cluster(s) 1160 includes a set of one or more execution units 1162 and a set of one or more memory access units 1164. The execution units 1162 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit, or multiple execution units that all perform all functions. The scheduler unit(s) 1156, the physical register file unit(s) 1158, and the execution cluster(s) 1160 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline, each having its own scheduler unit, physical register file unit, and/or execution cluster; and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 1164). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 1164 is coupled to the memory unit 1170, which includes a data TLB unit 1172 coupled to a data cache unit 1174, which in turn is coupled to a level 2 (L2) cache unit 1176. In one exemplary embodiment, the memory access units 1164 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 1172 in the memory unit 1170. The instruction cache unit 1134 is further coupled to the level 2 (L2) cache unit 1176 in the memory unit 1170.
The L2 cache unit 1176 is coupled to one or more other levels of cache and, eventually, to a main memory.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 1100 as follows: 1) the instruction fetch 1138 performs the fetch and length decode stages 1102 and 1104; 2) the decode unit 1140 performs the decode stage 1106; 3) the rename/allocator unit 1152 performs the allocation stage 1108 and the renaming stage 1110; 4) the scheduler unit(s) 1156 performs the scheduling stage 1112; 5) the physical register file unit(s) 1158 and the memory unit 1170 perform the register read/memory read stage 1114, and the execution cluster 1160 performs the execute stage 1116; 6) the memory unit 1170 and the physical register file unit(s) 1158 perform the write back/memory write stage 1118; 7) various units may be involved in the exception handling stage 1122; and 8) the retirement unit 1154 and the physical register file unit(s) 1158 perform the commit stage 1124.

The core 1190 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, California; and the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, California), including the instruction(s) described herein. In one embodiment, the core 1190 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways, including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter, such as in Hyperthreading technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 1134/1174 and a shared L2 cache unit 1176, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a level 1 (L1) internal cache or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the caches may be external to the core and/or the processor.

Specific Exemplary In-Order Core Architecture

Figures 12A-12B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip. The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.

Figure 12A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 1202 and its local subset of the level 2 (L2) cache 1204, according to embodiments of the invention. In one embodiment, an instruction decoder 1200 supports the x86 instruction set with a packed data instruction set extension.
An L1 cache 1206 allows low-latency accesses to cache memory by the scalar and vector units. While in one embodiment (to simplify the design) a scalar unit 1208 and a vector unit 1210 use separate register sets (respectively, scalar registers 1212 and vector registers 1214), and data transferred between them is written to memory and then read back in from the level 1 (L1) cache 1206, alternative embodiments of the invention may use a different approach (e.g., use a single register set, or include a communication path that allows data to be transferred between the two register files without being written and read back).

The local subset of the L2 cache 1204 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 1204. Data read by a processor core is stored in its L2 cache subset 1204 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 1204 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional, to allow agents such as processor cores, L2 caches, and other logic blocks to communicate with each other within the chip. Each ring data path is 1012 bits wide per direction.

Figure 12B is an expanded view of part of the processor core in Figure 12A according to embodiments of the invention. Figure 12B includes an L1 data cache 1206A (part of the L1 cache 1206), as well as more detail regarding the vector unit 1210 and the vector registers 1214. Specifically, the vector unit 1210 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 1228), which executes one or more of integer, single-precision floating-point, and double-precision floating-point instructions. The VPU supports swizzling of the register inputs with a swizzle unit 1220, numeric conversion with numeric convert units 1222A-B, and replication of the memory input with a replicate unit 1224. Write mask registers 1226 allow predicating the resulting vector writes.

Figure 13 is a block diagram of a processor 1300 that may have more than one core, may have an integrated memory controller, and may have integrated graphics, according to embodiments of the invention. The solid lined boxes in Figure 13 illustrate a processor 1300 with a single core 1302A, a system agent 1310, and a set of one or more bus controller units 1316, while the optional addition of the dashed lined boxes illustrates an alternative processor 1300 with multiple cores 1302A-N, a set of one or more integrated memory controller unit(s) 1314 in the system agent unit 1310, and special purpose logic 1308.
Thus, different implementations of the processor 1300 may include: 1) a CPU with the special purpose logic 1308 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 1302A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, or a combination of the two); 2) a coprocessor with the cores 1302A-N being a large number of special purpose cores intended primarily for graphics and/or science (throughput); and 3) a coprocessor with the cores 1302A-N being a large number of general purpose in-order cores. Thus, the processor 1300 may be a general-purpose processor, a coprocessor, or a special-purpose processor, such as, for example, a network or communication processor, a compression engine, a graphics processor, a GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), an embedded processor, or the like. The processor may be implemented on one or more chips. The processor 1300 may be a part of, and/or may be implemented on, one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

The memory hierarchy may include respective one or more levels of caches 1304A-N within the cores 1302A-N, a set of one or more shared cache units 1306, and external memory (not shown) coupled to the set of integrated memory controller units 1314. The set of shared cache units 1306 may include one or more mid-level caches (e.g., level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache), a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 1312 interconnects the special purpose logic 1308, the set of shared cache units 1306, and the system agent unit 1310/integrated memory controller unit(s) 1314, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 1306 and the cores 1302A-N.

In some embodiments, one or more of the cores 1302A-N are capable of multithreading. The system agent 1310 includes those components coordinating and operating the cores 1302A-N. The system agent unit 1310 may include, for example, a power control unit (PCU) and a display unit. The PCU may be, or may include, the logic and components needed for regulating the power states of the cores 1302A-N and the integrated graphics logic 1308. The display unit is for driving one or more externally connected displays.

The cores 1302A-N may be homogeneous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 1302A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

Exemplary Computer Architectures

Figures 14-17 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the art for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, microcontrollers, cellular phones, portable media players, handheld devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

Referring now to Figure 14, shown is a block diagram of a system 1400 in accordance with one embodiment of the present invention. The system 1400 may include one or more processors 1410, 1415, which are coupled to a controller hub 1420. In one embodiment, the controller hub 1420 includes a graphics memory controller hub (GMCH) 1490 and an Input/Output Hub (IOH) 1450 (which may be on separate chips); the GMCH 1490 includes memory and graphics controllers to which are coupled a memory 1440 and a coprocessor 1445; and the IOH 1450 couples input/output (I/O) devices 1460 to the GMCH 1490.
Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 1440 and the coprocessor 1445 are coupled directly to the processor 1410, and the controller hub 1420 is in a single chip with the IOH 1450.

The optional nature of the additional processors 1415 is denoted in Figure 14 with dashed lines. Each processor 1410, 1415 may include one or more of the processing cores described herein and may be some version of the processor 1300.

The memory 1440 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 1420 communicates with the processor(s) 1410, 1415 via a connection 1495.

In one embodiment, the coprocessor 1445 is a special-purpose processor, e.g., a high-throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like. In one embodiment, the controller hub 1420 may include an integrated graphics accelerator.

There can be a variety of differences between the processors 1410, 1415 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.

In one embodiment, the processor 1410 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 1410 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1445. Accordingly, the processor 1410 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect to the coprocessor 1445. Coprocessor(s) 1445 accept and execute the received coprocessor instructions.
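The division of labor between the processor 1410 and the coprocessor 1445 can be pictured with a short dispatch-loop model. This is a purely illustrative sketch: the "CP." opcode prefix, the instruction tuples, and the queue standing in for the coprocessor bus are all invented for the example:

```python
from collections import deque

# Hypothetical instruction stream; the "CP." prefix marks
# coprocessor-type instructions (invented notation).
instructions = [
    ("ADD", 1, 2),
    ("CP.MATMUL", "a", "b"),
    ("SUB", 5, 3),
    ("CP.CONV", "x", "k"),
]

coprocessor_bus = deque()  # stands in for the coprocessor bus/interconnect

def processor_execute(op, args):
    print(f"processor 1410 executes {op}{args}")

def dispatch(stream):
    """The processor recognizes coprocessor-type instructions and issues
    them on the bus; it executes everything else itself."""
    for op, *args in stream:
        if op.startswith("CP."):
            coprocessor_bus.append((op, tuple(args)))
        else:
            processor_execute(op, tuple(args))

def coprocessor_drain():
    """The coprocessor accepts and executes the received instructions."""
    while coprocessor_bus:
        op, args = coprocessor_bus.popleft()
        print(f"coprocessor 1445 executes {op}{args}")

dispatch(instructions)
coprocessor_drain()
```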
Referring now to Figure 15, shown is a block diagram of a first more specific exemplary system 1500 in accordance with an embodiment of the present invention. As shown in Figure 15, the multiprocessor system 1500 is a point-to-point interconnect system and includes a first processor 1570 and a second processor 1580 coupled via a point-to-point interconnect 1550. Each of the processors 1570 and 1580 may be some version of the processor 1300. In one embodiment of the invention, the processors 1570 and 1580 are respectively the processors 1410 and 1415, while the coprocessor 1538 is the coprocessor 1445. In another embodiment, the processors 1570 and 1580 are respectively the processor 1410 and the coprocessor 1445.

The processors 1570 and 1580 are shown including integrated memory controller (IMC) units 1572 and 1582, respectively. The processor 1570 also includes, as part of its bus controller units, point-to-point (P-P) interfaces 1576 and 1578; similarly, the second processor 1580 includes P-P interfaces 1586 and 1588. The processors 1570, 1580 may exchange information via the point-to-point (P-P) interconnect 1550 using the P-P interface circuits 1578, 1588. As shown in Figure 15, the IMCs 1572 and 1582 couple the processors to respective memories, namely a memory 1532 and a memory 1534, which may be portions of main memory locally attached to the respective processors.

The processors 1570, 1580 may each exchange information with a chipset 1590 via individual P-P interfaces 1552, 1554 using point-to-point interface circuits 1576, 1594, 1586, 1598. The chipset 1590 may optionally exchange information with the coprocessor 1538 via a high-performance interface 1592 and an interconnect 1539.

In one embodiment, the coprocessor 1538 is a special-purpose processor, e.g., a high-throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like.

A shared cache (not shown) may be included in either processor, or outside of both processors yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

The chipset 1590 may be coupled to a first bus 1516 via an interface 1596. In one embodiment, the first bus 1516 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third-generation I/O interconnect bus, although the scope of the present invention is not so limited.

As shown in Figure 15, various I/O devices 1514 may be coupled to the first bus 1516, along with a bus bridge 1518 that couples the first bus 1516 to a second bus 1520. In one embodiment, one or more additional processors 1515, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to the first bus 1516. In one embodiment, the second bus 1520 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 1520 including, for example, a keyboard/mouse 1522, communication devices 1527, and a storage unit 1528 such as a disk drive or other mass storage device, which may include instructions/code and data 1530, in one embodiment. Further, an audio I/O 1524 may be coupled to the second bus 1520. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 15, a system may implement a multi-drop bus or other such architecture.

Referring now to Figure 16, shown is a block diagram of a second more specific exemplary system 1600 in accordance with an embodiment of the present invention. Like elements in Figures 15 and 16 bear like reference numerals, and certain aspects of Figure 15 have been omitted from Figure 16 in order to avoid obscuring other aspects of Figure 16.

Figure 16 illustrates that the processors 1570, 1580 may include integrated memory and I/O control logic ("CL") 1672 and 1682, respectively. Thus, the CL 1672, 1682 include integrated memory controller units and include I/O control logic. Figure 16 illustrates that not only are the memories 1532, 1534 coupled to the CL 1672, 1682, but also that I/O devices 1614 are coupled to the control logic 1672, 1682. Legacy I/O devices 1615 are coupled to the chipset 1590.

Referring now to Figure 17, shown is a block diagram of an SoC 1700 in accordance with an embodiment of the present invention. Like elements in Figure 13 bear like reference numerals. Also, dashed-line boxes are optional features on more advanced SoCs. In Figure 17, an interconnect unit(s) 1702 is coupled to: an application processor 1710 that includes a set of one or more cores 1302A-N and shared cache unit(s) 1306; a system agent unit 1310; bus controller unit(s) 1316; integrated memory controller unit(s) 1314; a set of one or more coprocessors 1720 that may include integrated graphics logic, a graphics processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1730; a direct memory access (DMA) unit 1732; and a display unit 1740 for coupling to one or more external displays.
In one embodiment, the coprocessor(s) 1720 include a special-purpose processor, such as a network or communication processor, a compression engine, a GPGPU, a high-throughput MIC processor, an embedded processor, or the like.

Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the present invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code, such as the code 1530 illustrated in Figure 15, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor and which, when read by a machine, causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processors.

Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as: hard disks; any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), and phase change memory (PCM); magnetic or optical cards; or any other type of media suitable for storing electronic instructions.

Accordingly, embodiments of the present invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines the structures, circuits, apparatuses, processors and/or system features described herein.
Such embodiments may also be referred to as program products.

Emulation (including binary translation, code morphing, etc.)

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

Figure 18 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set, according to an embodiment of the present invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 18 shows that a program in a high-level language 1802 may be compiled using an x86 compiler 1804 to generate x86 binary code 1806 that may be natively executed by a processor 1816 with at least one x86 instruction set core. The processor 1816 with at least one x86 instruction set core represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 1804 represents a compiler that is operable to generate x86 binary code 1806 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor 1816 with at least one x86 instruction set core. Similarly, Figure 18 shows that the program in the high-level language 1802 may be compiled using an alternative instruction set compiler 1808 to generate alternative instruction set binary code 1810 that may be natively executed by a processor 1814 without at least one x86 instruction set core (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies, Inc. of Sunnyvale, Calif. and/or the ARM instruction set of ARM Holdings, Inc. of Sunnyvale, Calif.). The instruction converter 1812 is used to convert the x86 binary code 1806 into code that may be natively executed by the processor 1814 without an x86 instruction set core. This converted code is not likely to be the same as the alternative instruction set binary code 1810, because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1812 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1806.

In one or more first embodiments, a processor comprises a decoder comprising circuitry to decode an instruction set-based secure arbitration mode (SEAM) call (SEAMCALL) instruction, the SEAMCALL instruction comprising a first field to provide an opcode to indicate that a logical processor is to transition from a legacy virtual machine extension (VMX) root operation, and a second field to provide an operation object which is to specify one of: a SEAM loader module to be loaded in a reserved range of a system memory coupled to the processor, wherein a range register of the processor stores information identifying the reserved range, or a SEAM module to be loaded in the reserved range by the SEAM loader module, the SEAM module to initiate the SEAM of the processor; and an execution circuit coupled to the decoder to execute the SEAMCALL instruction, wherein the execution circuit is to determine, based on the operation object, whether to access one of the SEAM loader module or the SEAM module.

In one or more second embodiments, further to the first embodiment, the execution circuit determining whether to access one of the SEAM loader module or the SEAM module comprises the execution circuit determining whether to signal a failure of the SEAMCALL instruction based on both a determination of whether the operation object specifies the SEAM module and a first variable (SEAM_READY) which identifies an availability of the SEAM module, the first variable being different from a second variable (P_SEAMLDR_READY) which identifies whether the SEAM loader module is available.

In one or more third embodiments, further to the first embodiment or the second embodiment, the execution circuit determining whether to access one of the SEAM loader module or the SEAM module comprises the execution circuit determining whether a mutex lock is acquired, wherein the mutex lock is shared among a plurality of logical processors.

In one or more fourth embodiments, further to any of the first through third embodiments, the execution circuit executing the SEAMCALL instruction further comprises the execution circuit determining that the operation object specifies the SEAM loader module, and setting a first variable (inP_SEAMLDR) based on the operation object to indicate that, of the SEAM loader module and the SEAM module, the SEAM loader module is the one more recently called by the logical processor, wherein, of a plurality of logical processors to be provided by the processor, the first variable corresponds only to the logical processor.

In one or more fifth embodiments, further to the fourth embodiment, the processor further comprises a translation lookaside buffer (TLB), wherein the execution circuit is further to execute a SEAMRET instruction of the instruction set, comprising the execution circuit determining, based on the first variable, whether to flush the TLB.

In one or more sixth embodiments, further to any of the first through third embodiments, the processor further comprises a measurement result register, wherein the execution circuit executing the SEAMCALL instruction comprises the execution circuit invoking an execution of the SEAM loader module to write a measurement result for the SEAM module to the measurement result register.

In one or more seventh embodiments, a system comprises a memory, and a processor coupled to the memory, the processor comprising a decoder comprising circuitry to decode an instruction set-based secure arbitration mode (SEAM) call (SEAMCALL) instruction, the SEAMCALL instruction comprising a first field to provide an opcode to indicate that a logical processor is to transition from a legacy virtual machine extension (VMX) root operation, and a second field to provide an operation object which is to specify one of: a SEAM loader module to be loaded in a reserved range of the memory, wherein a range register of the processor stores information identifying the reserved range, or a SEAM module to be loaded in the reserved range by the SEAM loader module, the SEAM module to initiate the SEAM of the processor; and an execution circuit coupled to the decoder to execute the SEAMCALL instruction, wherein the execution circuit is to determine, based on the operation object, whether to access one of the SEAM loader module or the SEAM module.

In one or more eighth embodiments, further to the seventh embodiment, the execution circuit determining whether to access one of the SEAM loader module or the SEAM module comprises the execution circuit determining whether to signal a failure of the SEAMCALL instruction based on both a determination of whether the operation object specifies the SEAM module and a first variable (SEAM_READY) which identifies an availability of the SEAM module, the first variable being different from a second variable (P_SEAMLDR_READY) which identifies whether the SEAM loader module is available.

In one or more ninth embodiments, further to the seventh embodiment or the eighth embodiment, the execution circuit determining whether to access one of the SEAM loader module or the SEAM module comprises the execution circuit determining whether a mutex lock is acquired, wherein the mutex lock is shared among a plurality of logical processors.

In one or more tenth embodiments, further to any of the seventh through ninth embodiments, the execution circuit executing the SEAMCALL instruction further comprises the execution circuit determining that the operation object specifies the SEAM loader module, and setting a first variable (inP_SEAMLDR) based on the operation object to indicate that, of the SEAM loader module and the SEAM module, the SEAM loader module is the one more recently called by the logical processor, wherein, of a plurality of logical processors to be provided by the processor, the first variable corresponds only to the logical processor.

In one or more eleventh embodiments, further to the tenth embodiment, the processor further comprises a translation lookaside buffer (TLB), wherein the execution circuit is further to execute a SEAMRET instruction of the instruction set, comprising the execution circuit determining, based on the first variable, whether to flush the TLB.

In one or more twelfth embodiments, further to any of the seventh through ninth embodiments, the processor further comprises a measurement result register, wherein the execution circuit executing the SEAMCALL instruction comprises the execution circuit invoking an execution of the SEAM loader module to write a measurement result for the SEAM module to the measurement result register.
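The checks recited in the embodiments above can be gathered into one behavioral model. The sketch below is an illustrative reading of the text, not actual microcode or any published interface; the class layout, method names, and return strings are invented, and the flush-when-set direction of the TLB decision is an assumption, since the text only says the decision is based on inP_SEAMLDR:

```python
import threading

class SeamState:
    """Invented container for the state variables named in the embodiments."""

    def __init__(self):
        self.seam_ready = False        # SEAM_READY: SEAM module usable
        self.p_seamldr_ready = True    # P_SEAMLDR_READY: loader usable
        self.mutex = threading.Lock()  # shared among logical processors
        self.in_p_seamldr = {}         # inP_SEAMLDR, one entry per LP

def seamcall(state, lp_id, operand_selects_loader):
    """Model of executing SEAMCALL on logical processor lp_id; the
    operation object selects the SEAM loader module or the SEAM module."""
    if not state.mutex.acquire(blocking=False):
        return "FAIL: mutex not acquired"
    try:
        if operand_selects_loader:
            if not state.p_seamldr_ready:
                return "FAIL: P_SEAMLDR_READY clear"
            # Record that the loader is the module more recently called
            # by this logical processor only.
            state.in_p_seamldr[lp_id] = True
            return "enter SEAM loader module"
        if not state.seam_ready:
            return "FAIL: SEAM_READY clear"
        state.in_p_seamldr[lp_id] = False
        return "enter SEAM module"
    finally:
        state.mutex.release()

def seamret(state, lp_id, flush_tlb):
    """Model of SEAMRET: whether to flush the TLB depends on the
    per-LP inP_SEAMLDR variable."""
    if state.in_p_seamldr.get(lp_id):
        flush_tlb()
    return "return to legacy VMX root operation"

state = SeamState()
print(seamcall(state, lp_id=0, operand_selects_loader=True))
print(seamret(state, lp_id=0, flush_tlb=lambda: print("TLB flushed")))
```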
In one or more thirteenth embodiments, one or more non-transitory computer-readable storage media have stored thereon instructions which, when executed by one or more processing units, cause a core of a processor to perform a method comprising: decoding an instruction set-based secure arbitration mode (SEAM) call (SEAMCALL) instruction, the SEAMCALL instruction comprising a first field to provide an opcode to indicate that a logical processor is to transition from a legacy virtual machine extension (VMX) root operation, and a second field to provide an operation object which is to specify one of: a SEAM loader module to be loaded in a reserved range of a system memory coupled to the processor, wherein a range register of the processor stores information identifying the reserved range, or a SEAM module to be loaded in the reserved range by the SEAM loader module, the SEAM module to initiate the SEAM of the processor; and executing the SEAMCALL instruction, comprising determining, based on the operation object, whether to access one of the SEAM loader module or the SEAM module.

In one or more fourteenth embodiments, further to the thirteenth embodiment, the method further comprises initiating an authentication code module (ACM) at the processor, and loading, with the ACM, the SEAM loader module in the reserved range.

In one or more fifteenth embodiments, further to the fourteenth embodiment, the method further comprises invoking an execution of the SEAM loader module to load the SEAM module in the reserved range.

In one or more sixteenth embodiments, further to the thirteenth embodiment or the fourteenth embodiment, determining whether to access one of the SEAM loader module or the SEAM module comprises determining whether to signal a failure of the SEAMCALL instruction based on both a determination of whether the operation object specifies the SEAM module and a first variable (SEAM_READY) which identifies an availability of the SEAM module, the first variable being different from a second variable (P_SEAMLDR_READY) which identifies whether the SEAM loader module is available.

In one or more seventeenth embodiments, further to any of the thirteenth through fourteenth embodiments, determining whether to access one of the SEAM loader module or the SEAM module comprises determining whether a mutex lock is acquired, wherein the mutex lock is shared among a plurality of logical processors.

In one or more eighteenth embodiments, further to any of the thirteenth through fourteenth embodiments, executing the SEAMCALL instruction further comprises determining that the operation object specifies the SEAM loader module, and setting a first variable (inP_SEAMLDR) based on the operation object to indicate that, of the SEAM loader module and the SEAM module, the SEAM loader module is the one most recently called by the logical processor, wherein, of a plurality of logical processors provided with the processor, the first variable corresponds only to the logical processor.

In one or more nineteenth embodiments, further to the eighteenth embodiment, the processor comprises a translation lookaside buffer (TLB), and the method further comprises executing a SEAMRET instruction of the instruction set, comprising determining, based on the first variable, whether to flush the TLB.

In one or more twentieth embodiments, further to any of the thirteenth through fourteenth embodiments, the processor further comprises a measurement result register, wherein executing the SEAMCALL instruction comprises invoking an execution of the SEAM loader module to write a measurement result for the SEAM module to the measurement result register.

In one or more twenty-first embodiments, a method at a processor comprises: decoding an instruction set-based secure arbitration mode (SEAM) call (SEAMCALL) instruction, the SEAMCALL instruction comprising a first field to provide an opcode to indicate that a logical processor is to transition from a legacy virtual machine extension (VMX) root operation, and a second field to provide an operation object which is to specify one of: a SEAM loader module to be loaded in a reserved range of a system memory coupled to the processor, wherein a range register of the processor stores information identifying the reserved range, or a SEAM module to be loaded in the reserved range by the SEAM loader module, the SEAM module to initiate the SEAM of the processor; and executing the SEAMCALL instruction, comprising determining, based on the operation object, whether to access one of the SEAM loader module or the SEAM module.

In one or more twenty-second embodiments, further to the twenty-first embodiment, the method further comprises initiating an authentication code module (ACM) at the processor, and loading, with the ACM, the SEAM loader module in the reserved range.

In one or more twenty-third embodiments, further to the twenty-second embodiment, the method further comprises invoking an execution of the SEAM loader module to load the SEAM module in the reserved range.

In one or more twenty-fourth embodiments, further to the twenty-first embodiment or the twenty-second embodiment, determining whether to access one of the SEAM loader module or the SEAM module comprises determining whether to signal a failure of the SEAMCALL instruction based on both a determination of whether the operation object specifies the SEAM module and a first variable (SEAM_READY) which identifies an availability of the SEAM module, the first variable being different from a second variable (P_SEAMLDR_READY) which identifies whether the SEAM loader module is available.

In one or more twenty-fifth embodiments, further to any of the twenty-first through twenty-second embodiments, determining whether to access one of the SEAM loader module or the SEAM module comprises determining whether a mutex lock is acquired, wherein the mutex lock is shared among a plurality of logical processors.

In one or more twenty-sixth embodiments, further to any of the twenty-first through twenty-second embodiments, executing the SEAMCALL instruction further comprises determining that the operation object specifies the SEAM loader module, and setting a first variable (inP_SEAMLDR) based on the operation object to indicate that, of the SEAM loader module and the SEAM module, the SEAM loader module is the one more recently called by the logical processor, wherein, of a plurality of logical processors provided with the processor, the first variable corresponds only to the logical processor.

In one or more twenty-seventh embodiments, further to the twenty-sixth embodiment, the processor comprises a translation lookaside buffer (TLB), and the method further comprises executing a SEAMRET instruction of the instruction set, comprising determining, based on the first variable, whether to flush the TLB.

In one or more twenty-eighth embodiments, further to any of the twenty-first through twenty-second embodiments, the processor further comprises a measurement result register, wherein executing the SEAMCALL instruction comprises invoking an execution of the SEAM loader module to write a measurement result for the SEAM module to the measurement result register.

This document describes techniques and architectures for providing security for trusted domains. In the above description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of certain embodiments. It will be apparent, however, to one skilled in the art that certain embodiments can be practiced without these specific details. In other instances, structures and devices are shown in block diagram form in order to avoid obscuring the description.

Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.

Some portions of the detailed description herein are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the computing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the discussion herein, it is appreciated that such discussions refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.

Certain embodiments also relate to apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer.
Such a computer program may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk (including floppy disks, optical disks, CD-ROMs and magneto-optical disks), read-only memory (ROM), random access memory (RAM) (e.g., dynamic RAM (DRAM)), EPROM, EEPROM, magnetic or optical cards, or any type of media suitable for storing electronic instructions, and coupled to a computer system bus.

The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description herein. Furthermore, some embodiments are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of such embodiments as described herein.

Besides what is described herein, various modifications may be made to the disclosed embodiments and implementations thereof without departing from their scope. Therefore, the illustrations and examples herein should be construed in an illustrative, and not a restrictive, sense. The scope of the invention should be measured solely by reference to the appended claims. |
Systems, methods, and other embodiments associated with generating transient identifiers are described. According to one embodiment, an apparatus includes a memory device that stores a primary identifier that is unique to the apparatus. The primary identifier correlates with a displayed identifier of the apparatus that is used by a remote device to initiate communications with the apparatus. The apparatus includes identifier logic configured to generate a secondary identifier in response to receiving an association request that includes the displayed identifier when the apparatus is in a bootstrap mode. The bootstrap mode is a state of the apparatus when the apparatus is initializing and will accept a new association. The association request is a wireless communication that initiates establishing secure communications. The apparatus includes communication logic configured to establish secure wireless communications with the remote device by causing the remote device to identify the apparatus using the secondary identifier. |
CLAIMS
What is claimed is:
1. An apparatus, comprising:
a memory device that stores a primary identifier that is unique to the apparatus, wherein the primary identifier correlates with a displayed identifier of the apparatus that is used by a remote device to initiate communications with the apparatus;
identifier logic configured to generate a secondary identifier in response to receiving an association request that includes the displayed identifier when the apparatus is in a bootstrap mode, wherein the bootstrap mode is a state of the apparatus when the apparatus is initializing and will accept a new association with the remote device, and wherein the association request is a wireless communication that initiates establishing secure communications between the remote device and the apparatus; and
communication logic configured to establish secure wireless communications with the remote device by causing the remote device to identify the apparatus using the secondary identifier instead of using the primary identifier.
2. The apparatus of claim 1, wherein the identifier logic is configured to engage the bootstrap mode in response to a reset request, and wherein the identifier logic is configured to disable the secondary identifier and generate the secondary identifier again as a different identifier in response to receiving the association request when in the bootstrap mode.
3. The apparatus of claim 1, wherein the association request is from the remote device and includes the displayed identifier, wherein the identifier logic is configured to generate the secondary identifier by generating a new public key for the apparatus, and wherein the remote device is a management device that controls the apparatus.
4. The apparatus of claim 1, wherein the identifier logic is configured to generate the secondary identifier by applying a hash function to a public key of a key pair that is assigned to the apparatus, wherein the key pair is an asymmetric key pair that is assigned to the apparatus when the apparatus is manufactured, and wherein the primary identifier is the public key.
5. The apparatus of claim 1, wherein the apparatus is configured to display, or includes an area that displays, the displayed identifier, wherein the displayed identifier is a quick response (QR) code, a passphrase or a truncated hash of the primary identifier, wherein the primary identifier is an out-of-box (OOB) identifier that is assigned to the apparatus by a manufacturer of the apparatus, and wherein the primary identifier is a media access control (MAC) address, a public key or a random string.
6. The apparatus of claim 1, wherein the apparatus includes a button configured to provide a reset request to the communication logic in response to being depressed, wherein the reset request causes the identifier logic to enter the bootstrap mode and to disable the secondary identifier, and wherein the identifier logic is configured to generate a new secondary identifier in response to receiving a subsequent association request after entering the bootstrap mode.
7. 
The apparatus of claim 1, wherein the communication logic is configured to establish the secure wireless communications according to a WiFi protected setup (WPS) protocol, and wherein the communication logic is configured to establish the secure communications by using near-field communications to exchange information with the remote device, wherein the remote device is a master device of the apparatus, and wherein the communication logic is configured to use elliptic curve cryptography (ECC) to encrypt the secure wireless communications.
8. A method, comprising:
storing, in a memory device of an apparatus, a primary identifier that is unique to the apparatus, wherein the primary identifier correlates with a displayed identifier of the apparatus that is used by a remote device to initiate communications with the apparatus;
generating, by the apparatus, a secondary identifier in response to receiving an association request that includes the displayed identifier when the apparatus is in a bootstrap mode, wherein the bootstrap mode is a state of the apparatus when the apparatus is initializing and is open for a new association, and wherein the association request is a wireless communication that initiates establishing secure communications between the remote device and the apparatus; and
establishing secure wireless communications with the remote device by causing the remote device to identify the apparatus using the secondary identifier instead of using the primary identifier.
9. The method of claim 8, further comprising:
engaging the bootstrap mode in response to a reset request, wherein engaging the bootstrap mode includes disabling the secondary identifier and generating a new secondary identifier in response to receiving the association request when in the bootstrap mode, wherein the primary identifier is an out-of-box (OOB) identifier that is assigned to the apparatus by a manufacturer of the apparatus, and wherein the primary identifier is a media access control (MAC) address, a public key or a random string.
10. The method of claim 8, wherein the association request includes the displayed identifier from the remote device, wherein generating the secondary identifier includes generating a new public key for the apparatus, and wherein establishing the secure wireless communications includes the apparatus receiving management and control commands from the remote device.
11. The method of claim 8, wherein generating the secondary identifier includes applying a hash function to a public key of a key pair that is assigned to the apparatus, wherein the key pair is an asymmetric key pair that is assigned to the apparatus when the apparatus is manufactured, and wherein the primary identifier is the public key.
12. The method of claim 8, wherein the displayed identifier is displayed on the apparatus, and wherein the displayed identifier is a quick response (QR) code, a passphrase or a truncated hash of the primary identifier.
13. The method of claim 8, wherein a reset request is provided in response to a button of the apparatus being depressed, and wherein generating the secondary identifier includes generating a new secondary identifier in response to receiving a subsequent association request after engaging the bootstrap mode.
14. 
The method of claim 8, wherein establishing the secure wireless communications uses a WiFi protected setup (WPS) protocol, wherein establishing the secure wireless communications uses near-field communications to exchange information with the remote device, wherein the remote device is a master device of the apparatus, and wherein establishing the secure wireless communications includes using elliptic curve cryptography (ECC) to encrypt the secure wireless communications.
15. A communication device comprising:
a memory device that stores a primary identifier that is unique to the communication device, wherein the primary identifier correlates with a displayed identifier of the communication device that is displayed on a label of the communication device;
identifier logic configured to (i) engage a bootstrap mode for initializing the communication device and (ii) generate a secondary identifier in response to receiving an association request when the communication device is in the bootstrap mode, wherein the bootstrap mode is a state when the device accepts association requests, and wherein the association request is a wireless communication that initiates establishing secure communications between a controlling device and the communication device; and
communication logic configured to establish secure wireless communications with the controlling device by causing the controlling device to identify the communication device using the secondary identifier instead of using the primary identifier or the displayed identifier.
16. The communication device of claim 15, wherein the identifier logic is configured to engage the bootstrap mode in response to a reset request, wherein the identifier logic is configured to disable a previous secondary identifier and generate a new secondary identifier in response to receiving the association request when in the bootstrap mode, wherein the identifier logic is configured to authenticate the association request by verifying that the association request includes either the displayed identifier or the primary identifier, and wherein the primary identifier is an out-of-box (OOB) identifier that is assigned to the communication device by a manufacturer of the communication device.
17. The communication device of claim 15, wherein the association request from the controlling device includes the displayed identifier, wherein the identifier logic is configured to generate the secondary identifier by generating a new public key for the communication device, and wherein the controlling device has a master role in a master/slave relationship with the communication device.
18. The communication device of claim 15, wherein the identifier logic is configured to generate the secondary identifier by applying a hash function to a public key of a key pair that is assigned to the communication device, wherein the key pair is an asymmetric key pair that is assigned to the communication device when the communication device is manufactured, wherein the primary identifier is the public key, wherein the displayed identifier is displayed on the communication device, and wherein the displayed identifier is a quick response (QR) code, a passphrase or a truncated hash of the primary identifier.
19. 
The communication device of claim 15, wherein a reset request causes the identifier logic to engage the bootstrap mode and to disable a current secondary identifier, and wherein the identifier logic is configured to generate a new secondary identifier in response to receiving a subsequent association request after entering the bootstrap mode.
20. The communication device of claim 15, wherein the communication logic is configured to use elliptic curve cryptography (ECC). |
SECURE DEVICE BOOTSTRAP IDENTITY

BACKGROUND

[0001] The background description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventor(s), to the extent the work is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.

[0002] Wireless networks provide a convenient way for devices to communicate and access computer networks. Communications between many different types of devices become simple when cumbersome wiring is replaced with the ability to connect wirelessly. However, as the popularity of wireless connectivity grows, security issues unique to this form of communication are more likely to be exploited.

[0003] For example, to provide security against malicious attacks, devices establish secure relationships to enable encrypting wireless communications. In general, devices may have many different secure relationships. Accordingly, a device may identify a secure relationship with another device according to a unique identifier of a particular device.

[0004] However, when ownership of a device is transferred, difficulties may arise because the unique identifier of the device is transferred along with ownership. Thus, previously established secure relationships may still be recognized as valid because the device maintains the unique identifier even though the ownership has transferred. Accordingly, using the unique identifier to establish the secure relationships may cause difficulties with security.

SUMMARY

[0005] In general, in one aspect this specification discloses an apparatus. The apparatus includes a memory device that stores a primary identifier that is unique to the apparatus. The primary identifier correlates with a displayed identifier of the apparatus that is used by a remote device to initiate communications with the apparatus. The apparatus includes identifier logic configured to generate a secondary identifier in response to receiving an association request that includes the displayed identifier when the apparatus is in a bootstrap mode. The bootstrap mode is a state of the apparatus when the apparatus is initializing and will accept a new association with the remote device. The association request is a wireless communication that initiates establishing secure communications between the remote device and the apparatus. The apparatus includes communication logic configured to establish secure wireless communications with the remote device by causing the remote device to identify the apparatus using the secondary identifier instead of using the primary identifier.

[0006] In another embodiment, the identifier logic is configured to engage the bootstrap mode in response to a reset request. The identifier logic is configured to disable the secondary identifier and generate the secondary identifier again as a different identifier in response to receiving the association request when in the bootstrap mode.

[0007] In another embodiment, the association request is from the remote device and includes the displayed identifier. The identifier logic is configured to generate the secondary identifier by generating a new public key for the apparatus.
The remote device is a management device that controls the apparatus.

[0008] In another embodiment, the identifier logic is configured to generate the secondary identifier by applying a hash function to a public key of a key pair that is assigned to the apparatus. The key pair is an asymmetric key pair that is assigned to the apparatus when the apparatus is manufactured. The primary identifier is the public key.

[0009] In another embodiment, the apparatus is configured to display, or includes an area that displays, the displayed identifier. The displayed identifier is a quick response (QR) code, a passphrase or a truncated hash of the primary identifier. The primary identifier is an out-of-box (OOB) identifier that is assigned to the apparatus by a manufacturer of the apparatus. The primary identifier is a media access control (MAC) address, a public key or a random string.

[0010] In another embodiment, the apparatus includes a button configured to provide a reset request to the communication logic in response to being depressed. The reset request causes the identifier logic to enter the bootstrap mode and to disable the secondary identifier. The identifier logic is configured to generate a new secondary identifier in response to receiving a subsequent association request after entering the bootstrap mode.

[0011] In another embodiment, the communication logic is configured to establish the secure wireless communications according to a WiFi protected setup (WPS) protocol. The communication logic is configured to establish the secure communications by using near-field communications to exchange information with the remote device. The remote device is a master device of the apparatus. The communication logic is configured to use elliptic curve cryptography (ECC) to encrypt the secure wireless communications.

[0012] In general, in another aspect, this specification discloses a method. The method includes storing, in a memory device of an apparatus, a primary identifier that is unique to the apparatus. The primary identifier correlates with a displayed identifier of the apparatus that is used by a remote device to initiate communications with the apparatus. The method includes generating, by the apparatus, a secondary identifier in response to receiving an association request that includes the displayed identifier when the apparatus is in a bootstrap mode. The bootstrap mode is a state of the apparatus when the apparatus is initializing and is open for a new association. The association request is a wireless communication that initiates establishing secure communications between the remote device and the apparatus. The method includes establishing secure wireless communications with the remote device by causing the remote device to identify the apparatus using the secondary identifier instead of using the primary identifier.

[0013] In another embodiment, the method includes engaging the bootstrap mode in response to a reset request. Engaging the bootstrap mode includes disabling the secondary identifier and generating a new secondary identifier in response to receiving the association request when in the bootstrap mode. The primary identifier is an out-of-box (OOB) identifier that is assigned to the apparatus by a manufacturer of the apparatus.
The primary identifier is a media access control (MAC) address, a public key or a random string.

[0014] In another embodiment, the association request includes the displayed identifier from the remote device. Generating the secondary identifier includes generating a new public key for the apparatus. Establishing the secure wireless communications includes the apparatus receiving management and control commands from the remote device.

[0015] In another embodiment, generating the secondary identifier includes applying a hash function to a public key of a key pair that is assigned to the apparatus. The key pair is an asymmetric key pair that is assigned to the apparatus when the apparatus is manufactured. The primary identifier is the public key.

[0016] In another embodiment, the displayed identifier is displayed on the apparatus. The displayed identifier is a quick response (QR) code, a passphrase or a truncated hash of the primary identifier.

[0017] In another embodiment, a reset request is provided in response to a button of the apparatus being depressed. Generating the secondary identifier includes generating a new secondary identifier in response to receiving a subsequent association request after engaging the bootstrap mode.

[0018] In another embodiment, establishing the secure wireless communications uses a WiFi protected setup (WPS) protocol. Establishing the secure wireless communications uses near-field communications to exchange information with the remote device. The remote device is a master device of the apparatus. Establishing the secure wireless communications includes using elliptic curve cryptography (ECC) to encrypt the secure wireless communications.

[0019] In general, in another aspect, this specification discloses a communication device. The communication device includes a memory device that stores a primary identifier that is unique to the communication device. The primary identifier correlates with a displayed identifier of the communication device that is displayed on a label of the communication device. The communication device includes identifier logic configured to (i) engage a bootstrap mode for initializing the communication device and (ii) generate a secondary identifier in response to receiving an association request when the communication device is in the bootstrap mode. The bootstrap mode is a state when the device accepts association requests. The association request is a wireless communication that initiates establishing secure communications between a controlling device and the communication device. The communication device includes communication logic configured to establish secure wireless communications with the controlling device by causing the controlling device to identify the communication device using the secondary identifier instead of using the primary identifier or the displayed identifier.

[0020] In another embodiment, the identifier logic is configured to engage the bootstrap mode in response to a reset request. The identifier logic is configured to disable a previous secondary identifier and generate a new secondary identifier in response to receiving the association request when in the bootstrap mode. The identifier logic is configured to authenticate the association request by verifying that the association request includes either the displayed identifier or the primary identifier.
The primary identifier is an out-of-box (OOB) identifier that is assigned to the communication device by a manufacturer of the communication device.

[0021] In another embodiment, the association request from the controlling device includes the displayed identifier. The identifier logic is configured to generate the secondary identifier by generating a new public key for the communication device. The controlling device has a master role in a master/slave relationship with the communication device.

[0022] In another embodiment, the identifier logic is configured to generate the secondary identifier by applying a hash function to a public key of a key pair that is assigned to the communication device. The key pair is an asymmetric key pair that is assigned to the communication device when the communication device is manufactured. The primary identifier is the public key. The displayed identifier is displayed on the communication device. The displayed identifier is a quick response (QR) code, a passphrase or a truncated hash of the primary identifier.

[0023] In another embodiment, a reset request causes the identifier logic to engage the bootstrap mode and to disable a current secondary identifier. The identifier logic is configured to generate a new secondary identifier in response to receiving a subsequent association request after entering the bootstrap mode.

[0024] In another embodiment, the communication logic is configured to use elliptic curve cryptography (ECC).

BRIEF DESCRIPTION OF THE DRAWINGS

[0025] The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate various systems, methods, and other embodiments of the disclosure. Illustrated element boundaries (e.g., boxes, groups of boxes, or other shapes) in the figures represent one example of the boundaries. In some examples, one element may be designed as multiple elements, or multiple elements may be designed as one element. In some examples, an element shown as an internal component of another element may be implemented as an external component, and vice versa.

[0026] FIG. 1 illustrates one embodiment of an apparatus associated with generating a unique identifier when initialized.

[0027] FIG. 2 illustrates one embodiment of a method associated with using a unique transient secondary identifier that is generated/re-generated upon a reset.

[0028] FIG. 3 illustrates one embodiment of an integrated circuit associated with regenerating a secondary identifier whenever a device is reset.

DETAILED DESCRIPTION

[0029] Described herein are examples of systems, methods, and other embodiments associated with a wireless device that generates a unique transient identifier whenever the wireless device is initialized to communicate with another device. In one embodiment, the wireless device includes a primary identifier that is unique to the wireless device. The primary identifier is, for example, static and does not change. The wireless device then uses a secondary identifier in place of the primary identifier during communications. In this way, the wireless device protects the primary identifier from exposure so that, for example, any future transfer of the wireless device can occur without having previously compromised the primary identifier.

[0030] In one embodiment, to achieve security for the primary identifier, the wireless device generates the secondary identifier from the primary identifier.
Accordingly, subsequent communications established between the wireless device and one or more remote devices use the secondary identifier instead of the primary identifier to identify the wireless device. In this way, the primary identifier is secured while using the secondary identifier that is transient and can be changed/re-generated if the wireless device is reset to, for example, a manufacturer's default settings.[0031] With reference to Figure 1, one embodiment of an apparatus 100 associated with generating a unique identifier when the apparatus 100 is initialized is illustrated. The apparatus 100 includes identifier logic 110, communication logic 120 and a memory device 130. In one embodiment, the memory device 130 stores a primary identifier 140. The primary identifier 140 is unique to the apparatus 100 in order to uniquely identify the apparatus 100 to other devices. In general, the primary identifier 140 may be a public key of an asymmetric key pair, a media access control (MAC) address for a network interface card (NIC) of the apparatus 100, a random string and so on.[0032] Furthermore, in one embodiment, the primary identifier 140 is assigned to the apparatus 100 by a manufacturer of the apparatus 100. That is, when the apparatus 100 is manufactured, the primary identifier 140 is generated and embedded in the memory device 130. Thus, the primary identifier 140 is, for example, static for the apparatus 100 and does not change.[0033] In one embodiment, the primary identifier 140 may be kept secret or may have limited exposure to other devices. In this way, the primary identifier 140 is not overexposed and maintains a higher level of security for subsequent uses in different locations and for different subsequent owners.[0034] In one embodiment, the apparatus 100 also includes a displayed identifier 150. The displayed identifier 150 is, for example, physically displayed on the apparatus 100. That is, the displayed identifier 150 is displayed on a physical label, a graphical display or on the apparatus 100 in some other form. By providing the displayed identifier 150, a user who physically possesses the apparatus 100 can use the displayed identifier 150 to prove possession/ownership of the apparatus 100 when, for example, attempting to establish initial communications with the apparatus 100.[0035] In one embodiment, the displayed identifier 150 is related to the primary identifier 140. For example, the displayed identifier 150 is determined at the time of manufacture along with the primary identifier 140. Thus, in one example, the displayed identifier 150 is calculated as a function of the primary identifier 140.[0036] Accordingly, the user can input the displayed identifier 150 into a remote device 170 that subsequently uses the displayed identifier 150 to authenticate with the apparatus 100 and initiate secured communications. The displayed identifier 150 permits the apparatus 100 to automatically recognize a communication from the remote device 170 as being valid when the apparatus 100 is in a bootstrap/initialization mode. In this way, secure communications can be established with the apparatus 100 for a limited time during the bootstrap/initialization mode by using the displayed identifier 150 as an authenticator.[0037] Furthermore, in one embodiment, in response to receiving an association request that includes the displayed identifier 150 while in the bootstrap/initialization mode, the identifier logic 110 generates/re-generates a secondary identifier 160.
The identifier logic 110 generates the secondary identifier 160 for use by the communication logic 120 as a unique identifier of the apparatus 100 when establishing secure communications with a device that provided the association request (e.g., the remote device 170). In this way, an identifier that is static (e.g., the primary identifier 140 or the displayed identifier 150) is not used with subsequent re-configurations (e.g., "factory resets") of the apparatus 100.[0038] That is, the identifier logic 110 generates the secondary identifier 160 if the apparatus 100 is, for example, in an initialization/bootstrap mode as a result of being "reset" or being fresh out of the box from a manufacturer. This is to maintain security of the primary identifier 140 and the displayed identifier 150 so that the primary identifier 140 and/or the displayed identifier 150 are not registered with different services that may be distributed and difficult to de-register from in the event of the apparatus 100 changing ownership.[0039] Additionally, in one embodiment, the apparatus 100 is a slave device. That is, the apparatus 100 is controlled by a separate device that is a master/controlling device. Accordingly, the apparatus 100 may establish a secure relationship only when first initiated (i.e., when in the bootstrap mode) and with whichever device provides the displayed identifier 150 first. Thus, the apparatus 100 may associate with just the remote device 170 or a limited set of devices associated with the remote device 170 and use the secondary identifier 160 for communicating with that limited set of devices.[0040] Accordingly, while the bootstrap mode is engaged, an association request received by the apparatus 100 will cause the identifier logic 110 to generate the secondary identifier 160 for identifying the apparatus 100 during a present life cycle of use. The identifier logic 110 generates the secondary identifier 160 by, for example, hashing the primary identifier 140, generating a new public key for the apparatus 100 as the secondary identifier 160, generating a pseudorandom number and so on. In general, the identifier logic 110 generates the secondary identifier 160 to be unique and to conform with whichever security standard may govern interactions with the apparatus 100 (e.g., WiFi protected setup (WPS), IEEE 802.11 wireless security standards, etc.).[0041] Subsequently, the communication logic 120 uses the secondary identifier 160 to establish a secure relationship (e.g., encrypted communications) with the remote device 170 instead of using the primary identifier 140 or the displayed identifier 150 as a unique identifier of the apparatus 100. Of course, as previously mentioned, the association request from the remote device 170 may include authentication information such as the primary identifier 140 or the displayed identifier 150 of the apparatus 100 so that the apparatus 100 can authenticate the remote device 170. However, the communication logic 120 causes the remote device 170 to use the secondary identifier 160 to ultimately identify the apparatus 100 and not the primary identifier 140 or the displayed identifier 150.[0042] Once associated with the remote device 170, the apparatus 100 transitions out of the bootstrap mode and the secondary identifier 160 is used for communications between the apparatus 100 and the remote device 170 until, for example, the apparatus 100 is reset.
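By way of illustration only, the following minimal Python sketch shows one plausible way identifiers like those described above could be derived. The hash choice (SHA-256), the per-reset salt, and all names are assumptions of this example, not details taken from the embodiments themselves:

```python
# Illustrative sketch, not the claimed implementation: deriving a label-friendly
# displayed identifier and a transient secondary identifier from a static
# primary identifier, using only Python's standard hashlib/secrets modules.
import hashlib
import secrets

PRIMARY_ID = secrets.token_bytes(32)  # stand-in for the static OOB identifier

def displayed_identifier(primary_id: bytes, length: int = 8) -> str:
    """Truncated hash of the primary identifier, suitable for printing on a label."""
    return hashlib.sha256(primary_id).hexdigest()[:length]

def new_secondary_identifier(primary_id: bytes) -> str:
    """Transient identifier: hash of the primary identifier plus a fresh random
    salt, so each bootstrap cycle yields a different value while the primary
    identifier itself is never exposed."""
    salt = secrets.token_bytes(16)
    return hashlib.sha256(primary_id + salt).hexdigest()

print(displayed_identifier(PRIMARY_ID))      # e.g., printed on the device label
print(new_secondary_identifier(PRIMARY_ID))  # differs after every reset
```

A new public key or a pseudorandom string, as mentioned above, would serve equally well; the only property relied upon is that each value is unique and does not reveal the primary identifier.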
When the apparatus 100 is reset, the above-described process of generating the secondary identifier 160 and establishing secure communications occurs similarly but with a different secondary identifier 160. Thus, the apparatus 100 can be identified using a transient unique identifier whenever reset and can thus avoid re-using the displayed identifier 150 or the primary identifier 140 with subsequent devices when transferred between owners.[0043] Further aspects of the apparatus 100 and how the apparatus 100 generates/regenerates transient identifiers will be discussed in relation to Figure 2. Figure 2 illustrates a method 200 associated with generating a transient secondary identifier of an apparatus (e.g., the apparatus 100). Figure 2 will be discussed from the perspective of the apparatus 100 of Figure 1. Additionally, Figure 2 will be discussed along with a general example of how the apparatus 100 is initially configured from the manufacturer and subsequently operates.[0044] At 210, the primary identifier 140 is stored. In one embodiment, storing the primary identifier occurs when the apparatus 100 is initially manufactured. That is, a manufacturer of the apparatus 100 generates or causes the apparatus 100 to generate the primary identifier 140 and stores the primary identifier 140 in the memory device 130. The primary identifier 140 is an out-of-box (OOB) identifier that is unique to the apparatus 100. As previously discussed, the primary identifier 140 may be a public key or some other unique identifying string.[0045] However, if the primary identifier 140 were used to identify the apparatus 100 to each device and/or service that may communicate with the apparatus 100, then confidentiality/security of the primary identifier 140 would be compromised. This is because, in one embodiment, the primary identifier 140 is static and does not change.[0046] Thus, each service/device that establishes a relationship with the apparatus 100 maintains a unique identifier of the apparatus 100, which would be the primary identifier 140. Accordingly, if the apparatus 100 was transferred to a different owner, then the apparatus 100 would carry over permissions established with the devices/services from a previous owner, which is undesirable and insecure. Therefore, the primary identifier 140 is not used to identify the apparatus 100 on a long-term basis, but instead may be used to just initially establish a secure relationship.[0047] Furthermore, at 210, as part of storing the primary identifier 140, the displayed identifier 150 may also be generated and stored. In one embodiment, the displayed identifier 150 is generated as a function of the primary identifier 140 (e.g., truncated hash of the primary identifier 140). In either case, once generated, the displayed identifier 150 is displayed/displayable to a user who is in possession of the apparatus 100. That is, the displayed identifier 150 is printed on a label, embossed on a surface, rendered on a display of the apparatus 100 and so on. In one embodiment, the displayed identifier 150 is a quick response (QR) code, a passphrase, a truncated hash of the primary identifier 140 and so on.[0048] The following elements 220-260 describe how the apparatus generates/regenerates a unique identifier to use instead of the primary identifier 140 and/or the displayed identifier 150 so that confidentiality of the primary identifier 140 and/or the displayed identifier 150 can be maintained.[0049] At 220, a bootstrap/initialization mode is engaged.
In one embodiment, the bootstrap mode is engaged whenever a button is pressed on the apparatus 100, when the apparatus 100 is newly manufactured, or whenever some process engages the bootstrap mode to reset the apparatus 100. In general, the bootstrap/initialization mode is a state of the apparatus 100 during which the apparatus 100 is initializing and is open for establishing new associations/connections with devices and/or services.[0050] In one embodiment, the bootstrap/initialization mode includes disabling and/or deleting a previous secondary identifier that was in use prior to engaging the bootstrap/initialization mode. In this way, a new secondary identifier can be subsequently generated for establishing new secure communications while ensuring previously established relationships are no longer valid and cannot be exploited by a subsequent owner of the apparatus 100.[0051] At 230, a check is made to determine whether an association request has been received. In one embodiment, an association request is a request received from a device (e.g., the remote device 170) or service to communicate with the apparatus 100. The association request may be a request of a controlling device (e.g., master of a master/slave relationship) to control the apparatus 100.[0052] Thus, the controlling device may be the only device with which the apparatus 100 communicates. For example, if the apparatus 100 is a hot water heater, thermostat, fitness tracker (e.g., pedometer) or other device that is associated with only one or a limited set of devices, then only a single device may need to communicate with the apparatus 100.[0053] If no association request is received, then monitoring for an association request continues until one is received. If an association request is received at 230, the association request is first, for example, analyzed to determine if the association request includes the displayed identifier 150 or the primary identifier 140. In this way, the association request can be authenticated as being from a valid device (e.g., the remote device 170) since it is assumed that whichever device knows the primary identifier 140 and/or the displayed identifier 150 is a valid device in possession of the apparatus 100.[0054] Accordingly, at 240, in response to a valid association request, a new and unique secondary identifier 160 is generated. For example, each time that the apparatus 100 is reset and placed into the bootstrap/initialization mode and subsequently receives a valid association request, the secondary identifier 160 is generated/re-generated as a different unique identifier. In this way, the secondary identifier 160 is transient/ephemeral. In one embodiment, the secondary identifier 160 is a new public key of an asymmetric key pair of the apparatus 100. In another embodiment, the secondary identifier 160 is generated according to elliptic curve cryptography (ECC), as a hash of the primary identifier 140, a truncated hash of the primary identifier 140, a random string or as any other string that uniquely defines the apparatus 100 and which has not been previously used to identify the apparatus 100.[0055] At 250, the secondary identifier 160 generated at 240 is used to establish a secure relationship through secure communications with a device (e.g., the remote device 170) that provided the association request at 230.
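Pulling blocks 210 through 260 together, a compact state-machine sketch (illustrative only; class and method names are hypothetical, and key exchange, radio I/O, and error handling are omitted) might look as follows:

```python
# Illustrative sketch of method 200: a device that stores a static primary
# identifier, authenticates association requests while in bootstrap mode, and
# mints a fresh secondary identifier for each life cycle of use.
import hashlib
import secrets

class Device:
    def __init__(self):
        self.primary_id = secrets.token_hex(32)           # block 210 (OOB, static)
        self.displayed_id = hashlib.sha256(
            self.primary_id.encode()).hexdigest()[:8]     # printed on the label
        self.secondary_id = None
        self.bootstrap = True                             # block 220

    def reset(self):
        """Block 260: disable the transient identifier, re-enter bootstrap mode."""
        self.secondary_id = None
        self.bootstrap = True

    def handle_association(self, credential: str):
        """Blocks 230-250: authenticate the request, then generate a new
        secondary identifier by which the device is thereafter known."""
        if not self.bootstrap:
            return None                                   # requests ignored
        if credential not in (self.displayed_id, self.primary_id):
            return None                                   # block 230: not valid
        salt = secrets.token_hex(16)                      # block 240
        self.secondary_id = hashlib.sha256(
            (self.primary_id + salt).encode()).hexdigest()
        self.bootstrap = False                            # association complete
        return self.secondary_id

device = Device()
first_owner_id = device.handle_association(device.displayed_id)
device.reset()                                            # device changes hands
second_owner_id = device.handle_association(device.displayed_id)
assert first_owner_id and second_owner_id and first_owner_id != second_owner_id
```

The identifier returned to the remote device, and used for all subsequent communications, is the transient secondary identifier; the displayed identifier it submitted serves only as one-time proof of possession.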
In general, the apparatus 100 causes the remote device 170 to use the secondary identifier 160 instead of the primary identifier 140 or the displayed identifier 150 to identify the apparatus 100. In this way, an identifier that can be easily changed but that still uniquely identifies the apparatus 100 can be used when establishing and maintaining communications.[0056] Furthermore, the secure communications are established using, for example, near-field communications to exchange information or some other wireless form of communication (e.g., communications that conform with IEEE 802.11 protocols).[0057] After secure communications have been established, the bootstrap/initialization mode is disengaged and secure communications according to the secure relationship established at 250 continue until a reset request is received at 260.[0058] At 260, if a reset request is received (e.g., a request to reset the apparatus 100 to manufacturer defaults), then the bootstrap/initialization mode is engaged once again at 220 and re-generation of the secondary identifier 160 occurs as previously specified.[0059] Figure 3 illustrates an additional embodiment of the apparatus 100 from Figure 1 that is configured with separate integrated circuits and/or chips. In this embodiment, the identifier logic 110 from Figure 1 is embodied as a separate integrated circuit 310. Additionally, the communication logic 120 is embodied on an individual integrated circuit 320. The memory device 130 is also embodied on an individual integrated circuit 330. The circuits are connected via connection paths to communicate signals. While integrated circuits 310, 320, and 330 are illustrated as separate integrated circuits, they may be integrated into a common circuit board 300. Additionally, integrated circuits 310, 320, and 330 may be combined into fewer integrated circuits or divided into more integrated circuits than illustrated. Additionally, in another embodiment, the identifier logic 110 and the communication logic 120 illustrated in integrated circuits 310 and 320 may be combined into a separate application-specific integrated circuit.[0060] The following includes definitions of selected terms employed herein. The definitions include various examples and/or forms of components that fall within the scope of a term and that may be used for implementation. The examples are not intended to be limiting. Both singular and plural forms of terms may be within the definitions.[0061] References to "one embodiment", "an embodiment", "one example", "an example", and so on, indicate that the embodiment(s) or example(s) so described may include a particular feature, structure, characteristic, property, element, or limitation, but that not every embodiment or example necessarily includes that particular feature, structure, characteristic, property, element or limitation. Furthermore, repeated use of the phrase "in one embodiment" does not necessarily refer to the same embodiment, though it may.[0062] "Computer storage medium" as used herein is a non-transitory medium that stores instructions and/or data. A computer storage medium may take forms, including, but not limited to, non-volatile media, and volatile media. Non-volatile media may include, for example, optical disks, magnetic disks, and so on. Volatile media may include, for example, semiconductor memories, dynamic memory, and so on.
Common forms of computer storage media may include, but are not limited to, a floppy disk, a flexible disk, a hard disk, a magnetic tape, other magnetic medium, an ASIC, a CD, other optical medium, a RAM, a ROM, a memory chip or card, a memory stick, and other electronic media that can store computer instructions and/or data.[0063] "Logic" as used herein includes a computer or electrical hardware component(s), firmware, a non-transitory computer storage medium that stores instructions, and/or combinations of these components configured to perform a function(s) or an action(s), and/or to cause a function or action from another logic, method, and/or system. Logic may include a microprocessor controlled by an algorithm, a discrete logic (e.g., ASIC), an analog circuit, a digital circuit, a programmed logic device, a memory device containing instructions that when executed perform an algorithm, and so on. Logic may include one or more gates, combinations of gates, or other circuit components. Where multiple logics are described, it may be possible to incorporate the multiple logics into one physical logic component. Similarly, where a single logic unit is described, it may be possible to distribute that single logic unit between multiple physical logic components.[0064] While, for purposes of simplicity of explanation, the illustrated methodologies are shown and described as a series of blocks, the methodologies are not limited by the order of the blocks, as some blocks can occur in different orders and/or concurrently with other blocks from that shown and described. Moreover, less than all the illustrated blocks may be used to implement an example methodology. Blocks may be combined or separated into multiple components. Furthermore, additional and/or alternative methodologies can employ additional actions that are not illustrated in blocks.[0065] To the extent that the term "includes" or "including" is employed in the detailed description or the claims, it is intended to be inclusive in a manner similar to the term "comprising" as that term is interpreted when employed as a transitional word in a claim.[0066] While the disclosed embodiments have been illustrated and described in considerable detail, it is not the intention to restrict or in any way limit the scope of the appended claims to such detail. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the various aspects of the subject matter. Therefore, the disclosure is not limited to the specific details or the illustrative examples shown and described. Thus, this disclosure is intended to embrace alterations, modifications, and variations that fall within the scope of the appended claims.
A semiconductor device comprising first and second dies is provided. The first die includes a first through-substrate via (TSV) extending at least substantially through the first die and a first substantially helical conductor disposed around the first TSV. The second die includes a second TSV coupled to the first TSV and a second substantially helical conductor disposed around the second TSV. The first substantially helical conductor is configured to induce a change in a magnetic field in the first and second TSVs in response to a first changing current in the first substantially helical conductor, and the second substantially helical conductor is configured to have a second changing current induced therein in response to the change in the magnetic field in the second TSV. |
CLAIMSI/We claim:1. A semiconductor device, comprising:a first die including:a first through-substrate via (TSV) extending at least substantially through the first die, and a first substantially helical conductor disposed around the first TSV; and a second die including:a second TSV coupled to the first TSV, and a second substantially helical conductor disposed around the second TSV.2. The semiconductor device of claim 1, wherein the first substantially helical conductor is configured to induce a change in a magnetic field in the first and second TSVs in response to a first changing current in the first substantially helical conductor, and wherein the second substantially helical conductor is configured to have a second changing current induced therein in response to the change in the magnetic field in the second TSV.3. The semiconductor device of claim 1, wherein the second TSV is coupled to the first TSV by a solder connection.4. The semiconductor device of claim 3, wherein the solder connection is separated from the first and second TSVs by a barrier material configured to prevent solder diffusion into the first and second TSVs.5. The semiconductor device of claim 3, wherein the solder connection comprises a magnetic material.6. The semiconductor device of claim 1, wherein the second TSV is coupled to the first TSV by a solder-free mechanical connection.7. The semiconductor device of claim 1, wherein the second TSV is magnetically coupled to the first TSV across a distance physically separating the first TSV and the second TSV.8. The semiconductor device of claim 1, wherein the first TSV and the second TSV are coaxially aligned.9. The semiconductor device of claim 1, wherein the first and second TSVs comprise a ferromagnetic or a ferrimagnetic material.10. The semiconductor device of claim 1, wherein the first TSV is separated from the first substantially helical conductor by an insulating material, and the second TSV is separated from the second substantially helical conductor by an insulating material.11. The semiconductor device of claim 1, wherein the first substantially helical conductor comprises more than one turn around the first TSV, and the second substantially helical conductor comprises more than one turn around the second TSV.12. The semiconductor device of claim 1, wherein the first substantially helical conductor is coaxially aligned with the first TSV.13. The semiconductor device of claim 1, wherein the second substantially helical conductor is coaxially aligned with the second TSV.14. A semiconductor device, comprising:a first die including:a first through-substrate via (TSV) extending at least substantially through the first die, a second TSV extending at least substantially through the first die, and a first substantially helical conductor disposed around one of the first and second TSVs; and a second die including:a third TSV coupled to the first TSV, a fourth TSV coupled to the second TSV, and a second substantially helical conductor disposed around one of the third and fourth TSVs.15.
The semiconductor device of claim 14, wherein the first substantially helical conductor is configured to induce a change in a magnetic field in the first, second, third and fourth TSVs in response to a first changing current in the first substantially helical conductor, and wherein the second substantially helical conductor is configured to have a second changing current induced therein in response to the change in the magnetic field in the TSV around which the second substantially helical conductor is disposed.16. The semiconductor device of claim 14, wherein the first, second, third and fourth TSVs comprise a ferromagnetic or a ferrimagnetic material.17. The semiconductor device of claim 14, wherein the second TSV is coupled to the first TSV by an upper coupling member above the first substantially helical conductor.18. The semiconductor device of claim 17, wherein the upper coupling member comprises a ferromagnetic or a ferrimagnetic material.19. The semiconductor device of claim 14, wherein the fourth TSV is coupled to the third TSV by a lower coupling member below the second substantially helical conductor.20. The semiconductor device of claim 19, wherein the lower coupling member comprises a ferromagnetic or a ferrimagnetic material.21. The semiconductor device of claim 14, wherein the third and fourth TSVs extend at least substantially through the second die.22. The semiconductor device of claim 14, wherein the third TSV is coupled to the first TSV by a first solder connection, and the fourth TSV is coupled to the second TSV by a second solder connection.23. The semiconductor device of claim 22, wherein the first solder connection is separated from the first and third TSVs by a barrier material configured to prevent solder diffusion into the first and third TSVs.24. The semiconductor device of claim 22, wherein the second solder connection is separated from the second and fourth TSVs by a barrier material configured to prevent solder diffusion into the second and fourth TSVs.25. The semiconductor device of claim 22, wherein the first and second solder connections comprise a magnetic material.26. The semiconductor device of claim 14, wherein the third TSV is magnetically coupled to the first TSV across a first distance physically separating the third TSV and the first TSV, and the fourth TSV is magnetically coupled to the second TSV across a second distance physically separating the fourth TSV and the second TSV.27. The semiconductor device of claim 14, wherein the first substantially helical conductor is disposed around the first TSV, and the second substantially helical conductor is disposed around the third TSV.28. A semiconductor package, comprising:a first die;a second die disposed over the first die; and a coupled inductor including:a magnetic core having a first through-substrate via (TSV) disposed in the first die and a second TSV disposed in the second die and coupled to the first TSV, a primary winding disposed around the first TSV, and a secondary winding disposed around the second TSV.29. The semiconductor package of claim 28, wherein the primary winding is configured to induce a change in a magnetic field in the first and second TSVs in response to a first changing current in the primary winding, and wherein the secondary winding is configured to have a second changing current induced therein in response to the change in the magnetic field in the second TSV.30.
The semiconductor package of claim 28, wherein the first TSV extends at least substantially through the first die.31. The semiconductor package of claim 28, wherein the second TSV extends at least substantially through the second die.32. The semiconductor package of claim 28, wherein the first and second TSVs comprise a ferromagnetic or a ferrimagnetic material.33. The semiconductor package of claim 28, wherein the primary winding comprises a substantially helical conductor disposed coaxially around the first TSV.34. The semiconductor package of claim 28, wherein the secondary winding comprises a substantially helical conductor disposed coaxially around the second TSV.
MULTI-DIE INDUCTORS WITH COUPLED THROUGH-SUBSTRATEVIA CORESCROSS-REFERENCE TO RELATED APPLICATION(S)[0001] This application contains subject matter related to a concurrently-filed U.S. Patent Application by Kyle K. Kirby, entitled "SEMICONDUCTOR DEVICES WITH BACK-SIDE COILS FOR WIRELESS SIGNAL AND POWER COUPLING." The related application, the disclosure of which is incorporated by reference herein, is assigned to Micron Technology, Inc., and is identified by attorney docket number 10829-9206.US00.[0002] This application contains subject matter related to a concurrently-filed U.S. Patent Application by Kyle K. Kirby, entitled "SEMICONDUCTOR DEVICES WITH THROUGH-SUBSTRATE COILS FOR WIRELESS SIGNAL AND POWER COUPLING." The related application, the disclosure of which is incorporated by reference herein, is assigned to Micron Technology, Inc., and is identified by attorney docket number 10829-9207.US00.[0003] This application contains subject matter related to a concurrently-filed U.S. Patent Application by Kyle K. Kirby, entitled "INDUCTORS WITH THROUGH-SUBSTRATE VIA CORES." The related application, the disclosure of which is incorporated by reference herein, is assigned to Micron Technology, Inc., and is identified by attorney docket number 10829-9208.US00.[0004] This application contains subject matter related to a concurrently-filed U.S. Patent Application by Kyle K. Kirby, entitled "3D INTERCONNECT MULTI-DIE INDUCTORS WITH THROUGH-SUBSTRATE VIA CORES." The related application, the disclosure of which is incorporated by reference herein, is assigned to Micron Technology, Inc., and is identified by attorney docket number 10829-9221.US00.TECHNICAL FIELD[0005] The present disclosure generally relates to semiconductor devices, and more particularly relates to semiconductor devices including multi-die inductors with through-substrate via cores, and methods of making and using the same. BACKGROUND[0006] As the need for miniaturization of electronic circuits continues to increase, the need to minimize various circuit elements, such as inductors, increases apace. Inductors are an important component in many discrete element circuits, such as impedance-matching circuits, linear filters, and various power circuits. Since traditional inductors are bulky components, successful miniaturization of inductors presents a challenging engineering problem.[0007] One approach to miniaturizing an inductor is to use standard integrated circuit building blocks, such as resistors, capacitors, and active circuitry, such as operational amplifiers, to design an active inductor that simulates the electrical properties of a discrete inductor. Active inductors can be designed to have a high inductance and a high Q factor, but inductors fabricated using these designs consume a great deal of power and generate noise. Another approach is to fabricate a spiral-type inductor using conventional integrated circuit processes. Unfortunately, spiral inductors in a single level (e.g., plane) occupy a large surface area, such that the fabrication of a spiral inductor with high inductance can be cost- and size-prohibitive.
Accordingly, there is a need for other approaches to the miniaturization of inductive elements in semiconductor devices.BRIEF DESCRIPTION OF THE DRAWINGS[0008] Figure 1 is a simplified cross-sectional view of a multi-die semiconductor device including coupled inductors with through-substrate via cores configured in accordance with an embodiment of the present technology.[0009] Figure 2 is a simplified perspective view of a substantially helical conductor disposed around a through-substrate via configured in accordance with an embodiment of the present technology.[0010] Figure 3 is a simplified cross-sectional view of a multi-die semiconductor device including coupled inductors with through-substrate via cores configured in accordance with an embodiment of the present technology.[0011] Figure 4 is a simplified cross-sectional view of a multi-die semiconductor device including coupled inductors with through-substrate via cores configured in accordance with an embodiment of the present technology. [0012] Figure 5 is a simplified cross-sectional view of a multi-die semiconductor device including coupled inductors with through-substrate via cores configured in accordance with an embodiment of the present technology.[0013] Figure 6 is a simplified cross-sectional view of a multi-die semiconductor device including coupled inductors with through-substrate via cores configured in accordance with an embodiment of the present technology.[0014] Figure 7 is a simplified cross-sectional view of a multi-die semiconductor device including coupled inductors with through-substrate via cores configured in accordance with an embodiment of the present technology.[0015] Figure 8 is a simplified perspective view of a substantially helical conductor disposed around a through-substrate via configured in accordance with an embodiment of the present technology.[0016] Figures 9A through 9D are simplified cross-sectional views of a multi-die semiconductor device including coupled inductors with through-substrate via cores at various stages of a manufacturing process in accordance with an embodiment of the present technology.[0017] Figures 9E through 9H are simplified perspective views of a multi-die semiconductor device including coupled inductors with through-substrate via cores at various stages of a manufacturing process in accordance with an embodiment of the present technology.[0018] Figure 10 is a flow chart illustrating a method of manufacturing a multi-die semiconductor device including coupled inductors with through-substrate via cores in accordance with an embodiment of the present technology.DETAILED DESCRIPTION[0019] In the following description, numerous specific details are discussed to provide a thorough and enabling description for embodiments of the present technology. One skilled in the relevant art, however, will recognize that the disclosure can be practiced without one or more of the specific details. In other instances, well-known structures or operations often associated with semiconductor devices are not shown, or are not described in detail, to avoid obscuring other aspects of the technology. In general, it should be understood that various other devices, systems, and methods in addition to those specific embodiments disclosed herein may be within the scope of the present technology. [0020] As discussed above, semiconductor devices are continually designed with ever greater needs for inductors with high inductance that occupy a small area. 
These needs are especially acute in multi-die devices with coupled inductors in different dies, where the efficiency of the inductor coupling can depend in part upon the inductors having high inductance. Accordingly, several embodiments of semiconductor devices in accordance with the present technology can provide multi-die coupled inductors having through-substrate via cores, which can provide high inductance and efficient coupling while consuming only a small area.[0021] Several embodiments of the present technology are directed to semiconductor devices comprising multiple dies. A first die of the device includes a first through-substrate via (TSV) extending at least substantially through the first die and a first substantially helical conductor disposed around the first TSV. A second die of the device includes a second TSV coupled to the first TSV and a second substantially helical conductor disposed around the second TSV. The first substantially helical conductor can be a non-planar spiral configured to induce a change in a magnetic field in the first and second TSVs in response to a first changing current in the first substantially helical conductor, and the second substantially helical conductor can be a non-planar spiral configured to have a second changing current induced therein in response to the change in the magnetic field in the second TSV.[0022] Figure 1 is a simplified cross-sectional view of a multi-die semiconductor device 100 including coupled inductors with TSV cores configured in accordance with an embodiment of the present technology. The device 100 includes a first die 101 and a second die 151. The first die 101 has a first substrate 101a and a first insulating material 101b. The device 100 further includes a first TSV 102 that extends at least substantially through the first die 101 (e.g., extending from approximately the bottom of the first substrate 101a to beyond an upper surface of the first substrate 101a - completely through the first substrate 101a - and into the first insulating material 101b). The device 100 also includes a first substantially helical conductor 103 ("conductor 103") disposed around the first TSV 102. In the present embodiment, the first conductor 103 is shown to include three complete turns (103a, 103b, and 103c) around the first TSV 102. The first conductor 103 can be operably connected to other circuit elements (not shown) by leads 120a and 120b.[0023] The turns 103a-103c of the first conductor 103 are electrically insulated from one another and from the first TSV 102. In one embodiment, the first insulating material 101b electrically isolates the first conductor 103 from the first TSV 102. In another embodiment, the first conductor 103 can have a conductive inner region covered (e.g., coated) by a dielectric or insulating outer layer. For example, an outer layer of the first conductor 103 can be an oxide layer, and an inner region of the first conductor 103 can be copper, gold, tungsten, or alloys thereof. The first TSV 102 can also include an outer layer and a magnetic material within the outer layer. The outer layer can be a dielectric or insulating material (e.g., silicon oxide, silicon nitride, polyimide, etc.) that electrically isolates the magnetic material of the first TSV 102 from the first conductor 103. One aspect of the first conductor 103 is that the individual turns 103a-103c define a non-planar spiral with respect to the longitudinal dimension "L" of the first TSV 102.
Each subsequent turn 103a-103c is at a different elevation along the longitudinal dimension L of the first TSV 102 in the non-planar spiral of the first conductor 103.[0024] According to one embodiment of the present technology, the first substrate 101a can be any one of a number of substrate materials suitable for semiconductor processing methods, including silicon, glass, gallium arsenide, gallium nitride, organic laminates, and the like. As will be readily understood by those skilled in the art, a through-substrate via, such as the first TSV 102, can be made by etching a high-aspect-ratio hole into a substrate material and filling it with one or more materials in one or more deposition and/or plating steps. Accordingly, the first TSV 102 extends at least substantially through the first substrate 101a, which is unlike other circuit elements that are additively constructed on top of the first substrate 101a. For example, the first substrate 101a can be a thinned silicon wafer of about 100 μm thickness, and the first TSV 102 can extend completely through the first substrate 101a, such that a lowermost portion of the first TSV 102 can be exposed for mechanical and/or electrical connection to elements in another die.[0025] The second die 151 has a second substrate 151a, a second insulating material 151b, and a second TSV 152 in the second die 151 extending out of the second substrate 151a and into the second insulating material 151b. The device 100 further includes a second substantially helical conductor 153 ("conductor 153") disposed around the second TSV 152. In the present embodiment, the second conductor 153 is shown to include three complete turns (153a, 153b, and 153c) around the second TSV 152. The second conductor 153 can be operably connected to other circuit elements (not shown) by leads 170a and 170b.[0026] The three turns 153a-153c of the second conductor 153 are electrically insulated from one another and from the second TSV 152. In one embodiment, the second insulating material 151b electrically isolates the second conductor 153 from the second TSV 152. In another embodiment, the second conductor 153 can have a conductive inner region covered (e.g., coated) by a dielectric or insulating outer layer. For example, an outer layer of the second conductor 153 can be an oxide layer, and an inner region of the second conductor 153 can be copper, gold, tungsten, or alloys thereof. The second TSV 152 can also include an outer layer and a magnetic material within the outer layer. The outer layer can be a dielectric or insulating material (e.g., silicon oxide, silicon nitride, polyimide, etc.) that electrically isolates the magnetic material of the second TSV 152 from the second conductor 153. One aspect of the second conductor 153 is that the individual turns 153a-153c define a non-planar spiral with respect to the longitudinal dimension "L" of the second TSV 152. Each subsequent turn 153a-153c is at a different elevation along the longitudinal dimension L of the second TSV 152 in the non-planar spiral of the second conductor 153.[0027] According to one embodiment of the present technology, the second substrate 151a can be any one of a number of substrate materials suitable for semiconductor processing methods, including silicon, glass, gallium arsenide, gallium nitride, organic laminates, and the like.
As will be readily understood by those skilled in the art, a through-substrate via, such as the second TSV 152, can be made by etching a high-aspect-ratio hole into a substrate material and filling it with one or more materials in one or more deposition and/or plating steps. Accordingly, the second TSV 152 extends substantially into the second substrate 151a, unlike other circuit elements that are additively constructed on top of the second substrate 151a. For example, the second substrate 151a can be a silicon wafer of about 800 μm thickness, and the second TSV 152 can extend from 30 to 100 μm into the second substrate 151a. In other embodiments, a TSV may extend even further into a substrate material (e.g., 150 μm, 200 μm, etc.), or may extend into a substrate material by as little as 10 μm.[0028] According to one embodiment, the first conductor 103 can be configured to induce a magnetic field in the first and second TSVs 102 and 152 in response to a current passing through the first conductor 103 (e.g., provided by a voltage differential applied across the leads 120a and 120b). By changing the current passing through the first conductor 103 (e.g., by applying an alternating current, or by repeatedly switching between high and low voltage states), a changing magnetic field can be induced in the first and second TSVs 102 and 152, which in turn induces a changing current in the second conductor 153. In this fashion, signals and/or power can be coupled between a circuit comprising the first conductor 103 and another comprising the second conductor 153.[0029] In another embodiment, the second conductor 153 can be configured to induce a magnetic field in the first and second TSVs 102 and 152 in response to a current passing through the second conductor 153 (e.g., provided by a voltage differential applied across leads 170a and 170b). By changing the current passing through the second conductor 153 (e.g., by applying an alternating current, or by repeatedly switching between high and low voltage states), a changing magnetic field can be induced in the first and second TSVs 102 and 152, which in turn induces a changing current in the first conductor 103. In this fashion, signals and/or power can be coupled between a circuit comprising the second conductor 153 and another comprising the first conductor 103.[0030] In accordance with one embodiment of the present technology, the two TSVs 102 and 152 can include a magnetic material (e.g., a material with a higher magnetic permeability than the materials of the first and second substrates 101a and 151a and/or the first and second insulating materials 101b and 151b) to increase the magnetic field in the two TSVs 102 and 152 when current is flowing through the first and/or second conductors 103 and/or 153. The magnetic material can be ferromagnetic, ferrimagnetic, or a combination thereof. In one embodiment, the two TSVs 102 and 152 can have the same composition, and in other embodiments, the two TSVs 102 and 152 can have different compositions. The two TSVs 102 and 152 can include more than one material, either in a bulk material of a single composition or in discrete regions of different materials (e.g., coaxial laminate layers). For example, the two TSVs 102 and 152 can include nickel, iron, cobalt, niobium, or alloys thereof.[0031] The two TSVs 102 and 152 can include a bulk material with desirable magnetic properties (e.g.,
elevated magnetic permeability provided by nickel, iron, cobalt, niobium, or an alloy thereof), or can include multiple discrete layers, only some of which are magnetic, in accordance with an embodiment of the present technology. For example, following a high-aspect-ratio etch and a deposition of insulator, each of the first and second TSVs 102 and 152 can be provided in a single metallization step filling in the insulated opening with a magnetic material. In another embodiment, each of the first and second TSVs 102 and 152 can be formed in multiple steps to provide coaxial layers (e.g., two or more magnetic layers separated by one or more non-magnetic layers). For example, multiple conformal plating operations can be performed before a bottom-up fill operation to provide a TSV with a coaxial layer of nonmagnetic material separating a core of magnetic material and an outer coaxial layer of magnetic material. In this regard, a first conformal plating step can partially fill and narrow the etched opening with a magnetic material (e.g., nickel, iron, cobalt, niobium, or an alloy thereof), a second conformal plating step can further partially fill and further narrow the opening with a non-magnetic material (e.g., polyimide or the like), and a subsequent bottom-up plating step (e.g., following the deposition of a seed material at the bottom of the narrowed opening) can completely fill the narrowed opening with another magnetic material (e.g., nickel, iron, cobalt, niobium, or an alloy thereof). Such a structure with laminated coaxial layers of magnetic and non-magnetic material can help to reduce eddy current losses in a TSV through which a magnetic flux is passing.[0032] In accordance with one embodiment of the present technology, the first and second TSVs 102 and 152 can be coupled in any one of a number of ways to improve the magnetic permeability of the path followed by a magnetic field generated by a current through one of the two conductors 103 and 153. For example, in the embodiment illustrated in Figure 1, the first TSV 102 is coupled to the second TSV 152 by a solder connection 140. The solder connection 140 can be separated from the first TSV 102 by a barrier material 141 and separated from the second TSV 152 by another barrier material 142. The barrier materials 141 and 142 can be configured to prevent solder diffusion into the two TSVs 102 and 152. The solder connection 140 can include a magnetic material to enhance its magnetic permeability. For example, the solder connection 140 can include nickel, iron, cobalt, niobium, or alloys thereof. In other embodiments, TSVs in adjacent dies can be coupled using any one of a number of other interconnect methods (e.g., copper-to-copper bonding, pill and pad, interference fit, mechanical, etc.).[0033] A conductive winding (e.g., the conductors 103 and 153) of an inductor disposed around a TSV magnetic core (e.g., the TSVs 102 and 152) need not be smoothly helical in several embodiments of the present technology. Although the conductors 103 and 153 are illustrated schematically and functionally in Figure 1 as having turns that, in cross section, appear to gradually increase in distance from a surface of a respective substrate, it will be readily understood by those skilled in the art that fabricating a smooth helix with an axis perpendicular to a surface of a substrate presents a significant engineering challenge.
Accordingly, a "substantially helical" conductor, as used herein, describes a conductor having turns that are separated along the longitudinal dimension L of the TSV (e.g. , the z-dimension perpendicular to the substrate surface), but which are not necessarily smoothly varying in the z-dimension (e.g. , the substantially helical shape does not possess arcuate, curved surfaces and a constant pitch angle). Rather, an individual turn of the conductor can have a pitch of zero degrees and the adjacent turns can be electrically coupled to each other by steeply-angled or even vertical connectors (e.g. , traces or vias) with a larger pitch, such that a "substantially helical" conductor can have a stepped structure. Moreover, the planar shape traced out by the path of individual turns of a substantially helical conductor need not be elliptical or circular. For the convenience of integration with efficient semiconductor processing methodologies (e.g., masking with cost- effective reticles), individual turns of a substantially helical conductor can trace out a polygonal path in a planar view (e.g., a square, a hexagon, an octagon, or some other regular or irregular polygonal shape around the first TSV 102). Accordingly, a "substantially helical" conductor, as used herein, describes a non-planar spiral conductor having turns that trace out any shape in a planar view (e.g., parallel to the plane of the substrate surface) surrounding a central axis, including circles, ellipses, regular polygons, irregular polygons, or some combination thereof.[0034] Figure 2 is a simplified perspective view of a substantially helical conductor 204 ("conductor 204") disposed around a through-substrate via 202 configured in accordance with an embodiment of the present technology. For more easily illustrating the substantially helical shape of the conductor 204 illustrated in Figure 2, the substrate material, insulating materials, and other details of the device in which the conductor 204 and the TSV 202 are disposed have been eliminated from the illustration. As can be seen with reference to Figure 2, the conductor 204 is disposed coaxially around the TSV 202. The conductor 204 of this particular embodiment has three turns (204a, 204b, and 204c) about the TSV 202. As described above, rather than having a single pitch angle, the conductor 204 has a stepped structure, whereby turns with a pitch angle of 0 (e.g. , turns laying in a plane of the device 200) are connected by vertical connecting portions that are staggered circumferentially around the turns. In this regard, planar turns 204a and 204b are connected by a vertical connecting portion 206, and planar turns 204b and 204c are connected by a vertical connecting portion 208. This stepped structure facilitates fabrication of the conductor 204 using simple semiconductor processing techniques (e.g., planar metallization steps for the turns and via formation for the vertical connecting portions). Moreover, as shown in Figure 2, the turns 204a, 204b, and 204c of the conductor 204 trace a rectangular shape around the TSV 202 when oriented in a planar view.[0035] In accordance with one embodiment, the TSV 202 can optionally (e.g., as shown with dotted lines) include a core material 202a surrounded by one or more coaxial layers, such as layers 202b and 202c. 
For example, the core 202a and the outer coaxial layer 202c can include magnetic materials, while the middle coaxial layer 202b can include a non-magnetic material, to provide a laminate structure that can reduce eddy current losses. Although the TSV 202 is illustrated in Figure 2 as optionally including a three-layer structure (e.g., a core 202a surrounded by two coaxially laminated layers 202b and 202c), in other embodiments any number of coaxial laminate layers can be used to fabricate a TSV.[0036] Although in the foregoing embodiments shown in Figure 1 and Figure 2 substantially helical conductors have been illustrated as having three turns about a TSV, the number of turns of a substantially helical conductor around a TSV can vary in accordance with different embodiments of the technology. As is shown in the example embodiment of Figure 2, a substantially helical conductor need not make an integer number of turns about a TSV (e.g., the top and/or bottom turn may not be a complete turn). Providing more turns can increase the inductance of an inductor compared to having fewer turns, but at an increase in the cost and complexity of fabrication (e.g., more fabrication steps). The number of turns can be as low as one, or as high as is desired. When coupled inductors are provided with the same number of windings, they can couple two electrically isolated circuits without stepping up or down the voltage from the primary winding.[0037] For example, Figure 3 is a simplified cross-sectional view of a multi-die semiconductor device 300 including coupled inductors with TSV cores configured in accordance with an embodiment of the present technology. The device 300 includes a first die 301 and a second die 351. The first die has a first substrate 301a and a first insulating material 301b. The device 300 further includes a first TSV 302 that extends at least substantially through the first die 301 (e.g., extending from approximately the bottom of the first substrate 301a to beyond an upper surface of the first substrate 301a - completely through the first substrate 301a - and into the first insulating material 301b). The device 300 also includes a first substantially helical conductor 303 ("conductor 303") disposed around the first TSV 302. In the present embodiment, the first conductor 303 is shown to include four complete turns (303a, 303b, 303c, and 303d) around the first TSV 302. The first conductor 303 can be operably connected to other circuit elements (not shown) by leads 320a and 320b.[0038] The second die 351 has a second substrate 351a, a second insulating material 351b, and a second TSV 352 in the second die 351 extending out of the second substrate 351a and into the second insulating material 351b. The device 300 further includes a second substantially helical conductor 353 ("conductor 353") disposed around the second TSV 352. In the present embodiment, the second conductor 353 is shown to include three complete turns (353a, 353b, and 353c) around the second TSV 352. The second conductor 353 can be operably connected to other circuit elements (not shown) by leads 370a and 370b.[0039] As set forth above, coaxial columns of TSVs can be coupled in any one of a number of ways to improve the magnetic permeability thereof. For example, in the present embodiment of Figure 3, the first and second TSVs 302 and 352 are mechanically coupled by a direct connection.
Unlike TSVs configured to carry electrical signals, the electrical resistance of the connection between these two TSVs 302 and 352 is not a primary concern in configuring a path with high magnetic permeability. Accordingly, many of the steps utilized to improve the electrical connection between coupled TSVs (e.g., under bump metallization, solder ball formation, solder reflow, etc.) can be omitted from a manufacturing method of the device 300, in accordance with one embodiment of the present technology.[0040] According to one embodiment, the first conductor 303 is configured to induce a magnetic field in the first and second TSVs 302 and 352 in response to a current passing through the first conductor 303 (e.g., provided by a voltage applied across leads 320a and 320b). By changing the current passing through the first conductor 303 (e.g., by applying an alternating current, or by repeatedly switching between high and low voltage states), a changing magnetic field can be induced in the two TSVs 302 and 352, which in turn induces a changing current in the second conductor 353. In this fashion, signals and/or power can be coupled between a circuit comprising the first conductor 303 and another comprising the second conductor 353 (e.g., operating the device 300 as a power transformer).[0041] The first conductor 303 and the second conductor 353 shown in Figure 3 have different numbers of turns. As will be readily understood by one skilled in the art, this arrangement allows the device 300 to be operated as a step-up or step-down transformer (depending upon which substantially helical conductor is utilized as the primary winding and which as the secondary winding). For example, the application of a first changing current (e.g., 4 V of alternating current) to the first conductor 303 will induce a changing current with a lower voltage (e.g., 3 V of alternating current) in the second conductor 353, given the 4:3 ratio of turns between the primary and secondary windings in this configuration. When operated as a step-up transformer (e.g., by utilizing the second conductor 353 as the primary winding, and the first conductor 303 as the secondary winding), the application of a first changing current (e.g., 3 V of alternating current) to the second conductor 353 will induce a changing current with a higher voltage (e.g., 4 V of alternating current) in the first conductor 303, given the 3:4 ratio of turns between the primary and secondary windings in this configuration (see the worked turns-ratio relation set out below).[0042] Although the foregoing embodiments of Figures 1 and 3 have illustrated semiconductor devices with two dies, in other embodiments of the present technology, semiconductor devices can include larger stacks of any number of dies with coupled inductors. For example, Figure 4 is a simplified cross-sectional view of a multi-die semiconductor device including coupled inductors with TSV cores configured in accordance with an embodiment of the present technology. The device 400 includes a first die 410, a second die 420 and a third die 430. The first die 410 has a first substrate 411a and a first insulating material 411b. The device 400 further includes a first TSV 412 that extends at least substantially through the first die 410 (e.g., extending from approximately the bottom of the first substrate 411a to beyond an upper surface of the first substrate 411a - completely through the first substrate 411a - and into the first insulating material 411b).
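The example voltages given above for Figure 3 follow the ideal transformer turns-ratio relation; as a purely illustrative worked check (which neglects losses and assumes perfect coupling between the windings):

```latex
\[
\frac{V_s}{V_p} = \frac{N_s}{N_p}
\qquad\Longrightarrow\qquad
V_s = 4\,\mathrm{V}\times\frac{3}{4} = 3\,\mathrm{V}\ \text{(step-down)},
\qquad
V_s = 3\,\mathrm{V}\times\frac{4}{3} = 4\,\mathrm{V}\ \text{(step-up)}.
\]
```

Here \(V_p\) and \(V_s\) are the primary and secondary voltages and \(N_p\) and \(N_s\) the corresponding numbers of turns; in a physical device the coupling coefficient and core losses would reduce the secondary voltage somewhat below these ideal values.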
[0042] Although the foregoing embodiments of Figures 1 and 3 have illustrated semiconductor devices with two dies, in other embodiments of the present technology, semiconductor devices can include larger stacks of any number of dies with coupled inductors. For example, Figure 4 is a simplified cross-sectional view of a multi-die semiconductor device including coupled inductors with TSV cores configured in accordance with an embodiment of the present technology. The device 400 includes a first die 410, a second die 420 and a third die 430. The first die has a first substrate 411a and a first insulating material 411b. The device 400 further includes a first TSV 412 that extends at least substantially through the first die 410 (e.g., extending from approximately the bottom of the first substrate 411a to beyond an upper surface of the first substrate 411a - completely through the first substrate 411a - and into the first insulating material 411b). The device 400 also includes a first substantially helical conductor 413 ("conductor 413") disposed around the first TSV 412. In the present embodiment, the first conductor 413 is shown to include three complete turns around the first TSV 412. The first conductor 413 can be operably connected to other circuit elements (not shown) by leads 414a and 414b.[0043] The second die 420 includes a second substrate 421a, a second insulating material 421b, and a second TSV 422 that extends at least substantially through the second die 420 (e.g., extending from approximately the bottom of the substrate 421a to beyond an upper surface of the substrate 421a - completely through the second substrate 421a - and into the second insulating material 421b). The device 400 also includes a second substantially helical conductor 423 ("conductor 423") disposed around the second TSV 422. In the present embodiment, the second conductor 423 is shown to include three complete turns around the second TSV 422. The second conductor 423 can be operably connected by leads 424a and 424b to other circuit elements (not shown), including one or more rectifiers to convert a coupled alternating current to DC and one or more capacitors or other filter elements to provide steady current.
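The rectify-and-filter stage mentioned above can be sketched numerically as follows. This is a toy model under stated assumptions: an ideal full-wave rectifier, a parallel RC load, and placeholder component and drive values that are not taken from the specification.

import math

def rectified_filtered(v_amp=3.0, freq_hz=1e6, r_ohm=1e4, c_farad=1e-9,
                       steps_per_cycle=200, cycles=10):
    """Full-wave rectify a sine and smooth it with a parallel RC load.

    Simple forward-Euler model of an ideal rectifier feeding R || C.
    """
    dt = 1.0 / (freq_hz * steps_per_cycle)
    v_out = 0.0
    for i in range(steps_per_cycle * cycles):
        v_in = abs(v_amp * math.sin(2 * math.pi * freq_hz * i * dt))
        if v_in > v_out:
            v_out = v_in  # ideal diode conducts; capacitor charges to the input
        else:
            v_out -= v_out * dt / (r_ohm * c_farad)  # capacitor discharges into R
    return v_out

# Settles near the 3.0 V input peak (about 2.9 V with these placeholder
# values), i.e., a steady current with modest ripple.
print(round(rectified_filtered(), 2))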
[0044] The third die 430 includes a third substrate 431a, a third insulating material 431b, and a third TSV 432 in the third die 430 extending out of the third substrate 431a and into the third insulating material 431b. The device 400 also includes a third substantially helical conductor 433 ("conductor 433") disposed around the third TSV 432. In the present embodiment, the third conductor 433 is shown to include three complete turns around the third TSV 432. The third conductor 433 can be operably connected to other circuit elements (not shown) by leads 434a and 434b, which connect the third conductor 433 to pads 436a and 436b. [0045] According to one embodiment, the third conductor 433 is configured to induce a magnetic field in the three TSVs 412, 422 and 432 in response to a current passing through the third conductor 433 (e.g., provided by a voltage applied across the pads 436a and 436b). By changing the current passing through the third conductor 433 (e.g., by applying an alternating current, or by repeatedly switching between high and low voltage states), a changing magnetic field can be induced in the three TSVs 412, 422 and 432, which in turn induces a changing current in the first and second conductors 413 and 423 (e.g., through which the first and second TSVs pass). In this fashion, signals and/or power can be coupled between a circuit comprising the third conductor 433 and others comprising the first and second conductors 413 and 423.[0046] As previously set forth, coaxial columns of TSVs can be coupled in any one of a number of ways to improve the magnetic permeability thereof. For example, in the present embodiment of Figure 4, the first and second TSVs 412 and 422 are magnetically coupled across a small gap 415 (e.g., filled by insulating material and/or substrate material). The second and third TSVs 422 and 432 are similarly magnetically coupled across another small gap 425. Unlike TSVs configured to carry electrical signals, an insulating gap between coaxial TSVs is not a significant impediment in providing a path with high magnetic permeability. Accordingly, a coaxial column of coupled TSVs can be solely magnetically coupled, rather than mechanically or electrically coupled, in accordance with one embodiment of the present technology.[0047] Although the foregoing embodiments of Figures 1 through 4 have illustrated inductors with a single substantially helical conductor disposed around each TSV, other embodiments of the present technology can be configured with more than one such conductor around a TSV, as set forth in greater detail below. For example, Figure 5 is a simplified cross-sectional view of a multi-die semiconductor device 500 including coupled inductors with TSV cores configured in accordance with an embodiment of the present technology. The device 500 includes a first die 510 and a second die 520. The first die includes a first substrate 511a and a first insulating material 511b. The device 500 further includes a first TSV 512 that extends at least substantially through the first die 510 (e.g., extending from approximately the bottom of the first substrate 511a to beyond an upper surface of the first substrate 511a - completely through the first substrate 511a - and into the first insulating material 511b). The device 500 also includes a first substantially helical conductor 513 ("conductor 513") disposed around the first TSV 512. In the present embodiment, the first conductor 513 is shown to include three complete turns around the first TSV 512. The first conductor 513 can be operably connected to other circuit elements (not shown) by leads 514a and 514b. [0048] The second die 520 includes a second substrate 521a, a second insulating material 521b, and a second TSV 522 that extends out of the second substrate 521a and into the second insulating material 521b. The second TSV 522 is magnetically coupled to the first TSV 512 in the first die 510 across a small gap 515. The device 500 also includes a second substantially helical conductor 523 ("conductor 523") disposed around a portion of the second TSV 522, and a third substantially helical conductor 533 ("conductor 533") disposed around another portion of the second TSV 522. In the present embodiment, the second and third conductors 523 and 533 are shown to each include three complete turns around the second TSV 522. The second conductor 523 can be operably connected to other circuit elements (not shown) by leads 524a and 524b, and the third conductor 533 can be operably connected to still other circuit elements (not shown) by leads 524c and 524d.[0049] According to one embodiment, the first conductor 513 is configured to induce a magnetic field in the two TSVs 512 and 522 in response to a current passing through the first conductor 513 (e.g., provided by a voltage applied across the leads 514a and 514b). By changing the current passing through the first conductor 513 (e.g., by applying an alternating current, or by repeatedly switching between high and low voltage states), a changing magnetic field can be induced in the two TSVs 512 and 522, which in turn induces a changing current in the second and third conductors 523 and 533. In this fashion, signals and/or power can be coupled between a circuit comprising the first conductor 513 and others comprising the second and third conductors 523 and 533.
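One primary driving two secondaries on a shared core, as in Figure 5, can be sketched with Faraday's law (illustrative only; the coupling constant and drive values below are assumed placeholders, and perfect linkage of the common flux by every winding is assumed):

import math

def induced_emfs(primary_turns, secondary_turns, i_amp, freq_hz, flux_per_amp_turn):
    """Peak EMF induced in each secondary sharing one magnetic core.

    Assumes a sinusoidal primary current and a common flux
    Phi = k * N_p * I linking every winding (k, 'flux_per_amp_turn',
    is a placeholder coupling constant, not a value from the text).
    Faraday's law gives peak EMF = N_s * k * N_p * I * 2*pi*f.
    """
    omega = 2 * math.pi * freq_hz
    dphi_dt_peak = flux_per_amp_turn * primary_turns * i_amp * omega
    return [n_s * dphi_dt_peak for n_s in secondary_turns]

# One primary (e.g., conductor 513) driving two stacked secondaries
# (e.g., conductors 523 and 533): equal turn counts see equal EMFs.
print(induced_emfs(3, [3, 3], i_amp=0.01, freq_hz=1e6, flux_per_amp_turn=1e-9))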
[0050] Although Figure 5 illustrates an embodiment having a die with two substantially helical conductors or windings disposed around a TSV at two different heights (e.g., coaxially but not concentrically), in other embodiments, multiple substantially helical conductors with different diameters can be provided at the same height (e.g., with radially-spaced conductive turns in the same layers). As the inductance of a substantially helical conductor depends, at least in part, on its diameter and radial spacing from the TSV around which it is disposed, such an approach can be used where a reduction in the number of layer processing steps is more desirable than an increase in the inductance of the substantially helical conductor so radially spaced.[0051] The foregoing example embodiments illustrated in Figures 1 through 5 include inductors having an open core (e.g., a core wherein the magnetic field passes through a higher magnetic permeability material for only part of the path of the magnetic field), but embodiments of the present technology can also be provided with a closed core. For example, Figure 6 is a simplified cross-sectional view of a multi-die semiconductor device 600 including coupled inductors with TSV cores configured in accordance with an embodiment of the present technology. Referring to Figure 6, the device 600 includes a first die 610 and a second die 620. The first die 610 includes a first substrate 611a and a first insulating material 611b. The device 600 further includes first and second TSVs 612a and 612b that extend at least substantially through the first die 610 (e.g., extending from approximately the bottom of the first substrate 611a to beyond an upper surface of the first substrate 611a - completely through the first substrate 611a - and into the first insulating material 611b). The device 600 further includes a first substantially helical conductor 613 ("conductor 613") disposed around the first TSV 612a. In the present embodiment, the first conductor 613 is shown to include three complete turns around the first TSV 612a. The first and second TSVs 612a and 612b are coupled above the first conductor 613 by an upper coupling member 617 in the first die 610. The first conductor 613 can be operably connected to other circuit elements (not shown) by leads 614a and 614b.[0052] The second die 620 includes a second substrate 621a, a second insulating material 621b, and third and fourth TSVs 622a and 622b that extend out of the second substrate 621a and into the second insulating material 621b. The third TSV 622a is coupled to the first TSV 612a in the first die 610 by a first solder connection 615a, and the fourth TSV 622b is coupled to the second TSV 612b in the first die 610 by a second solder connection 615b. The device further includes a second substantially helical conductor 623 ("conductor 623") disposed around the third TSV 622a. In the present embodiment, the second conductor 623 is shown to include three complete turns around the third TSV 622a. The third and fourth TSVs 622a and 622b are coupled below the second conductor 623 by a lower coupling member 627 in the second die 620. The second conductor 623 can be operably connected to other circuit elements (not shown) by leads 624a and 624b.[0053] The upper coupling member 617 and the lower coupling member 627 can include a magnetic material having a magnetic permeability higher than that of the first and second substrates 611a and 621a and/or the first and second insulating materials 611b and 621b.
The magnetic material of the upper and lower coupling members 617 and 627 can be either the same material as that of the four TSVs 612a, 612b, 622a and 622b, or a different material. The magnetic material of the upper and lower coupling members 617 and 627 can be a bulk material (e.g., nickel, iron, cobalt, niobium, or an alloy thereof), or a laminated material with differing layers (e.g., of magnetic material and non-magnetic material). Laminated layers of magnetic and non-magnetic material can help to reduce eddy current losses in the upper and lower coupling members 617 and 627. In accordance with one aspect of the present technology, the four TSVs 612a, 612b, 622a and 622b, together with the upper coupling member 617 and the lower coupling member 627, can provide a closed path for the magnetic field induced by the second conductor 623, such that the inductance of the device 600 is greater than it would be if only the four TSVs 612a, 612b, 622a and 622b were provided.[0054] According to one embodiment, the second conductor 623 is configured to induce a magnetic field in the four TSVs 612a, 612b, 622a and 622b (and in the upper and lower coupling members 617 and 627) in response to a current passing through the second conductor 623 (e.g., provided by a voltage applied across the leads 624a and 624b). By changing the current passing through the second conductor 623 (e.g., by applying an alternating current, or by repeatedly switching between high and low voltage states), a changing magnetic field can be induced in the four TSVs 612a, 612b, 622a and 622b (and in the upper and lower coupling members 617 and 627), which in turn induces a changing current in the first conductor 613. In this fashion, signals and/or power can be coupled between a circuit comprising the second conductor 623 and another comprising the first conductor 613.[0055] Although in the example embodiment illustrated in Figure 6 coupled inductors are illustrated sharing a closed core (e.g., a core in which a substantially continuous path of high magnetic permeability material passes through the middle of a conductive winding), in other embodiments, one or both of the upper and lower coupling members 617 and 627 could be omitted. In such an embodiment, a secondary coaxial column of TSVs (e.g., in addition to the coaxial column of TSVs around which the windings are disposed) with elevated magnetic permeability could be situated near the coaxial column of TSVs around which the windings are disposed to provide an open core embodiment with improved inductance over an embodiment in which the secondary coaxial column of TSVs was not present.[0056] According to one embodiment, a closed magnetic core as illustrated by way of example in Figure 6 can provide additional space in which one or more windings can be disposed (e.g., to provide a transformer or power couple). For example, although Figure 6 illustrates a device in which two windings are disposed on the same coaxial column of TSVs, with a proximate column of TSVs having no windings, in another embodiment, two proximate columns of coaxial TSVs could be provided with a single winding on each column (e.g., a primary winding on the first column in a first die, and a secondary winding on the second column in a second die). Alternatively, additional windings can be provided in the space provided by a closed magnetic core or a proximate TSV in an open-core embodiment, to provide more than two coupled inductors that all interact with the same magnetic field.
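The benefit of the closed core in Figure 6 can be made concrete with a magnetic-circuit estimate (illustrative only; the segment lengths, areas, and permeabilities below are assumed placeholders, and fringing is ignored). Total reluctance falls sharply when the return path runs through high-permeability members rather than substrate, and inductance scales as N^2 / R_total.

def reluctance(length_m, rel_permeability, area_m2, mu0=4e-7 * 3.141592653589793):
    # Magnetic reluctance R = l / (mu0 * mu_r * A) of one segment of the flux path.
    return length_m / (mu0 * rel_permeability * area_m2)

# Hypothetical path segments: two TSV columns (high-mu) plus either
# high-mu coupling members (closed core) or a return through low-mu
# substrate (open core). All values are placeholders.
tsv = reluctance(100e-6, 200, 80e-12)
coupler = reluctance(40e-6, 200, 80e-12)
substrate_return = reluctance(140e-6, 1, 80e-12)

closed = 2 * tsv + 2 * coupler
open_core = 2 * tsv + substrate_return
# With these numbers the closed path's total reluctance is roughly two
# orders of magnitude lower, which translates directly into higher inductance.
print(open_core / closed)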
Figure 7, for example, is a simplified cross-sectional view of coupled inductors with through-substrate via cores configured in accordance with an embodiment of the present technology. As can be seen with reference to Figure 7, a device 700 includes a first die 710 and a second die 720. The first die 710 includes a first substrate 711a and a first insulating material 711b. The device 700 further includes first and second TSVs 712a and 712b that extend at least substantially through the first die 710 (e.g., extending from approximately the bottom of the first substrate 711a to beyond an upper surface of the first substrate 711a - completely through the first substrate 711a - and into the first insulating material 711b). The device 700 further includes a first substantially helical conductor 713a ("conductor 713a") disposed around the first TSV 712a. In the present embodiment, the first conductor 713a is shown to include three complete turns around the first TSV 712a. The device 700 further includes a second substantially helical conductor 713b ("conductor 713b") disposed around the second TSV 712b. In the present embodiment, the second conductor 713b is shown to include three complete turns around the second TSV 712b. The first and second TSVs 712a and 712b are coupled above the first and second conductors 713a and 713b by an upper coupling member 717 in the first die 710. The first conductor 713a can be operably connected to other circuit elements (not shown) by leads 714a and 714b, and the second conductor 713b can be operably connected to other circuit elements (not shown) by leads 714c and 714d.[0057] The second die 720 includes a second substrate 721a, a second insulating material 721b, and third and fourth TSVs 722a and 722b that extend out of the second substrate 721a and into the second insulating material 721b. The third TSV 722a is coupled to the first TSV 712a in the first die 710 by a first solder connection 715a, and the fourth TSV 722b is coupled to the second TSV 712b in the first die 710 by a second solder connection 715b. In other embodiments, TSVs in adjacent dies can be coupled using any one of a number of other interconnect methods (e.g., copper-to-copper bonding, pill and pad, interference fit, mechanical, etc.). The device further includes a third substantially helical conductor 723 ("conductor 723") disposed around the third TSV 722a. In the present embodiment, the third conductor 723 is shown to include three complete turns around the third TSV 722a. The third and fourth TSVs 722a and 722b are coupled below the third conductor 723 by a lower coupling member 727 in the second die 720. The third conductor 723 can be operably connected to other circuit elements (not shown) by leads 724a and 724b.[0058] The upper coupling member 717 and the lower coupling member 727 can include a magnetic material having a magnetic permeability higher than that of the first and second substrates 711a and 721a and/or the first and second insulating materials 711b and 721b. The magnetic material of the upper and lower coupling members 717 and 727 can be either the same material as that of the four TSVs 712a, 712b, 722a and 722b, or a different material. The magnetic material of the upper and lower coupling members 717 and 727 can be a bulk material (e.g., nickel, iron, cobalt, niobium, or an alloy thereof), or a laminated material with differing layers (e.g., of magnetic material and non-magnetic material).
Laminated layers of magnetic and non-magnetic material can help to reduce eddy current losses in the upper and lower coupling members 717 and 727. In accordance with one aspect of the present technology, the four TSVs 712a, 712b, 722a and 722b, together with the upper coupling member 717 and the lower coupling member 727, can provide a closed path for the magnetic field induced by the third conductor 723, such that the inductance of the device 700 is greater than it would be if only the four TSVs 712a, 712b, 722a and 722b were provided.[0059] According to one embodiment, the third conductor 723 is configured to induce a magnetic field in the four TSVs 712a, 712b, 722a and 722b (and in the upper and lower coupling members 717 and 727) in response to a current passing through the third conductor 723 (e.g., provided by a voltage applied across the leads 724a and 724b). By changing the current passing through the third conductor 723 (e.g., by applying an alternating current, or by repeatedly switching between high and low voltage states), a changing magnetic field can be induced in the four TSVs 712a, 712b, 722a and 722b (and in the upper and lower coupling members 717 and 727), which in turn induces a changing current in the first and second conductors 713a and 713b. In this fashion, signals and/or power can be coupled between a circuit comprising the third conductor 723 and others comprising the first and second conductors 713a and 713b.[0060] Although in the embodiment illustrated in Figure 7 two coupled inductors on proximate columns are shown with the same number of turns, in other embodiments of the present technology different numbers of windings can be provided on similarly-configured inductors. As will be readily understood by one skilled in the art, by providing coupled inductors with different numbers of windings, a device so configured can be operated as a step-up or step-down transformer (depending upon which conductor is utilized as the primary winding and which as the secondary winding).[0061] Although in the embodiments illustrated in Figures 6 and 7 a single additional coaxial column of coupled TSVs is provided to enhance the magnetic permeability of the return path for the magnetic field generated by a primary winding around a first coaxial column of TSVs, in other embodiments of the present technology multiple return path coaxial columns of TSVs can be provided to further improve the inductance of the inductors so configured. For example, embodiments of the present technology may use two, three, four, or any number of additional coaxial columns of coupled TSVs to provide a return path for the magnetic field with enhanced magnetic permeability. Such additional coaxial columns of coupled TSVs may be coupled by upper and/or lower coupling members to the coaxial column of coupled TSVs around which one or more substantially helical conductors are disposed (e.g., a closed core configuration), or may merely be sufficiently proximate to concentrate some of the magnetic flux of the return path of the magnetic field to enhance the performance of the device so configured.[0062] Although in the foregoing examples set forth in Figures 1 to 7 each substantially helical conductor has been illustrated as having a single turn about a TSV at a given distance from the surface of a corresponding substrate, in other embodiments a substantially helical conductor can have more than one turn about a TSV at the same distance from the substrate surface (e.g., multiple turns arranged coaxially at each level).
For example, Figure 8 is a simplified perspective view of a substantially helical conductor 804 ("conductor 804") disposed around a through-substrate via 802 configured in accordance with an embodiment of the present technology. As can be seen with reference to Figure 8, the conductor 804 includes a first substantially helical conductor 804a ("conductor 804a") disposed around the TSV 802, which is connected to a second coaxially-aligned substantially helical conductor 804b ("conductor 804b"), such that a single conductive path winds downward around the TSV 802 at a first average radial distance, and winds back upward around the TSV 802 at a second average radial distance. Accordingly, the conductor 804 includes two turns about the TSV 802 (e.g., the topmost turn of conductor 804a and the topmost turn of conductor 804b) at the same position along the longitudinal dimension "L" of the TSV 802. In another embodiment, a substantially helical conductor could make two turns about a TSV at a first level (e.g., spiraling outward), two turns about the TSV at a second level (e.g., spiraling inward), and so on in a similar fashion for as many turns as are desired. [0063] Figures 9A-9F are simplified views of a device 900 having an inductor with a through-substrate via core in various states of a manufacturing process in accordance with an embodiment of the present technology. In Figure 9A, a substrate 901 is provided in anticipation of further processing steps. The substrate 901 may be any one of a number of substrate materials, including silicon, glass, gallium arsenide, gallium nitride, organic laminates, molding compounds (e.g., for reconstituted wafers for fan-out wafer-level processing) and the like. In Figure 9B, a first turn 903 of a substantially helical conductor has been disposed in a layer of the insulating material 902 over the substrate 901. The insulating material 902 can be any one of a number of insulating materials which are suitable for semiconductor processing, including silicon oxide, silicon nitride, polyimide, or the like. The first turn 903 can be any one of a number of conducting materials which are suitable for semiconductor processing, including copper, gold, tungsten, alloys thereof, or the like.[0064] In Figure 9C, a second turn 904 of the substantially helical conductor has been disposed in the now thicker layer of the insulating material 902, and spaced from the first turn 903 by a layer of the insulating material 902. The second turn 904 is electrically connected to the first turn 903 by a first via 905. A second via 906 has also been provided to route an end of the first turn 903 to an eventual higher layer of the device 900. In Figure 9D, a third turn 907 of the substantially helical conductor has been disposed in the now thicker layer of the insulating material 902, and spaced from the second turn 904 by a layer of the insulating material 902. The third turn 907 is electrically connected to the second turn 904 by a third via 908. The second via 906 has been further extended to continue routing an end of the first turn 903 to an eventual higher layer of the device 900.[0065] Turning to Figure 9E, the device 900 is illustrated in a simplified perspective view after an opening 909 has been etched through the insulating material 902 and into the substrate 901.
The opening 909 is etched substantially coaxially with the turns 903, 904 and 907 of the substantially helical conductor using any one of a number of etching operations capable of providing a substantially vertical opening with a high aspect ratio. For example, deep reactive ion etching, laser drilling, or the like can be used to form the opening 909. In Figure 9F, a TSV 910 has been disposed in the opening 909. The TSV 910 can include a magnetic material (e.g., a material with a higher magnetic permeability than the substrate 901 and/or the insulating material 902) to increase the magnetic field in the TSV 910 when current is flowing through the substantially helical conductor. The magnetic material can be ferromagnetic, ferrimagnetic, or a combination thereof. The TSV 910 can include more than one material, either in a bulk material of a single composition, or in discrete regions of different materials (e.g., coaxial laminate layers). For example, the TSV 910 can include nickel, iron, cobalt, niobium, or alloys thereof. Laminated layers of magnetic and non-magnetic material can help to reduce eddy current losses in the TSV 910. The TSV 910 can be provided in a single metallization step filling in the opening 909, or in multiple steps of laminating layers (e.g., multiple magnetic layers separated by non-magnetic layers). In one embodiment, to provide a TSV with a multiple layer structure, a mixture of conformal and bottom-up fill plating operations can be utilized (e.g., a conformal plating step to partially fill and narrow the etched opening with a first material, and a subsequent bottom-up plating step to completely fill the narrowed opening with a second material).[0066] Turning to Figure 9G, the device 900 is illustrated after the substrate 901 has been thinned to expose or reduce the distance between a bottom surface of the substrate 901 and a bottom end of the TSV 910, to provide a thinned die 911. In Figure 9H, the device 900 is illustrated after the thinned die 911 has been disposed over a second die 912 in which another TSV is surrounded by a substantially helical conductor. The TSV 910 and the coaxially aligned TSV of the second die 912 can be coupled in a variety of ways, including by solder connection, copper-to-copper bonding, pill and pad, interference fit, mechanical connection, or magnetic coupling across a small gap (e.g., of insulating material and/or substrate material).[0067] Figure 10 is a flow chart illustrating a method of manufacturing an inductor with a through-substrate via core in accordance with an embodiment of the present technology. The method begins in step 1010, in which a substrate is provided. In step 1020, a substantially helical conductor is disposed in an insulating material over the substrate. In step 1030, an opening is etched through the insulating material and into the substrate along an axis of the substantially helical conductor. In step 1040, a TSV is disposed into the opening. In step 1050, the substrate is thinned to expose or reduce the distance between a bottom surface of the substrate and a bottom end of the TSV. In step 1060, the die comprising the first substrate is disposed over a second die with a coaxially aligned TSV around which is disposed another substantially helical conductor. In step 1070, the first TSV and the second TSV are coupled (e.g., by a solder connection, a mechanical connection, or a magnetic coupling across a gap).
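For illustration only, the Figure 10 sequence can be summarized as a short script. The step descriptions are paraphrases of the flow chart, and the Device scaffold is an assumed construct for bookkeeping, not an API from the specification.

from dataclasses import dataclass, field

@dataclass
class Device:
    log: list = field(default_factory=list)
    def do(self, step, desc):
        # Record one process step from the Figure 10 flow chart.
        self.log.append((step, desc))

def manufacture():
    d = Device()
    d.do(1010, "provide substrate")
    d.do(1020, "dispose helical conductor in insulating material over substrate")
    d.do(1030, "etch opening along the conductor's axis into the substrate")
    d.do(1040, "dispose magnetic TSV in the opening")
    d.do(1050, "thin substrate toward the bottom end of the TSV")
    d.do(1060, "dispose die over second die with coaxially aligned TSV")
    d.do(1070, "couple TSVs (solder / mechanical / magnetic gap)")
    return d

for step, desc in manufacture().log:
    print(step, desc)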
[0068] From the foregoing, it will be appreciated that specific embodiments of the invention have been described herein for purposes of illustration, but that various modifications may be made without deviating from the scope of the invention. Accordingly, the invention is not limited except as by the appended claims. |
The application discloses a gyratory sensing system that enhances the wearable device user experience via HMI extensions. Methods and systems may provide for a gyratory sensing system (GSS) for extending the human machine interface (HMI) of an electronic device, particularly a small form factor, wearable device. The gyratory sensing system may include a gyratory sensor and a rotatable element to engage the gyratory sensor. The rotatable element may be sized and configured to be easily manipulated by hand to extend the HMI of the electronic device such that the functions of the HMI are more accessible. The rotatable element may include one or more rotatable components, such as a body, edge or face of a smart watch, each of which may be configured to perform a function upon rotation, such as resetting, selecting, and/or activating a menu item. |
1. A wearable electronic device, comprising: a display having an edge; a rotatable input capable of being rotated at the edge of the display; and a processor circuit to: cause the display to present a first icon corresponding to an application executable by the wearable electronic device, the first icon being presented on the display at a first size at a first time in response to a first input event of the rotatable input; in response to a second input event of the rotatable input, cause the display to present at least one of the first icons at a second size at a second time, the second size being enlarged relative to the first size; and cause the display to present, at a third time, a second icon corresponding to the at least one of the first icons, the second icon being selectable to cause at least one of the applications to execute. 2. The wearable electronic device of claim 1, wherein the processor circuit is configured to present the first icon based on a list corresponding to a user-selected application of the applications. 3. The wearable electronic device of claim 1, wherein the first input event comprises at least one of a first pressing event or a first rotation event. 4. The wearable electronic device of claim 1, wherein the processor circuit is configured to instantiate a gyroscopic sensor to detect the first input event and the second input event of the rotatable input. 5. The wearable electronic device of claim 4, wherein the gyroscopic sensor is used to detect at least one of a distance or a degree of rotation of the rotatable input. 6. The wearable electronic device of claim 5, wherein the processor circuit is configured to present the at least one of the first icons at the second size in response to the gyroscopic sensor detecting a first threshold of degrees of rotation of the rotatable input. 7. The wearable electronic device of claim 1, wherein the rotatable input at the edge of the display comprises a bezel. 8. A machine-readable storage medium comprising instructions that, when executed, cause at least one processor to at least: cause a display to present a first icon corresponding to an application executable by a wearable electronic device, the first icon being presented on the display at a first size at a first time in response to a first input event of a rotatable input, the rotatable input being at an edge of the display and capable of being rotated; in response to a second input event of the rotatable input, cause the display to present at least one of the first icons at a second size at a second time, the second size being enlarged relative to the first size; and cause the display to present, at a third time, a second icon corresponding to the at least one of the first icons, the second icon being selectable to cause at least one of the applications to execute. 9. The machine-readable storage medium of claim 8, wherein the instructions, when executed, cause the at least one processor to present the first icon based on a list corresponding to a user-selected one of the applications. 10. The machine-readable storage medium of claim 8, wherein the instructions, when executed, cause the at least one processor to recognize at least one of a touch event or a first rotation event. 11. The machine-readable storage medium of claim 8, wherein the instructions, when executed, cause the at least one processor to instantiate a gyroscopic sensor to detect the first input event and the second input event of the rotatable input.
12. The machine-readable storage medium of claim 8, wherein the instructions, when executed, cause the at least one processor to detect at least one of a distance or a degree of rotation of the rotatable input. 13. The machine-readable storage medium of claim 8, wherein the instructions, when executed, cause the at least one processor to present the at least one of the first icons at the second size in response to a gyroscopic sensor detecting a first threshold of degrees of rotation of the rotatable input. 14. A device comprising: means for rendering an image on a wearable electronic device, the rendering means having an edge; means for rotatable selection at the edge, the rotatable selection means being rotatable; and means for processing to: cause the rendering means to present a first icon corresponding to an application executable by the wearable electronic device, the first icon being presented on the rendering means at a first size at a first time in response to a first input event of the rotatable selection means; in response to a second input event of the rotatable selection means, cause the rendering means to present at least one of the first icons at a second size at a second time, the second size being enlarged relative to the first size; and cause the rendering means to present, at a third time, a second icon corresponding to the at least one of the first icons, the second icon being selectable to cause at least one of the applications to execute. 15. The device of claim 14, wherein the processing means is configured to present the first icon based on a list corresponding to a user-selected application of the applications. 16. The device of claim 14, wherein the processing means is configured to identify the first input event as at least one of a first pressing event or a first rotation event. 17. The device of claim 14, further comprising means for rotational sensing to detect the first input event and the second input event of the rotatable selection means. 18. The device of claim 17, wherein the rotation sensing means is adapted to detect at least one of a distance or a degree of rotation of the rotatable selection means. 19. The device of claim 18, wherein the processing means is adapted to present the at least one of the first icons at the second size in response to the rotation sensing means detecting a first threshold of degrees of rotation of the rotatable selection means. |
Gyroscopic Sensing System to Enhance Wearable Device User Experience via HMI Extension
This application is a divisional of the invention patent application entitled "Gyratory Sensing System to Enhance Wearable Device User Experience via HMI Extension," which is PCT International Application No. PCT/US2016/032626, with an international filing date of May 16, 2016, and which entered the Chinese national phase as Application No. 201680028425.0.
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to US Non-Provisional Patent Application 14/740,609, filed June 16, 2015.
Background
Smart devices, including smart phones, mobile phones, tablet computers, and the like, have become ubiquitous. In addition, wearable devices such as smart watches, fitness bands and monitors, action cameras, and the like have become increasingly popular. These wearable devices can often include very small touchscreens used to interact with the device. Users of these devices may need to touch exactly the correct user interface (UI) element or icon, often closely spaced from its neighbors, and/or swipe the interface several times to search for and launch applications. Additionally, some of these devices do not include a touchscreen or user interface at all. As a result, the user experience for these small wearables may be degraded by their narrow human-machine interface (HMI). Some existing hardware and software solutions for sensing user input may include buttons, voice control, and gesture control. However, these solutions can suffer from several disadvantages, including limited states (i.e., on and off states for hardware buttons), complex and expensive interfaces (i.e., gesture and voice sensing requiring complex and expensive computing power and sensors), and an outdated look (e.g., prominent hardware that is not integrated, stylish, or compatible with wearables).
Simply put, conventional small wearables such as smartwatches with cramped touchscreens and/or user interfaces may not be as useful to the wearer as they could be (i.e., they may be inaccurate, less user friendly, unintegrated, and incompatible).
Brief Description of the Drawings
The various advantages of the embodiments will become apparent to one skilled in the art by reading the following specification and appended claims, and by referencing the following drawings, in which: Figures 1A-B are illustrations of an example of a smart watch with a gyroscopic sensing system, according to an embodiment; Figures 1C-D are illustrations of an example of a smart watch having a gyroscopic sensing system with a rotating body, according to an embodiment; Figures 1E-F are illustrations of examples of smart watches with gyroscopic sensing systems with rotating edges, according to an embodiment; Figures 1G-H are illustrations of examples of smart watches with gyroscopic sensing systems with rotating faces and edges, according to an embodiment; Figure 2 is a block diagram of an example of a gyroscopic sensing system according to an embodiment; Figure 3 is a flowchart of an example of a gyro sensing process according to an embodiment; Figures 4A-D are illustrations of an example of a quick start routine for a gyroscopic sensing system according to an embodiment; Figures 5A-B are illustrations of examples of lock/unlock routines for a gyroscopic sensing system according to an embodiment; and Figures 6A-D are illustrations of examples of multi-dimensional access routines for a gyroscopic sensing system, according to an embodiment.
Detailed Description
Figures 1A-B illustrate a side view and a top view, respectively, of a wearable device 100 having a gyroscopic sensing system 110 according to an embodiment of the present disclosure. Wearable device 100 may be one or more devices with relatively little or no graphical user interface (GUI), such as a touch screen, including, for example, smart watches, fitness bands or monitors, action cameras, and the like. Wearable device 100 may include a rotatable device 110 having one or more rotatable components that may communicate with and engage an embedded gyro sensor 120, such as a rotatable edge component 112 (e.g., a watch bezel), a rotatable face component 114 (e.g., a cover glass), and a rotatable body component 116 (e.g., a watch body or case). Wearable device 100 may also include a strap or watch band 130 that may be used to attach the wearable device 100 to a user (e.g., the user's wrist). The rotatable device 110 and the associated rotatable components 112, 114, 116 can be separately (i.e., independently) rotatable about at least one axis of rotation (e.g., an x-axis, a y-axis, or a z-axis). The gyro sensor 120 may be embodied as a single-axis, micro-electromechanical systems (MEMS) rate gyroscope chip capable of sensing rotation on one of three axes of motion (e.g., the x-axis (pitch), the y-axis (roll), or the z-axis (yaw)), depending on the mounting arrangement. The gyro sensor 120 may be embodied in a relatively small size and low cost arrangement suitable for sensing motion in small consumer electronic devices (e.g., wearable device 100). An example of a suitable gyro sensor chip for use with the present disclosure is the single-axis (z) MEMS gyroscope model ISZ-2510, available from a company of San Jose, California.
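As a rough sketch of how such a rate gyroscope's output might be turned into a rotation angle for the HMI (illustrative only; read_rate_dps stands in for a sensor driver, and the threshold and timing values are assumptions rather than values from the disclosure):

ROTATION_THRESHOLD_DEG = 15.0  # stands in for the "x degrees" wake threshold

def accumulate_rotation(read_rate_dps, dt_s=0.005, samples=200):
    """Integrate a z-axis rate gyro's output (degrees/second) into a
    net rotation angle, clockwise positive."""
    angle = 0.0
    for _ in range(samples):
        angle += read_rate_dps() * dt_s
    return angle

# Simulated input: a steady 20 deg/s clockwise twist for one second.
angle = accumulate_rotation(lambda: 20.0)
print(angle, angle > ROTATION_THRESHOLD_DEG)  # 20.0 True -> wake the processor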
The illustrated rotatable device 110 and gyro sensor 120 form at least part of a gyro sensing system that can extend the human machine interface (HMI) (e.g., touch screen) of a small wearable device (e.g., wearable device 100) so that it is more useful (i.e., more accurate, user-friendly, integrated, and compatible). In the illustrated example, the wearable device 100 includes the illustrated corresponding landmark lines 101, which are aligned to define reset (i.e., rest or ready) positions for the rotatable components 112, 114, 116. Note that although the landmark line 101 is illustrated as a straight line, this is for illustrative purposes only. Other arrangements may be used to define the alignment of the rotatable components 112, 114, 116, including, for example, stops, bumps, and similar structures that bring the rotatable components into alignment in the reset position. The gyro sensor 120 may be arranged to receive user input via rotation (e.g., about the z-axis) of one or more of the rotatable components 112, 114, 116 of the rotatable device 110, extending the user interface (i.e., touch screen) to include more input states and allowing for a faster, more accurate, and more reliable HMI. For example, a user may rotate one or more of the rotatable components 112, 114, 116 to provide input to an application associated with the wearable device 100, such as to browse, select, and/or launch (i.e., activate) the application. As will be discussed further below, the input may be based on various factors including, for example, the degree of rotation from the reset position, the direction of rotation, the rotation to a corresponding function icon, the order of the rotation(s), and the like. The rotary, more ergonomically friendly input enabled by the rotatable device 110 may thus provide the wearable device 100 with greater utility by allowing increased functionality and accuracy of the input. The user can thereby provide more efficient and reliable input. Turning now to Figures 1C-D, side and top views, respectively, of a wearable device 150 according to an embodiment of the present disclosure are shown. Wearable device 150 is similar to wearable device 100 (Figures 1A-B), and includes a gyroscopic sensing system with a rotatable body 156 that can be rotated about the z-axis from a reset position (e.g., clockwise (see arrow 157)) (i.e., so that the landmark line 151 is out of alignment) for browsing, navigating, and/or launching applications associated with the wearable device 150. Figures 1E-F illustrate a side view and a top view, respectively, of a wearable device 160 according to an embodiment of the present disclosure. Wearable device 160 is similar to wearable device 100 (Figures 1A-B), and includes a rotatable edge 162 that can be rotated about the z-axis from a reset position (e.g., clockwise (see arrow 163)) (i.e., so that the landmark line 161 is out of alignment) for browsing, navigating, and/or launching applications associated with the wearable device 160. Figures 1G-H illustrate side and top views of a wearable device 170 according to an embodiment of the present disclosure.
The wearable device 170 is similar to wearable device 100 (Figures 1A-B), and includes both a rotatable edge 172 and a rotatable face 174, each of which can be rotated about the z-axis from a reset position (e.g., clockwise (see arrow 173) and counterclockwise (see arrow 175), respectively) (i.e., so that the landmark line 171 is out of alignment) for browsing, navigating, and/or launching applications associated with the wearable device 170. Turning now to Figure 2, an example of a gyroscopic sensing system 200 according to an embodiment of the present disclosure is shown. System 200 may include gyroscopic sensing system 210, processor 220, and memory device 230. The gyroscopic sensing system 210 may include a human machine interface (HMI) 212 (with a rotatable device) and a gyroscopic sensor 214 (both discussed in greater detail above with reference to Figures 1A-H). HMI 212 may be configured and arranged to receive input from user 205 (e.g., a human) and communicate the input to the gyroscopic sensor 214. The gyroscopic sensor 214 can communicate with the processor 220 (e.g., a system-on-chip (SoC) processor) and the memory device 230, and can sense user input in a manner that extends the HMI 212 to be more useful. System 200 may also include various optional components including, for example, camera 240, display 250, and other peripheral device(s) 260. In use, the gyroscopic sensing system 210 may receive input from the user 205 via the rotatable device of the HMI 212. Input(s) (corresponding to user selections of system or application functions, options, procedures, etc.) may be communicated, for example, by the user 205 rotating one or more rotatable components of the rotatable device. The rotatable device may be embodied as a rotatable device as discussed above with reference to Figures 1A-D. The gyroscopic sensor 214 may communicate with the HMI 212 to receive the user input(s) and sense the user input(s) based on, for example, the degree of rotation of the one or more rotatable components from the reset position. User input(s) may be communicated via components of system 200 (e.g., display 250). User input(s) may also be used to adjust, alter, change, navigate, browse, and/or select, etc., functions, options, procedures, and the like of the camera 240, display 250, or other peripheral device(s) 260. The gyroscopic sensing system 200 may thus provide wearable devices with greater utility by allowing for improved input functionality, ergonomics, reliability, and accuracy. As an example, given the limited size and screen real estate of a small wearable device (e.g., wearable device 150), the gyroscopic sensing system 210 may allow more functions and/or applications (which may be represented, for example, by icons) associated with the wearable device or its associated peripherals to be browsed and selected more quickly, reliably, and accurately when compared to other user interfaces. Furthermore, in at least some embodiments, the ergonomic and haptic layout of the HMI 212 may improve the speed, reliability, and accuracy of user input when compared to other user interfaces. In various embodiments, gyroscopic sensing system 200 may allow the functionality of a wearable device (e.g., wearable device 150, 160 or 170) to be improved by extending the usefulness of the HMI 212.
In some embodiments, "extending the utility of a human-machine interface" may mean providing a wearable device with improved input functionality, ergonomics, reliability, and accuracy consistent with what is disclosed herein greater practicality. In at least some embodiments, improved input functionality may be implemented via one or more rotatable components, such as, for example, rotatable components 112, 114, 116 (FIGS. 1A-B), which The functionality to select one or more applications associated with HMI 212 in a manner that quickly and accurately launch applications is enabled (as discussed more fully below with reference to FIGS. 4A-D ). For example, a rotatable assembly may allow a user to quickly and accurately zoom in and out of one or more functions associated with the HMI so that functionality, ergonomics, reliability, and accuracy of user input may be improved. In at least some embodiments, improved input functionality may be implemented via one or more rotatable components, such as, for example, rotatable components 112, 114, 116 (FIGS. 1A-B), which The wearable device and/or one or more applications associated with the wearable device is allowed to be locked and/or unlocked quickly and accurately (as discussed more fully below with reference to FIGS. 5A-B ). In at least some embodiments, improved input functionality may be implemented via one or more rotatable components, such as, for example, rotatable components 112, 114, 116 (FIGS. 1A-B), which Fast and accurate access to one or more applications associated with HMI 212 is allowed in a manner that enables multi-dimensional access to applications (as discussed more fully below with reference to FIGS. 6A-D). In some embodiments, the various improvements disclosed herein may be combined in various arrangements not expressly disclosed herein without departing from the scope of the present disclosure.FIG. 3 shows an example of a gyro sensing process according to an embodiment of the present disclosure. Process 300 may be implemented in one or more modules in executable software as a set of logical instructions stored in a memory such as random access memory (RAM), read only memory (ROM), programmable ROM ( PROM), firmware, flash memory, etc. in a machine- or computer-readable storage medium such as, for example, a programmable logic array (PLA), a field programmable gate array (FPGA), a complex programmable logic device (CPLD) In configurable logic, it is stored in fixed function logic hardware using circuit technologies such as, for example, application specific integrated circuits (ASIC), complementary metal oxide semiconductor (CMOS), or transistor-transistor logic (TTL) technology, or any combination thereof.The illustrated process block 302 provides for remaining in a "standby" (ie, reset, rest, or ready) state. At block 304, a determination may be made as to whether one or more rotatable components of the rotatable device have been rotated by greater than a predetermined number of degrees (eg, x degrees or x°). If "no", the process 300 returns to block 302 and remains in the "standby" state. If yes, the process 300 proceeds to block 306 where the gyro sensor is triggered and interrupts the processor (ie, the SoC) for a state change, eg, an update of the input-based user interface (touch screen or GUI). At block 308, the software interrupt routine is called and the new event is executed. Once complete, the illustrated process 300 returns to block 302 . 
Figures 4A-D show an illustration of an example of a quick start routine 400 for a wearable device according to an embodiment of the present disclosure. The quick launch routine 400 of the wearable device 401, consistent with what is disclosed herein, may define quick launch icons 404 for launching a predetermined list of applications and actions. The quick start routine 400 may begin at (A) by engaging (i.e., rotating) a rotatable component 402 (e.g., a watch face) about the z-axis in one direction (e.g., counterclockwise (see arrow 403)). Rotating the rotatable component 402 (e.g., beyond a predetermined degree or predetermined position) wakes the system by triggering a gyro sensor (not shown) and interrupting the processor (e.g., SoC) for a state change. Once awake, the system can display a list of shortcut icons to quickly launch applications. At (B), the system presents various application quick launch (i.e., shortcut) icons 404 for the user to browse and select in order to quickly launch an application. The total number of items or applications to be listed can be customized by the user (e.g., through software). The user selection may be made by, for example, rotating the rotatable component 402 in the opposite direction (e.g., clockwise (see arrow 405)). At (C), the illustrated system zooms in on the selected application quick launch icon 406 (i.e., the phonebook) so that it may be activated (i.e., launched) more easily and reliably, e.g., by being touched by the user and/or after a predetermined period of time (e.g., after 2 seconds). At (D), various individual entries 409 of the selected application 408 (i.e., the phonebook) can be browsed via the rotatable component 402, and a selected individual entry 410 can be enlarged and launched as time passes or upon touch. The illustrated quick start routine 400 may thus provide a fast, accurate and reliable means for extending the usefulness of the HMI. Figures 5A-B are illustrations of an example of a lock/unlock routine 500 of a gyroscopic sensing system according to an embodiment of the present disclosure. The lock/unlock routine 500 (i.e., unlocking routine) of the wearable device 501, consistent with what is disclosed herein, may be limited to a predetermined order of rotations of one or more of the rotatable components 502, 504 of the wearable device 501. The unlocking routine 500 may begin at (A) by engaging (i.e., rotating) the first rotatable component 502 (e.g., a watch face) about the z-axis in a first direction (e.g., counterclockwise (see arrow 503)) by a predetermined distance or degrees (e.g., 30 degrees or 30°), and then rotating the second rotatable component 504 (e.g., the watch edge) about the z-axis in a second direction (e.g., clockwise (see arrow 505)) by a predetermined distance or degrees (e.g., 60 degrees or 60°). Upon completion of the lock/unlock routine 500, the wearable device can transition from the locked state 506 to the unlocked state 508 quickly and reliably. Note that the wearable device 501 may similarly be locked by performing a similar operation (i.e., a locking routine). The lock/unlock routine 500 may thus provide another fast, accurate and reliable means for extending the utility of the HMI.
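The ordered two-rotation unlock gesture of routine 500 can be sketched as a simple sequence match (illustrative only; the tolerance value and the event encoding are assumptions, not part of the disclosure):

UNLOCK_SEQUENCE = [("face", "ccw", 30), ("edge", "cw", 60)]  # mirrors routine 500
TOLERANCE_DEG = 10  # assumed matching tolerance

def matches_unlock(events):
    """Compare observed (component, direction, degrees) rotation events
    against the stored unlock sequence, in order."""
    if len(events) != len(UNLOCK_SEQUENCE):
        return False
    for (comp, direc, deg), (e_comp, e_dir, e_deg) in zip(UNLOCK_SEQUENCE, events):
        if comp != e_comp or direc != e_dir or abs(deg - e_deg) > TOLERANCE_DEG:
            return False
    return True

print(matches_unlock([("face", "ccw", 28), ("edge", "cw", 63)]))  # True
print(matches_unlock([("edge", "cw", 60), ("face", "ccw", 30)]))  # False: wrong order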
Figures 6A-D show an illustration of an example of a multi-dimensional access routine 600 for a gyroscopic sensing system according to an embodiment of the present disclosure. The multi-dimensional access routine 600 of the wearable device 601, consistent with the disclosure herein, may define predetermined operations of a multi-dimensional application interface. The multi-dimensional access routine 600 may begin at (A) by engaging (i.e., rotating) the first rotatable component 602 (e.g., a watch body) about the z-axis in a first direction (e.g., clockwise (see arrow 603)) by a predetermined distance or degrees in order to activate (i.e., wake up) the user interface (i.e., touch screen or GUI). At (B), various application icons 604 may be presented on the user interface for selection by the user. The first rotatable component 602 can be further rotated in the clockwise direction, e.g., to navigate the various application icons 604. At (C), the selected application icon 606 (e.g., the phonebook) can be enlarged, and the second rotatable component 608 (e.g., the watch edge) can be rotated, e.g., clockwise, to launch or "enter" details 610 of the selected application 606 (i.e., to search for contacts in the phonebook). At (D), the selected detail 612 (i.e., the contact) may be zoomed in on for easier activation via a user's touch or after a predetermined period of time (e.g., 2 seconds). Other features may also be activated via the rotatable components 602, 608. For example, once a call has been initiated, the loudspeaker may be activated, e.g., by rotating the first rotatable component 602 (i.e., the watch body) in a clockwise direction (see arrow 613). The call can be ended, for example, by rotating the second rotatable component 608 (i.e., the watch edge) in a clockwise direction (see arrow 615). The multi-dimensional access routine 600 may thus provide another fast, accurate and reliable means for extending the usefulness of the HMI. Examples of suitable pseudocode for implementing the routines disclosed herein are provided as follows:
1) Standby
2) Detect rotation (clockwise = positive, counter-clockwise = negative)
3) If rotation detected above threshold, vibrate the wearable device
4) If rotation is from the face's rotation, then
   a) If rotation value = +x, set action_1 = action_1 + x
   b) Else, if rotation value = -y, set action_1 = action_1 - y
   c) Update GUI's menu
5) Else, if rotation is from the edge's rotation, then
   a) If rotation value = +x, set action_2 = action_2 + x
   b) Else, if rotation value = -y, set action_2 = action_2 - y
   c) Update GUI's menu
6) Else, if rotation is from the body's rotation, then
   a) If rotation value = +x, set action_3 = action_3 + x
   b) Else, if rotation value = -y, set action_3 = action_3 - y
   c) Update GUI's menu
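A runnable counterpart to this pseudocode might keep one accumulator per rotatable component and dispatch on the rotation source (illustrative only; the names and the mapping of rotations to actions are assumptions):

actions = {"face": 0, "edge": 0, "body": 0}  # action_1..action_3 in the pseudocode

def update_menu(source, value):
    print(f"GUI menu for {source} input -> {value} deg")

def on_rotation(source, degrees):
    """Accumulate signed rotation (clockwise positive) per component
    and refresh the menu, as in the pseudocode above."""
    actions[source] += degrees
    update_menu(source, actions[source])

on_rotation("face", +30)   # e.g., scroll the icon list
on_rotation("edge", -15)   # e.g., back out one level
on_rotation("body", +45)   # e.g., toggle the loudspeaker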
Additional notes and examples:
Example 1 may include a system having a memory device, a processor in communication with the memory device, and a gyroscopic sensing system in communication with the processor. The gyroscopic sensing system may include a human-machine interface for receiving user input and a gyroscopic sensor for sensing the user input in a manner that extends the utility of the human-machine interface.
Example 2 can include the system of Example 1, wherein the human-machine interface includes a rotatable device having one or more rotatable components for navigating and engaging one or more functions associated with the human-machine interface.
Example 3 can include the system of Example 2, wherein the one or more rotatable components are independently rotatable.
Example 4 can include the system of Example 2 or Example 3, wherein the one or more rotatable components include a body component, an edge component, and a surface component.
Example 5 may include the system of Example 1, wherein the gyro sensor is a single-axis sensor to sense rotation on one of the three axes of motion.
Example 6 may include the system of Example 5, wherein the gyro sensor is a microelectromechanical systems (MEMS) rate gyroscope.
Example 7 may include the system of Example 1, wherein the processor is a system-on-chip (SoC) processor.
Example 8 may include a wearable rotational sensing device having a memory device in communication with a processor and a rotational sensing system in communication with the processor. The rotational sensing system may include: a human-machine interface for receiving user input, wherein the human-machine interface forms at least part of a wearable device; and a gyroscopic sensor for sensing the user input in a manner that expands the usefulness of the human-machine interface, so as to make one or more functions of the wearable device more accessible.
Example 9 may include the apparatus of Example 8, wherein the human-machine interface includes a rotatable device having one or more rotatable components for navigating and engaging the one or more functions.
Example 10 may include the apparatus of Example 9, wherein the one or more functions are to be identified via one or more icons.
Example 11 may include the apparatus of Example 9 or Example 10, wherein the one or more icons are used to select or adjust one or more functions associated with the wearable device.
Example 12 may include the apparatus of Example 10, wherein the one or more icons are used to select or adjust one or more functions associated with a device other than the wearable device.
Example 13 may include the apparatus of Example 10, wherein the rotatable assembly is used to zoom in and out of the one or more functions in order to improve one or more of functionality, ergonomics, reliability, or accuracy of the user input.
Example 14 may include the apparatus of Example 8, wherein the gyroscopic sensor senses a distance or degrees of rotation of the human-machine interface to detect a change in the state of the gyroscopic sensing system.
Example 15 may include a gyroscopic sensing method comprising receiving user input via a human-machine interface; and sensing the user input via a gyroscopic sensor in a manner that extends the utility of the human-machine interface.
Example 16 may include the method of Example 15, further comprising engaging one or more functions associated with the human-machine interface via one or more rotatable components of a rotatable device of the human-machine interface.
Example 17 may include the method of Example 16, wherein the one or more rotatable components are independently rotatable.
Example 18 may include the method of Example 16 or Example 17, wherein the one or more rotatable components include a body component, an edge component, and a surface component.
Example 19 may include the method of Example 15, wherein the gyro sensor is a single-axis sensor to sense rotation on one of the three axes of motion.
Example 20 may include the method of Example 19, wherein the gyro sensor is a microelectromechanical systems (MEMS) rate gyroscope.
Example 21 can include at least one computer-readable storage medium having a set of instructions that, when executed by a computing device, cause the computing device to: receive user input via a human-machine interface; and sense the user input, via a gyroscopic sensor, in a manner that extends the utility of the human-machine interface.
Example 22 can include the at least one computer-readable storage medium of Example 21, wherein the instructions, when executed, cause the computing device to engage one or more functions associated with the human-machine interface via one or more rotatable components of a rotatable device of the human-machine interface.
Example 23 may include the at least one computer-readable storage medium of Example 22, wherein the one or more rotatable components are independently rotatable.
Example 24 can include the at least one computer-readable storage medium of Example 22 or Example 23, wherein the one or more rotatable components include a body component, an edge component, and a surface component.
Example 25 may include the at least one computer-readable storage medium of Example 21, wherein the gyroscopic sensor is a single-axis sensor to sense rotation on one of three axes of motion.
Example 26 may include a rotational sensing apparatus having means for receiving user input, and means for sensing the user input in a manner that expands the utility of a human-machine interface.
Example 27 may include the apparatus of Example 26, further comprising means for engaging one or more functions associated with the human-machine interface.
Example 28 may include the apparatus of Example 27, wherein the means for engaging one or more functions associated with the human-machine interface is independently operable.
Example 29 may include the apparatus of Example 27 or Example 28, wherein the means for engaging one or more functions associated with the human-machine interface may include a body component, an edge component, and a surface component.
Example 30 can include the apparatus of Example 26, wherein the means for sensing can include a single-axis sensor to sense rotation on one of the three axes of movement.
Example 31 can include the apparatus of Example 30, wherein the means for sensing can include a microelectromechanical systems (MEMS) rate gyroscope.
As will be appreciated by those of ordinary skill in the art, the rotatable assemblies disclosed herein may be arranged and/or rearranged in various combinations, including arrangements that may or may not have been specifically discussed herein, without departing from the scope of this disclosure. For example, certain embodiments may include arrangements having a rotatable body, a rotatable edge, and a rotatable surface. Furthermore, the direction of rotation of the rotatable components is not intended to be limiting, and may be reversed and/or rearranged without departing from the scope of the present disclosure. Moreover, various embodiments may utilize rotation to specific angles, which may be further facilitated through the use of various stops, bumps, vibrations, haptics, sounds, mechanical notches, and other arrangements that provide feedback to indicate specific angles. These embodiments may be particularly useful for users with impaired vision and/or impaired sensitivity to touch. Additionally, although the embodiments disclosed herein have been shown with respect to wearable devices having a substantially circular shape, other shapes may also be used. For example, a rectangular smart watch design could be used. In such use, once the rotatable component (e.g., watch body or edge) has been rotated, e.g., during text entry, the keyboard orientation can be switched to a landscape orientation in order to utilize (i.e., match) the device design and improve the user experience.

Embodiments may be implemented using hardware elements, software elements, or a combination of software and hardware elements. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, etc.), integrated circuits, application specific integrated circuits (ASICs), programmable logic devices (PLDs), digital signal processors (DSPs), field programmable gate arrays (FPGAs), logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. Examples of software may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application programming interfaces (APIs), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints.

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within a processor, and which, when read by a machine, cause the machine to fabricate logic to perform the techniques described herein. Such representations, referred to as "IP cores," may be stored on a tangible machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Embodiments are suitable for use with various types of semiconductor integrated circuit ("IC") chips.
Examples of these IC chips include, but are not limited to, processors, controllers, chipset components, programmable logic arrays (PLAs), memory chips, network chips, and the like. Additionally, in some of the drawings, signal conductors are represented with lines. Some may be different, to indicate more constituent signal paths; may have a numerical label, to indicate a number of constituent signal paths; and/or may have arrows at one or more ends, to indicate primary information flow direction. This, however, should not be construed in a limiting manner. Rather, such added detail may be used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit. Any represented signal lines, whether or not having additional information, may actually comprise one or more signals that may travel in multiple directions, and may be implemented with any suitable type of signal scheme, e.g., digital or analog lines implemented with differential pairs, optical fiber lines, and/or single-ended lines.

Example sizes/models/values/ranges may have been given, although embodiments are not limited to the same. As manufacturing techniques (e.g., photolithography) mature over time, it is expected that devices of smaller size could be manufactured. In addition, well known power/ground connections to IC chips and other components may or may not be shown within the figures, for simplicity of illustration and discussion, and so as not to obscure certain aspects of the described embodiments. Further, various configurations may be shown in block diagram form in order to avoid obscuring the embodiments, and also in view of the fact that specifics with respect to implementation of such block diagram configurations are highly dependent upon the platform within which the embodiment is to be implemented, i.e., such specifics should be well within the purview of one skilled in the art. Where specific details (e.g., circuits) are set forth in order to describe exemplary embodiments, it should be apparent to one skilled in the art that embodiments can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative rather than restrictive.

Some embodiments may be implemented, for example, using a machine or tangible computer-readable medium or article which may store an instruction or a set of instructions that, if executed by a machine, may cause the machine to perform a method and/or operations in accordance with the embodiments. Such a machine may include, for example, any suitable processing platform, computing platform, computing device, processing device, computing system, processing system, computer, processor, or the like, and may be implemented using any suitable combination of hardware and/or software. The machine-readable medium or article may include, for example, any suitable type of memory unit, memory device, memory article, memory medium, storage device, storage article, storage medium and/or storage unit, such as memory, removable or non-removable media, erasable or non-erasable media, writeable or re-writeable media, digital or analog media, hard disk, floppy disk, compact disk read only memory (CD-ROM), compact disk recordable (CD-R), compact disk rewriteable (CD-RW), optical disk, magnetic media, magneto-optical media, removable memory cards or disks, various types of digital versatile disk (DVD), magnetic tapes, tape cassettes, and the like.
The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, encrypted code, and the like, implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language.

Unless specifically stated otherwise, it may be appreciated that terms such as "processing," "computing," "operating," "determining," or the like refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulates and/or transforms data represented as physical (e.g., electronic) quantities within the computing system's registers and/or memories into other data similarly represented as physical quantities. The embodiments are not limited in this context.

The term "coupled" is used herein to refer to any type of relationship, direct or indirect, between the components in question, and may apply to electrical, mechanical, fluid, optical, electromagnetic, electromechanical or other connections. In addition, the terms "first," "second," and the like are used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated.

Those skilled in the art will appreciate from the foregoing description that the broad techniques of the described embodiments can be implemented in a variety of forms. Therefore, while the embodiments of this invention have been described in connection with particular examples thereof, the true scope of the embodiments should not be so limited, since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims. |
A processor includes a decode unit to decode an instruction that is to indicate a page of a protected container memory, and a storage location outside of the protected container memory. An execution unit, in response to the instruction, is to ensure that there are no writable references to the page of the protected container memory while it has a write protected state. The execution unit is to encrypt a copy of the page of the protected container memory. The execution unit is to store the encrypted copy of the page to the storage location outside of the protected container memory, after it has been ensured that there are no writable references. The execution unit is to leave the page of the protected container memory in the write protected state, which is also valid and readable, after the encrypted copy has been stored to the storage location. |
A system on a chip comprising:an execution unit, the execution unit for accessing a control structure in response to an instruction indicating that data from an encrypted portion of a virtual machine of a source computer system is to be encrypted for transfer to a memory location outside of the encrypted portion of the virtual machine, the control structure to store a migration capable cryptographic key being capable of being migrated from the source computer system to a destination computer system;a cryptographic unit, in response to the instruction, to:decrypt a copy of the data of the encrypted portion of the virtual machine; andencrypt the decrypted copy of the data with the migration capable cryptographic key; anda memory controller, in response to the instruction, to store the encrypted copy of the data, after the encryption by the cryptographic unit, to the memory location outside of the encrypted portion of the virtual machine,wherein the system on a chip is to leave the data within the encrypted portion of the virtual machine valid and readable after the encrypted copy of the data has been stored to the memory location outside of the encrypted portion of the virtual machine.The system on a chip of claim 1, wherein the system on a chip is to perform message authentication code computations based on the copy of the data in response to the instruction.The system on a chip of claim 1 or 2, wherein the control structure is also to store crypto-metadata.The system on a chip of claim 1, wherein the system on a chip, in response to the instruction, is to ensure no writable permissions for the data are cached in a processor of the system on a chip, while the data within the encrypted portion of the virtual machine has a write protected state, by ensuring all translations for the data within the encrypted portion of the virtual machine have been flushed from all translation lookaside buffers of the processor.A method of transferring data from an encrypted portion of a virtual machine of a source computer system to a memory location outside of the encrypted portion of the virtual machine, comprising:accessing a control structure in response to an instruction indicating that the data from the encrypted portion of the virtual machine of the source computer system is to be encrypted for transfer to the memory location outside of the encrypted portion of the virtual machine, wherein the control structure stores a migration capable cryptographic key being capable of being migrated from the source computer system to a destination computer system;in response to the instruction, decrypting a copy of the data of the encrypted portion of a virtual machine with a cryptographic unit; and encrypting the decrypted copy of the data with the migration capable cryptographic key with the cryptographic unit; andin response to the instruction, storing the encrypted copy of the data, after the encryption by the cryptographic unit, to the memory location outside of the encrypted portion of the virtual machine,wherein the data within the encrypted portion of the virtual machine is left valid and readable after the encrypted copy of the data has been stored to the memory location outside of the encrypted portion of the virtual machine.The method of claim 5, including performing message authentication code computations based on the copy of the data in response to the instruction.The method of claim 5 or 6, wherein the control structure is also to store crypto-metadata.The method of claim 5, further including, in 
response to the instruction, ensuring that no writable permissions for the data are cached in a processor of the system on a chip, while the data within the encrypted portion of the virtual machine has a write protected state, by ensuring all translations for the data within the encrypted portion of the virtual machine have been flushed from all translation lookaside buffers of the processor. |
BACKGROUND
Technical Field
Embodiments described herein generally relate to processors. In particular, embodiments described herein generally relate to processors having architectures that support enclaves or other protected containers.
Background Information
Desktop computers, laptop computers, smartphones, servers, routers and other network elements, and various other types of computer systems are often used to process secret or confidential information. A few representative examples of such secret or confidential information include, but are not limited to, passwords, account information, financial information, information during financial transactions, confidential company data, enterprise rights management information, personal calendars, personal contacts, medical information, other personal information, and the like. It is generally desirable to protect such secret or confidential information from inspection, tampering, and the like.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention may best be understood by referring to the following description and accompanying drawings that are used to illustrate embodiments. In the drawings:
Figure 1 is a block diagram of a computing environment in which a protected container may be migrated from a source computer system to a destination computer system.
Figure 2 is a block diagram of a first example embodiment of a software environment in which live migration may be performed on a protected container of a virtual machine.
Figure 3 is a block diagram of a second example embodiment of a software environment in which migration may be performed on a protected container of an operating system container.
Figure 4 is a block flow diagram of an embodiment of a method of migration of a protected container from a source computer system to a destination computer system.
Figure 5 is a block flow diagram of an embodiment of a method of write protecting pages of a protected container memory, and storing encrypted copies of the write protected pages outside of the protected container memory, while leaving the write protected pages valid and readable in the protected container memory.
Figure 6 is a block diagram of an embodiment of a processor that is operative to perform an embodiment of a set of one or more instructions to support live migration of protected containers.
Figure 7 is a block diagram of an embodiment of a processor that is operative to perform an embodiment of a protected container page write protect instruction.
Figure 8 is a block diagram of an embodiment of a processor that is operative to perform an embodiment of a protected container page encrypt and store encrypted copy outside of protected container memory instruction.
Figure 9 is a block diagram of an embodiment of a processor that is operative to perform an embodiment of a protected container page write unprotect and encrypted page copy invalidation instruction.
Figure 10A is a block diagram illustrating an embodiment of an in-order pipeline and an embodiment of a register renaming out-of-order issue/execution pipeline.
Figure 10B is a block diagram of an embodiment of a processor core including a front end unit coupled to an execution engine unit and both coupled to a memory unit.
Figure 11A is a block diagram of an embodiment of a single processor core, along with its connection to the on-die interconnect network, and with its local subset of the Level 2 (L2) cache.
Figure 11B is a block diagram of an embodiment of an expanded view of part of the processor core of Figure 11A.
Figure 12 is a block diagram of an embodiment of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics.
Figure 13 is a block diagram of a first embodiment of a computer architecture.
Figure 14 is a block diagram of a second embodiment of a computer architecture.
Figure 15 is a block diagram of a third embodiment of a computer architecture.
Figure 16 is a block diagram of a fourth embodiment of a computer architecture.
Figure 17 is a block diagram of use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set, according to embodiments of the invention.
DETAILED DESCRIPTION OF EMBODIMENTS
Disclosed herein are instructions that perform operations useful during live migration of protected containers, processors to execute the instructions, methods performed by the processors when processing or executing the instructions, and systems incorporating one or more processors to process or execute the instructions. Although the instructions are mainly described in conjunction with the live migration of protected containers, it is to be appreciated that the instructions are not limited to such uses but rather have general utility and may optionally be used for other uses entirely unrelated to live migration of protected containers. In the following description, numerous specific details are set forth (e.g., specific instruction operations, data structures and contents thereof, processor configurations, microarchitectural details, sequences of operations, etc.). However, embodiments may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail to avoid obscuring the understanding of the description.

Figure 1 is a block diagram of a computing environment in which a protected container 107 may be migrated from a source computer system 100 to a destination computer system 121. The source and destination computer systems may represent desktop computer systems, laptop computer systems, smartphones, servers, network elements, or other types of computer systems. As shown, the source and destination computer systems may be coupled, or otherwise in communication with one another, by one or more intervening networks 120. In one aspect, the source and destination computer systems may be coupled over the Internet (e.g., the "cloud"). Alternatively, the source and destination computer systems may be coupled more directly by one or more local wired or wireless links.

The source computer system includes at least one processor 112, a regular memory 102, and a protected container memory 106. The regular memory and the protected container memory may represent different portions of system memory that may include one or more types of physical memory (e.g., dynamic random access memory (DRAM), flash memory, etc.). The regular and protected container memories may have different levels of protection or security enforced in part by logic of the processor. The regular memory may represent a portion of the system memory of the type commonly used to store applications, data, and the like. As shown, the regular memory may store privileged system software 103 (e.g., a virtual machine monitor, one or more operating systems, etc.). The regular memory may also store one or more user-level applications (e.g., network management applications, database applications, email applications, spreadsheet applications, etc.).
In one aspect, the source computer system may represent a so-called "open" system that generally does not significantly restrict user choice with regards to the system software and user-level applications that may be loaded onto the system.

The protected container memory 106 may have a higher level of protection and/or security than the regular memory 102. The higher level of protection and/or security may be enforced, controlled, or otherwise provided at least in part by hardware and/or other on-die logic of the processor. In some embodiments, the protected container memory may represent a portion of processor reserved memory that is reserved exclusively for use by the processor, whereas the regular memory may not be part of the processor reserved memory. By way of example, the processor may have one or more range registers that correspond to the protected container memory. The range registers may be used to store data associated with a range of the protected container memory, and may be consulted upon attempted accesses to the protected container memory as part of providing the protection and/or security. In one aspect, the data or ranges may be stored in the range registers by a basic input/output system (BIOS) during boot.
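For illustration only, the following C sketch models how such range registers could be consulted on an attempted access; the register layout and the helper name are hypothetical assumptions, not details taken from this disclosure.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical model of a range register pair delimiting the protected
 * container memory; real hardware layouts will differ. */
struct prm_range {
    uint64_t base;   /* inclusive base physical address (e.g., programmed by BIOS at boot) */
    uint64_t limit;  /* exclusive limit physical address */
};

/* Returns true when a physical address falls inside the protected range and
 * must therefore be subjected to the protected container access controls. */
static bool in_protected_range(const struct prm_range *r, uint64_t paddr)
{
    return paddr >= r->base && paddr < r->limit;
}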
A protected container 107 (e.g., one or more pages 108 of the protected container) may be stored in the protected container memory 106. The processor may have an instruction set 113 that includes instructions to interact with (e.g., create, destroy, enter, exit, manage paging in, perform security operations on, etc.) the protected container. Some of the instructions may be privileged-level instructions that can be performed by privileged-level software (e.g., software 103) but not by unprivileged or user-level software. Other instructions may be unprivileged or user-level instructions. As one example, the protected container may be created for a protected container utilization domain 105, such as, for example, a virtual machine module, application module, or the like. For example, the privileged system software may create the protected container. The protected container may be generally opaque to the privileged-level software (e.g., the privileged-level software may not be able to see the code and/or data within the protected container), yet may be managed by the privileged-level software (e.g., through privileged-level instructions of the instruction set).

The protected container utilization domain may store secret or confidential data in the protected container. The protected container may help to provide confidentiality and, in some cases, optionally one or more other protections (e.g., integrity protection, replay protection, etc.) to the secret or confidential information in the protected container. Confidentiality generally involves preventing data disclosure. Integrity generally involves ensuring there is no data tampering. At least some hardware logic of the processor may help to provide such confidentiality and/or other protections. In various embodiments, the protected container may represent a secure enclave, hardware enforced container, hardware managed execution environment, hardware managed isolated execution region, secure and/or private memory region to be used by an application, or other protected container. In some embodiments, the protected container may represent an Intel® Software Guard Extensions (Intel® SGX) enclave, although the scope of the invention is not so limited. In some embodiments, the protected container memory may represent an Intel® SGX enclave page cache (EPC) that is operative to store pages of one or more running or executing secure enclaves, although the scope of the invention is not so limited.

In some embodiments, a protected container page metadata structure (PCPMS) 109 may be used to store metadata (e.g., security metadata, access control metadata, etc.) for the protected container 107. As shown, in some embodiments, the PCPMS may optionally be stored in the protected container memory, although this is not required. In some embodiments, the PCPMS may store such metadata for each page stored in the protected container memory. In one aspect, the PCPMS may be structured to have different entries for different corresponding pages in the protected container memory, although other ways of structuring the PCPMS are also possible (e.g., other types of tables, data structures, etc.). Each entry may store metadata for the corresponding page. Examples of suitable types of metadata for protected container pages include, but are not limited to, information to indicate whether the page is valid or invalid, information to indicate a protected container to which the protected container page belongs, information to indicate the virtual address through which the protected container page is allowed to be accessed, information to indicate read/write/execute permissions for the protected container page, and the like, and various combinations thereof, depending upon the particular implementation. Alternatively, less metadata, additional metadata, or other combinations of metadata may optionally be stored in different embodiments. The scope of the invention is not limited to any known type of metadata to be stored in the PCPMS. One example of a suitable PCPMS, for some embodiments, is an Intel® SGX enclave page cache map (EPCM), although the scope of the invention is not so limited.
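As a conceptual illustration only, one such per-page metadata entry might be modeled as follows in C; the field names and sizes are assumptions chosen to mirror the kinds of metadata listed above, and do not describe an actual EPCM layout.

#include <stdint.h>

/* Hypothetical PCPMS entry; one entry per page of the protected container memory. */
struct pcpms_entry {
    uint8_t  valid;            /* page holds valid protected container contents */
    uint8_t  readable;         /* read permission for the page */
    uint8_t  writable;         /* write permission for the page */
    uint8_t  executable;       /* execute permission for the page */
    uint8_t  write_protected;  /* write protected state (discussed further below) */
    uint64_t container_id;     /* identifies the protected container owning the page */
    uint64_t linear_addr;      /* virtual address through which access is allowed */
};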
As shown in Figure 1, the protected container (e.g., the pages thereof) may optionally be stored in the protected container memory. Likewise, the PCPMS may optionally be stored in the protected container memory. In addition, or alternatively, the protected container (e.g., the pages thereof) and/or the PCPMS may optionally be stored in an on-die protected container storage of the processor. For example, the processor may have one or more caches 115 to store the protected container pages and/or the PCPMS. By way of example, one or more dedicated caches may be used, dedicated portions of one or more caches may be used, or a combination thereof. As another option, the processor may have another type of dedicated storage besides caches to store such pages or structures. A combination of both off-die memory and on-die cache(s) or other storage is also suitable.

Referring again to Figure 1, the processor may also have protected container logic 116. The protected container logic may include hardware, firmware, or a combination thereof, to perform the instructions and/or otherwise support the protected containers (e.g., control accesses to the protected containers). The protected container logic includes access control logic 117. The access control logic may be operative to enforce, control, or otherwise provide access controls for the protected container memory and data of the protected container memory when it is resident on-die of the processor (e.g., in the cache(s), registers, other structures, etc.). Different types of access control logic 117 are suitable in different embodiments, depending upon the particular implementation.

In some embodiments, the access control logic may include a memory management unit (MMU) and/or a page miss handler (PMH) unit that may be operative to control access to the protected container and/or the protected container memory in part by consulting with page tables, range registers, the PCPMS 109, or the like, or a combination thereof, depending upon the particular implementation. In some embodiments, the access control logic may include logic that is operative to control access to code and/or data of the protected container when the code and/or data is resident within the processor. For example, the logic may be operative to control access to the code and/or data when it is stored or otherwise resident in an unencrypted form in caches, registers, and other structures or components within the processor during runtime when used for computation. In one aspect, the logic may be operative to allow authorized accesses to the code and/or data of a protected container (whether it is stored in the protected container memory or is resident on-die of the processor) from code of the same protected container, but may prevent unauthorized accesses to the code and/or data of the protected container (whether it is stored in the protected container memory or is resident on-die of the processor) by code outside of the protected container.

The protected container logic may also include a cryptographic unit 118. The cryptographic unit may be operative to perform encryption and decryption. In some embodiments, the cryptographic unit may automatically encrypt code and/or data of protected containers before the code and/or data is stored out of the processor (e.g., to system memory), such as, for example, during writes to the system memory, eviction of cache lines holding protected container code and/or data, etc. This may help to prevent the code and/or data from being viewed (e.g., help to provide for confidentiality of the data). The cryptographic unit may also be operative to decrypt encrypted code and/or data of enclave pages when they are received into the processor (e.g., from system memory).

In some embodiments, the cryptographic unit may also optionally be operative to cryptographically provide integrity protection and/or authentication to the code and/or data of protected containers. For example, in some embodiments, the cryptographic unit may automatically compute a message authentication code, or other authentication or integrity check data, for code and/or data of protected containers before the code and/or data is stored out of the processor (e.g., to system memory). The cryptographic unit may also optionally be operative to use such authentication or integrity check data to authenticate or ensure the integrity of code and/or data of protected container pages when they are received into the processor (e.g., from system memory). This may help to allow for authentication or integrity checking of the data to help detect any tampering or changing of the data. The logic may be operative to detect integrity violations of protected container pages and to prevent access to tampered code/data upon detection. In one aspect, such cryptographic operations may be performed automatically and autonomously by the cryptographic unit, and transparently to software (e.g., as opposed to software having to perform multiple instructions of a software cryptographic algorithm).
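To make the eviction path concrete, the following C sketch shows, conceptually, what the cryptographic unit does when a protected container page leaves the processor: encrypt for confidentiality, then compute integrity check data over the result. The function names, key handling, and cipher/MAC choices are illustrative assumptions, not the actual microarchitecture.

#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096

/* Hypothetical primitives standing in for the hardware cryptographic unit. */
extern void page_encrypt(const uint8_t key[16], const uint8_t *in,
                         uint8_t *out, size_t len);
extern void page_mac(const uint8_t key[16], const uint8_t *data,
                     size_t len, uint8_t mac_out[16]);

/* Conceptual eviction: produce ciphertext plus a MAC so that tampering can be
 * detected (and access prevented) when the page is later reloaded. */
void evict_protected_page(const uint8_t enc_key[16], const uint8_t mac_key[16],
                          const uint8_t page[PAGE_SIZE],
                          uint8_t out_page[PAGE_SIZE], uint8_t out_mac[16])
{
    page_encrypt(enc_key, page, out_page, PAGE_SIZE);  /* confidentiality */
    page_mac(mac_key, out_page, PAGE_SIZE, out_mac);   /* integrity/authentication */
}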
In some embodiments, the cryptographic unit may optionally selectively perform such cryptographic operations for the code and/or data of the protected containers, but generally not for code and/or data of regular pages not belonging to protected containers.

In some embodiments, the protected container logic 116 may optionally include logic to generate and use version information associated with code and/or data of protected containers. For example, pages of the protected container may optionally be assigned version information (e.g., a unique version value, version counter, etc.) when they are stored out of the processor (e.g., to system memory). The protected container logic may optionally include logic to review such version information when the code and/or data (e.g., the pages) of the protected container are reloaded. In some embodiments, the protected container logic may only allow protected container pages indicated to be legitimate or valid by the version information (e.g., only the last evicted version) to be loaded. This may help to prevent replay of protected container code and/or data.
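The anti-replay check described above can be sketched in C as follows; the version-slot representation and helper name are hypothetical, intended only to illustrate the "only the last evicted version may be reloaded" rule.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical version slot kept for each page stored out of the processor. */
struct version_slot {
    uint64_t version;  /* version value assigned when the page was stored out */
    bool     in_use;   /* an evicted copy with this version is outstanding */
};

/* Allow a reload only if the presented version matches the recorded one, then
 * retire the slot so the same encrypted copy cannot be replayed later. */
static bool allow_reload(struct version_slot *slot, uint64_t presented_version)
{
    if (!slot->in_use || slot->version != presented_version)
        return false;      /* stale or replayed copy: reject */
    slot->in_use = false;  /* consume the version to prevent replay */
    return true;
}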
To further illustrate certain concepts, certain types of protection or security have been described above. However, it is to be appreciated that the types and levels of protection or security may vary from one implementation to another depending upon the particular implementation, environment, need for security, cost versus security tradeoffs, and the like. The embodiments described herein may be used in conjunction with protected containers of varying levels of security or protection. The scope of the invention is not limited to any known type or level of protection or security.

Referring again to Figure 1, in some embodiments, the privileged system software 103 of the source computer system 100 may include a protected container live migration module 104 that is operative to control or manage live migration of the protected container 107 from the source computer system to the destination computer system 121. The term "live" means that the migration may be performed in part while the protected container is running on the source computer system. As further shown, in some embodiments, the processor includes one or more instructions 114 to support live migration of protected containers (e.g., the protected container 107). As mentioned above, these instructions are not limited to only being used for live migration of protected containers but rather may also optionally be used for various other purposes according to the creative genius of the programmer. In some embodiments, the one or more instructions may be privileged-level instructions that may only be executed at one or more privileged levels of execution, but not at a user-level or unprivileged-level of execution. The one or more privileged levels of execution are higher than a user-level of execution privilege that is used by user-level applications (e.g., word processing applications, email applications, spreadsheets, etc.). In some embodiments, the protected container live migration module 104 may use the one or more instructions 114. Moreover, in some embodiments, the processor includes logic 119 operative to perform the one or more instructions 114 to support the live migration of the protected containers. Advantageously, the instruction(s) and logic to perform the instructions may help to increase the efficiency of performing a live migration of the protected container, although the instructions may alternatively optionally be used for other purposes.

In some embodiments, the source computer system may include a key manager protected container 110 that may be operative to manage one or more migration capable or migratable keys (e.g., a key hierarchy) that may correspond to the protected container 107. As shown, the key manager protected container may also optionally be stored in the protected container memory. The key manager protected container may represent a trusted entity to control or manage the keys and allow them to be virtualized and migrated from the source computer system to the destination computer system in conjunction with the protected container being migrated from the source computer system to the destination computer system. In some embodiments, the key manager protected container may represent an architectural protected container. One suitable example of the key manager protected container, in an Intel® SGX implementation embodiment, is a migration engine (MigE), although the scope of the invention is not so limited. In other embodiments, the migration capable keys may optionally be implemented differently, such as, for example, stored in a special type of page in the protected container, controlled or managed by a different type of trusted entity, etc.

Depending upon the particular implementation, one or more other structures 111 may also optionally be used along with the protected container. For example, in some embodiments, there may be a structure to hold the one or more migration capable or migratable keys (e.g., a key hierarchy). For example, in an Intel® SGX implementation embodiment, there may be an SGX domain control structure (SDCS) to store migratable platform SGX keys, counters, and domain state. As another example, in some embodiments, one or more version pages may optionally be included to store version information for protected container pages. For example, in an Intel® SGX implementation embodiment, there may be one or more version array pages operative to store version arrays for pages in the protected container memory. For example, there may be VA pages for pages stored from the EPC and invalidated in the EPC, and, according to an embodiment, VAX pages for pages stored from the EPC and retained in a write protected, valid, and readable state in the EPC, as will be explained further below. As yet another example, in an Intel® SGX implementation embodiment, there may be a paging crypto metadata (PCMD) structure that is operative to store crypto metadata associated with a paged-out page, and a page metadata structure (PGMD) that is operative to store metadata about the page. It is to be appreciated that data and/or metadata associated with protected containers may optionally be partitioned or combined in many different ways in different implementations, and that the scope of the invention is not limited to any known such way of partitioning or combining the data and/or metadata.

After the migration is complete, the destination computer system may have a migrated protected container 122. A simplified version of the destination computer system is shown, although it is to be appreciated that the destination computer system may optionally be similar to or the same as the source computer system.
In general, it may be desirable to live migrate the protected container for various different reasons, and the scope of the invention is not limited to any known reason. In one aspect, the protected container may be migrated in conjunction with load balancing. For example, a virtual machine or other protected container utilization domain running on a source server of a datacenter, or cloud computing environment, may be using the protected container, and the domain as well as the protected container may be migrated from the source server to a destination server in order to balance workloads on the source and destination servers. In other embodiments, protected containers may be migrated for other reasons, such as, for example, to relocate workloads from a source computer system that is to be serviced, maintained, or upgraded, or to relocate workloads from a running desktop to a portable computer, etc.

Figure 2 is a block diagram of a first example embodiment of a software environment in which live migration may be performed on a protected container 207 of a virtual machine (VM) 205-N. The computer system includes a virtual machine monitor (VMM) 203, and a first VM 205-1 through an Nth VM 205-N. The VMM may represent a host program, often in firmware, that is operative to provide virtualization management or control to allow the computer system to support the VMs. Representatively, the VMM may manage one or more processors, memory, and other resources of the computer system and allocate resources associated therewith to the VMs. Each VM may represent a virtual or software implementation of a machine that emulates execution of programs like a physical machine. Each VM may support the execution of a guest operating system (OS). As shown, the first VM may have a first guest operating system (OS) 224-1 and the Nth VM may have an Nth guest OS 224-N. The operating systems may either be multiple instances of the same OS or different OSs.

In the illustrated example, the Nth VM is to utilize the protected container 207. In some embodiments, the VMM may include a protected container live migration module 204 to control or manage the migration of the protected container out of the computer system. One or more processors (not shown) of the computer system may have one or more instructions, and logic to perform the instructions, to support the live migration, as described elsewhere herein. In some embodiments, the first VM may include a key manager protected container 210. In some embodiments, the key manager protected container may control or manage a set of one or more per-VM or VM specific keys (e.g., key hierarchies) each corresponding to a different one of the VMs that has a corresponding protected container.

Figure 3 is a block diagram of a second example embodiment of a software environment in which migration may be performed on a protected container 307 of an operating system container 305. The computer system includes an operating system (OS) 303, the OS container 305 having the protected container 307, and an OS control service 325. The OS 303 may represent the kernel and may provide for container-based virtualization or OS virtualization. The OS container may represent an application within the OS that represents a virtualization layer, similar to guest virtual machines. In some embodiments, the OS may include a protected container live migration module 304 to control or manage migration of the protected container out of the computer system.
As before, one or more processors (not shown) of the computer system may have one or more instructions, and logic to perform the instructions, to support live migration of the protected container, as described elsewhere herein. Likewise, in some embodiments, the OS control service may include a key manager protected container 310. In some embodiments, the key manager protected container may control or manage a set of one or more per-OS container or OS container specific keys (e.g., key hierarchies) each corresponding to a different one of the OS containers that has a corresponding protected container.

Figure 4 is a block flow diagram of an embodiment of a method 430 of migration of a protected container from a source computer system to a destination computer system. In some embodiments, the method may be controlled or managed by cooperating protected container live migration modules on the source and destination computer systems.

At block 431, copies of pages of the protected container may be stored from a protected container memory of the source computer system to encrypted copies in a regular memory of the destination computer system, while an application or domain (e.g., a VM, OS container, etc.) that is using the protected container is running on the source computer system. In some embodiments, the operation at block 431 may optionally be performed using the method of Figure 5, or a portion thereof, including any of the variations mentioned therefor, although the scope of the invention is not so limited.

In some embodiments, each page may be write protected in the protected container memory of the source computer system, before a copy (e.g., an encrypted copy) of the page is stored from the protected container memory of the source computer system to an outside location (e.g., to a regular memory of the source computer system). In some embodiments, before storing the copy (e.g., the encrypted copy) of each write protected page from the protected container memory of the source computer system to the outside location, the corresponding processor of the source computer system may ensure or verify that there are no writable references to the write protected page. In some embodiments, after the copy of each write protected page has been stored from the protected container memory of the source computer system to the outside location, the write protected page may be retained as write-protected, but valid and readable, in the protected container memory of the source computer system. That is, in some embodiments, copies of the page may exist simultaneously in the protected container memory (e.g., an EPC) of the source computer system, and also as an encrypted copy outside of the protected container memory (e.g., in regular memory of the source or destination computer systems). Advantageously, this may allow the running application or domain of the source computer system to read data from the page while the protected container live migration module concurrently works to migrate the page to the destination computer system.

In some embodiments, in order to reduce the amount of downtime needed to achieve the full migration of the protected container, from most to substantially all pages may be copied from the protected container memory to the regular memory of the destination computer system, while the application or domain (e.g., the VM or OS container) is running on the source system. These pages may at least conceptually be viewed as a write working set and a non-write working set.
Pages in the write working set tend to be written during the migration window or timespan while the application or domain is running. In contrast, pages in the non-write working set tend to not be written, or not likely be written, during the migration window or timespan while the application or domain is running. Generally, from most to substantially all of the pages in the non-write working set, and potentially some of the pages in the write working set (e.g., those which have not been written after they have been copied from the protected container memory), may potentially be copied from the protected container memory to the outside location while the application or domain is running. In addition, write protected pages copied from the protected container memory may still be read from, even after they have been copied from the protected container memory, since these pages are write protected, but are still valid (unless a subsequent write to the pages has been detected), and readable. This may allow the application or domain to read from these pages while the migration of these pages progresses.

In one aspect, the protected container live migration module may iterate through all of the protected container pages one or more times, copying them from the protected container memory, assuming the pages are not in the write working set. If an attempted write to a write protected page is detected, the speculative copy of the page stored outside of the protected container memory may be invalidated. Otherwise, the page may be retained in the protected container in the write protected but valid and readable state. After the first iteration, the protected container live migration module may optionally iterate through the remaining uncopied protected container pages one or more additional times, e.g., a predetermined number of times, until the number of such remaining uncopied pages becomes small enough (e.g., decreases below a threshold number or proportion), or according to some other desired criteria. Typically, after a few iterations, the set of remaining uncopied protected container pages should approximately converge to the write working set of pages that tend to be written during the migration window or timeframe. Advantageously, write protecting the protected container pages, and allowing them to be valid and readable in the protected container memory, may help to reduce the downtime of the application or domain needed to achieve the live migration of the protected container. Rather than invalidating the pages, the pages may still remain readable in the protected container memory. Effectively, approximately all pages outside of the write working set may be copied from the protected container memory, instead of just those pages outside of the larger set of pages representing the working set (e.g., which additionally includes those pages the application tends to read from during the migration window). This may help to reduce the number of pages that need to be copied after the application is de-scheduled or stopped, which may help to reduce downtime.
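For illustration, this iterative pre-copy strategy might be driven by privileged software roughly as in the following C sketch; the page-set bookkeeping and the wrappers around the write-protect, copy, and invalidate operations are hypothetical placeholders, not instructions defined by this disclosure.

#include <stdbool.h>
#include <stddef.h>

#define MAX_ITERATIONS   8    /* hypothetical cap on pre-copy passes */
#define REMAINING_TARGET 32   /* stop early once the uncopied set is this small */

/* Hypothetical wrappers around migration bookkeeping and the underlying
 * protected container operations. */
extern size_t uncopied_pages(void);                /* pages not yet copied out */
extern size_t next_uncopied_page(void);            /* pick one such page */
extern void   write_protect_page(size_t page);     /* page stays valid and readable */
extern bool   store_encrypted_copy(size_t page);   /* false if a write was detected */
extern void   invalidate_outside_copy(size_t page);
extern void   mark_copied(size_t page);

void precopy_phase(void)
{
    for (int iter = 0;
         iter < MAX_ITERATIONS && uncopied_pages() > REMAINING_TARGET;
         iter++) {
        size_t n = uncopied_pages();
        while (n--) {
            size_t page = next_uncopied_page();
            write_protect_page(page);
            if (store_encrypted_copy(page))
                mark_copied(page);             /* speculatively migrated */
            else
                invalidate_outside_copy(page); /* in the write working set: retry later */
        }
    }
    /* Whatever remains approximates the write working set; it is copied after
     * the application or domain is stopped (block 432 onward). */
}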
Referring again to Figure 4, at block 432, execution of the application that is using the protected container on the source computer system may be stopped. In various embodiments, the application may optionally be a VM or an OS container, although this is not required.

At block 433, copies of any remaining uncopied pages, and optionally any remaining uncopied special pages, may be copied from the protected container memory of the source computer system to encrypted copies in regular memory of the destination computer system, after the application or domain that was using the protected container has stopped running. In some embodiments, one or more special pages may optionally be used, although the scope of the invention is not so limited. As one example, one or more special pages may optionally be used to store migration capable keys. As another example, one or more special pages may optionally be used to store version information for pages of the protected container. For example, in an Intel® SGX implementation embodiment, an SDCS page may be used to store migration capable keys, and one or more version array pages (VAX) may be used to store version information for pages written out of the protected container memory but retained in a write protected, valid, and readable state. In other embodiments, a single special page may be used to store both migration capable keys and version information, or migration capable keys and/or version information may instead be stored in the protected container pages themselves, or such information may optionally be stored in other data structures, etc.

At block 434, an application or domain that is to use the protected container on the destination computer system may start running. In various embodiments, the application may optionally be a VM or an OS container, although this is not required.

At block 435, encrypted pages, and optionally encrypted special pages, may be loaded from the regular memory of the destination computer system to unencrypted pages in a protected container memory of the destination computer system. For example, a protected container may be created and initialized in the protected container memory of the destination computer system, and then pages may be loaded into the protected container. In some embodiments, the special pages may optionally be loaded into the protected container memory before regular pages are loaded into the protected container memory, although the scope of the invention is not so limited.

Figure 5 is a block flow diagram of an embodiment of a method 538 of write protecting pages of a protected container memory, and storing encrypted copies of the write protected pages outside of the protected container memory, while leaving the write protected pages valid and readable in the protected container memory. In some embodiments, the method may be controlled or managed by a protected container live migration module of a source computer system, while migrating a protected container to a destination computer system, although the scope of the invention is not so limited. For example, in some embodiments, the method may optionally be performed at block 431 of Figure 4, although the scope of the invention is not so limited. The method may either be performed with the computer system of Figure 1, or a similar or different computer system. The characteristics and components described for the computer system 100 may optionally be used in the method, but are not required.
In some embodiments, the method of Figure 5 may be performed with one or more instructions (e.g., the instructions 114 of Figure 1 and/or the instructions 614 of Figure 6), of an instruction set of a processor, which are operative to support live migration of the protected container.

At block 539, pages of the protected container memory may be write protected. The pages of the protected container memory may either be stored in their secure storage within the protected container memory in system memory or may be cached or otherwise stored in secure storage in caches or other on-die storage locations of a processor that is operative to keep them secure therein. In some embodiments, each page may be write protected responsive to execution or performance of a single instruction of the instruction set. In some embodiments, each page may be write protected by configuring a write protection indication in a protected container page metadata structure (e.g., PCPMS 109) to indicate that the page is write protected. For example, in an embodiment of an Intel® SGX implementation, enclave pages may be write protected by configuring (e.g., setting) a write protect (WP) bit in an enclave page cache map (EPCM). In some embodiments, while write protected, the pages may be valid and readable. In some embodiments, each write protected page may also be made read only in paging structures (e.g., extended paging tables), although the scope of the invention is not so limited. In some embodiments, the modification of the paging structures may optionally be outside of the confines of the instruction that is used to modify the write protection indication in the PCPMS. In various aspects, from an overall algorithmic perspective, the pages may be write protected one at a time, or in batches, or all pages in the protected container may be write protected at one time.

At block 540, a processor may ensure or verify that there are no writable references to the write protected pages. This may be done in different ways in different embodiments. In some embodiments, this may be implemented with a TLB tracking mechanism. TLBs may cache translations from virtual to physical addresses associated with pages. Permissions associated with accessing those pages, such as read and write permissions, may also be cached in the TLBs. These permissions cached in the TLBs reflect the permissions at the time the translations were performed when the page table walks were performed. On memory access requests, if the MMU finds the translation in the TLBs, it may bypass the page table lookup and use the translation, as well as the permissions, which are cached in the TLBs. That is, the MMU may use permissions from the TLBs, which could be outdated, instead of looking up the permissions in the page table and/or checking the permissions in the PCPMS (e.g., the EPCM). In a case where a page has permissions in the TLB that indicate it is writable, a thread could write to the page even after the page has been write protected (e.g., as described at block 539). To enhance security, the method may ensure that such cached permissions are flushed from the TLB(s). This may be done before making the copy of the write protected page. This may be done in different ways in different embodiments. As one example, epoch counters may be used to determine when a thread may have access to such a TLB mapping, and when such a TLB mapping must have been cleared (e.g., the thread must have observed the write protection of the page).
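A purely software illustration of the epoch-counter idea mentioned above follows; none of these names correspond to a real hardware interface, and a hardware tracker would of course operate quite differently:

```c
#include <stdatomic.h>
#include <stdbool.h>

#define MAX_THREADS 64

/* Illustrative epoch tracker; entry epoch 0 is reserved to mean
 * "thread is not currently executing inside the container". */
static atomic_ulong global_epoch = 1;
static atomic_ulong entry_epoch[MAX_THREADS];

void container_enter(int tid) {
    atomic_store(&entry_epoch[tid], atomic_load(&global_epoch));
}

void container_exit(int tid) {
    /* TLB entries created during container execution are assumed to be
     * evicted on exit, so any stale writable mapping is gone. */
    atomic_store(&entry_epoch[tid], 0);
}

/* Call after write protecting a page: advance the epoch and remember it. */
unsigned long track_start(void) {
    return atomic_fetch_add(&global_epoch, 1) + 1;
}

/* True once no thread that entered before track_start() is still inside,
 * i.e., no thread can still hold a pre-write-protect writable mapping. */
bool tracking_done(unsigned long tracked)
{
    for (int t = 0; t < MAX_THREADS; t++) {
        unsigned long e = atomic_load(&entry_epoch[t]);
        if (e != 0 && e < tracked)
            return false;
    }
    return true;
}
```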
Once all threads have observed the write protection of the page, it may be ensured that there are no more writable references to write protected pages. In an Intel® SGX implementation embodiment, an ETRACK instruction and associated mechanism may optionally be used to ensure that write mappings to a page being migrated are removed from the TLB prior to writing the page out to main memory. By way of example, the ETRACK instruction may be used to configure micro-architectural tracker logic to detect when all logical processors executing in an enclave at the time of execution of the ETRACK instruction have exited the enclave and therefore all the TLB entries have been evicted (e.g., TLB entries created during enclave execution may be evicted when exiting the enclave).

At block 541, encrypted copies of the write protected pages of the protected container memory may be generated. In some embodiments, the encrypted copy of each write protected page may be generated responsive to execution or performance of a single instruction of the instruction set. In some embodiments, a cryptographic unit (e.g., cryptographic unit 118) of a processor, which may be used to encrypt protected container pages when they are written out of the processor to anywhere in system memory including into the protected container memory and regular system memory outside of the protected container memory, may be used. In some embodiments, different encryptions may optionally be used to store encrypted protected container pages to the regular memory versus the protected container memory, although this is not required.

At block 542, the encrypted copies of the write protected pages of the protected container memory (e.g., the encrypted copies generated at block 541) may be stored out of the protected container memory (e.g., in regular memory of the same computer system or otherwise in non-protected container memory). In some embodiments, the encrypted copies may be stored in the regular memory, while the corresponding write protected pages remain valid and readable in the protected container memory. In some embodiments, an application or domain using the associated protected container may be allowed to read from the write protected pages in the protected container memory, after the encrypted copies have been stored out of the protected container memory. Another alternative possible approach would be to invalidate the pages (e.g., instead of write protecting them and allowing them to remain valid and readable), although this may have potential drawbacks, such as not allowing the pages to be read from and/or reducing the overall number of pages that can be stored out of the protected container memory, while the application is running. In some embodiments, the encrypted copy of each write protected page may be stored out of the protected container memory (e.g., to the regular memory) responsive to the execution or performance of a single instruction of the instruction set. In some embodiments, the encrypted copies may be stored out of the protected container memory (e.g., to the regular memory) only after ensuring or verifying that there are no writable references to the write protected pages (e.g., after block 540).

At block 543, a determination may be made whether an attempt to write to a write protected page of the protected container memory has been detected. If no such attempted write has been detected (i.e., "no" is the determination at block 543), then the method may advance to block 545.
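Combining blocks 539 through 542 for a single page, one possible (hypothetical) flow is sketched below; the helper names, including the tracking helpers reused from the sketch above, are illustrative only:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-ins for the operations at blocks 539-542. */
void write_protect_page(uint64_t page);                  /* block 539 */
unsigned long track_start(void);                         /* block 540 */
bool tracking_done(unsigned long epoch);                 /* block 540 */
void encrypt_page(uint64_t page, uint8_t out[4096]);     /* block 541 */
void store_outside_container(const uint8_t copy[4096]);  /* block 542 */

/* Copy one page out while leaving it valid and readable inside the
 * protected container memory. */
void copy_page_out(uint64_t page)
{
    uint8_t copy[4096];

    write_protect_page(page);            /* page remains valid + readable */

    unsigned long epoch = track_start(); /* flush stale writable mappings */
    while (!tracking_done(epoch))
        ;                                /* spin; real code would yield */

    encrypt_page(page, copy);            /* block 541 */
    store_outside_container(copy);       /* block 542: to regular memory */
}
```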
Taking the "no" branch at block 543 may allow the page to remain write protected, but valid and readable, in the protected container memory, while only one true non-dirty copy of the page exists, thereby allowing security to be maintained.

Conversely, if such an attempt to write to a write protected page has been detected (i.e., "yes" is the determination at block 543), then the method may advance to block 544. Representatively, such an attempt may potentially be detected by logic of the processor (e.g., responsive to an extended page table violation), and responsive thereto the logic may signal a fault. At block 544, the write protected page may be write-unprotected (e.g., the page may be made writable), and any encrypted copies outside of the protected container memory (e.g., in the regular memory of the source computer system, or in the regular memory of the destination computer system) may be invalidated. Invalidating the copies outside of the protected container memory may help to ensure security, such as, for example, by ensuring that there is only one true copy of a page (e.g., that the contents of the encrypted copy and the page in the protected container memory do not diverge). In some embodiments, the page may be write-unprotected, and the encrypted copies outside of the protected container memory may be invalidated, responsive to the execution or performance of a single instruction of the instruction set. In some embodiments, the page may be write unprotected by configuring a write protection indication in a protected container page metadata structure (e.g., PCPMS 109) to indicate that the page is write unprotected. For example, in an embodiment of an Intel® SGX technology implementation, each page may be write unprotected by configuring (e.g., clearing) a write protect (WP) bit in an enclave page cache map (EPCM). In some embodiments, the page may also be made readable and writable in paging structures (e.g., extended paging tables), although the scope of the invention is not so limited. In some embodiments, the modification of the paging structures may be outside of the confines of the instruction that is used to modify the write protection indication in the PCPMS.

The method may advance from either block 543 or block 544 to block 545. At block 545, a determination may be made whether or not to repeat the method. If the method is to be repeated (i.e., "yes" is the determination at block 545), the method may revisit block 539. Alternatively, the method may end. The determination of whether to repeat the method may be performed in different ways in different embodiments. For example, the method may be performed a predetermined or configurable number of times (e.g., the determination may involve a loop counter with a threshold). As another example, the determination may involve determining whether additional pages were still being write protected (e.g., at block 539) and stored in the immediately prior iteration, or whether more pages were write protected (e.g., at block 539) and stored in the immediately prior iteration than were write unprotected (e.g., at block 544), or the like. Alternatively, the method may optionally be performed only once and block 545 may optionally be omitted.

It is to be appreciated that this is just one illustrative example of a method. Other methods may include a subset of the illustrated blocks. For example, an alternate method may include only block 539. Another alternate method may include blocks 540, 541, and 542. Yet another alternate method may include only block 544.
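The fault-driven path of blocks 543 and 544 may be sketched as follows, again with hypothetical helper names; note that the copies outside the protected container memory are invalidated before the page is made writable, so a stale copy can never pass for the dirtied page:

```c
#include <stdint.h>

/* Hypothetical stand-ins for the operations at block 544, invoked when
 * an attempted write to a write protected page is detected (block 543). */
void invalidate_outside_copies(uint64_t page); /* e.g., change its version */
void clear_write_protect_bit(uint64_t page);   /* make the page writable   */
void resume_faulting_thread(void);

void on_write_protect_fault(uint64_t page)
{
    /* Invalidate first, so the dirtied in-container page becomes the
     * only true copy before any new write can land. */
    invalidate_outside_copies(page);
    clear_write_protect_bit(page);
    resume_faulting_thread();
}
```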
Various other combinations of the blocks are also contemplated. Also, additional operations may optionally be added to the method. Moreover, some operations or blocks may optionally be overlapped (e.g., blocks 540 and 541 may be overlapped), or performed in a different order (e.g., block 541 may be performed before block 540, block 543 may be performed continuously throughout the method, etc.).

Figure 6 is a block diagram of an embodiment of a processor 612 that is operative to perform an embodiment of a set of one or more instructions 614 to support live migration of protected containers. As previously mentioned, the instructions may also be used for other purposes besides supporting live migration of protected containers. In some embodiments, the processor may be a general-purpose processor (e.g., a general-purpose microprocessor or central processing unit (CPU) of the type used in desktop, laptop, or other computers). Alternatively, the processor may be a special-purpose processor. Examples of suitable special-purpose processors include, but are not limited to, cryptographic processors, network processors, communications processors, co-processors, embedded processors, digital signal processors (DSPs), and controllers (e.g., microcontrollers). The processor may have any of various complex instruction set computing (CISC) architectures, reduced instruction set computing (RISC) architectures, very long instruction word (VLIW) architectures, hybrid architectures, other types of architectures, or have a combination of different architectures (e.g., different cores may have different architectures).

The set of the one or more instructions 614 to support live migration of a protected container may be instructions of an instruction set of the processor. The instructions may represent macroinstructions, machine code instructions, or assembly language instructions. In the illustrated example embodiment, the instructions include four different instructions, although the scope of the invention is not so limited. Specifically, the instructions include a protected container page write protect instruction 650, a protected container page encrypt and store encrypted copy outside of protected container memory instruction 651, an optional protected container page write unprotect and page copy invalidation instruction 652, and an optional protected container version array page create instruction 653.

In other embodiments, fewer or more than four instructions may optionally be used. For example, in an alternate embodiment, any single one of these instructions may optionally be included and the others optionally omitted. As one example, only a protected container page write protect instruction may optionally be included. As another example, only a protected container page encrypt and store encrypted copy outside of protected container memory instruction may optionally be included. As yet another example, only a protected container page write protect instruction and a protected container page encrypt and store encrypted copy outside of protected container memory instruction may optionally be included. Also, one or more other instructions may optionally be added, along with one or more of the four instructions shown.

In still other embodiments, the functionality of these four instructions may be apportioned differently.
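For orientation only, the four instructions may be pictured as a simple dispatch; the tags and handler names below are invented for this sketch and do not correspond to real encodings:

```c
/* Illustrative tags for the four instructions 650-653. */
enum pc_insn {
    PC_PAGE_WRITE_PROTECT,       /* instruction 650 */
    PC_PAGE_ENCRYPT_STORE_OUT,   /* instruction 651 */
    PC_PAGE_WRITE_UNPROTECT,     /* optional instruction 652 */
    PC_VERSION_PAGE_CREATE,      /* optional instruction 653 */
};

/* Hypothetical per-instruction handlers. */
void do_write_protect(void *page);
void do_encrypt_store_out(void *page, void *dst);
void do_write_unprotect(void *page);
void do_version_page_create(void *page);

void dispatch(enum pc_insn op, void *page, void *dst)
{
    switch (op) {
    case PC_PAGE_WRITE_PROTECT:     do_write_protect(page);          break;
    case PC_PAGE_ENCRYPT_STORE_OUT: do_encrypt_store_out(page, dst); break;
    case PC_PAGE_WRITE_UNPROTECT:   do_write_unprotect(page);        break;
    case PC_VERSION_PAGE_CREATE:    do_version_page_create(page);    break;
    }
}
```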
As one specific example of such apportioning, the operations of the protected container page write protect instruction 650 and those of the protected container page encrypt and store encrypted copy outside of protected container memory instruction 651 may optionally be combined into a single instruction. As another specific example, the encrypt operation of the protected container page encrypt and store encrypted copy outside of protected container memory instruction 651 may optionally instead be apportioned to and performed by the protected container page write protect instruction 650. As yet another specific example, the operations of the protected container version array page create instruction 653 and those of either the protected container page write protect instruction 650 or the protected container page encrypt and store encrypted copy outside of protected container memory instruction 651 may optionally be combined into a single instruction. These are just a few illustrative examples. Other variations will be apparent to those skilled in the art and having the benefit of the present disclosure.

Referring again to Figure 6, the processor includes a decode unit or decoder 654. The decode unit may receive and decode any of the one or more instructions 614. The decode unit may output one or more relatively lower-level instructions or control signals (e.g., one or more microinstructions, micro-operations, micro-code entry points, decoded instructions or control signals, etc.), which reflect, represent, and/or are derived from the received relatively higher-level instructions. In some embodiments, the decode unit may include one or more input structures (e.g., port(s), interconnect(s), an interface) to receive an instruction, instruction recognition and decode logic coupled therewith to recognize and decode the received instruction, and one or more output structures (e.g., port(s), interconnect(s), an interface) coupled therewith to output the lower-level instruction(s) or control signal(s). The decode unit may be implemented using various different mechanisms including, but not limited to, microcode read only memories (ROMs), look-up tables, hardware implementations, programmable logic arrays (PLAs), and other mechanisms suitable to implement decode units.

The processor also includes a set of registers 667 (e.g., general-purpose registers). Each of the registers may represent an on-die storage location that is operative to store data. The registers may represent architecturally-visible or architectural registers that are visible to software and/or a programmer and/or are the registers indicated by instructions of the instruction set of the processor to identify operands. These architectural registers are contrasted to other non-architectural registers in a given microarchitecture (e.g., temporary registers, reorder buffers, retirement registers, etc.). The registers may be implemented in different ways in different microarchitectures and are not limited to any particular type of design. Examples of suitable types of registers include, but are not limited to, dedicated physical registers, dynamically allocated physical registers using register renaming, and combinations thereof. In some embodiments, the registers may be used to store input and/or output data associated with the instructions.
In one aspect, the registers may include any of the general-purpose registers shown and described for any of Figures 7-9 and may have any of the inputs and/or outputs described therefor.

An execution unit 655 is coupled with the decode unit 654 and the registers 667. The execution unit may receive the one or more decoded or otherwise converted instructions or control signals that represent and/or are derived from any one of the instructions being decoded (e.g., any one of the instructions 614). The execution unit is operative in response to and/or as a result of the instruction being decoded (e.g., in response to one or more instructions or control signals decoded from the instruction) to perform one or more operations 668 to achieve the operations of the instruction. As shown, the execution unit may be coupled with, or otherwise in communication with, other logic of the processor 616 and/or pages or structures in a memory 606 to implement the operations of the particular instruction being performed. In some embodiments, the execution unit may be any of the execution units shown and described for any of Figures 7-9 and may perform any of the operations described therefor.

The execution unit and/or the processor may include specific or particular logic (e.g., transistors, integrated circuitry, or other hardware potentially combined with firmware (e.g., instructions stored in non-volatile memory) and/or software) that is operative to perform such operations in response to and/or as a result of the instructions (e.g., in response to one or more instructions or control signals decoded from the instructions). In some embodiments, the execution unit may include one or more input structures (e.g., port(s), interconnect(s), an interface) to receive source data, circuitry or logic coupled therewith to receive and process the source data, and one or more output structures (e.g., port(s), interconnect(s), an interface) coupled therewith to effect the operations.

To avoid obscuring the description, a relatively simple processor has been shown and described. However, the processor may optionally include other processor components. For example, various different embodiments may include various different combinations and configurations of the components shown and described for any of Figures 10-12. All of the components of the processor may be coupled together to allow them to operate as intended.

Figure 7 is a block diagram of an embodiment of a processor 712 that is operative to perform an embodiment of a protected container page write protect instruction. The processor includes a decode unit 754, an execution unit 755, and optionally a set of registers 767 (e.g., general-purpose registers). Unless specified, or otherwise clearly apparent, these components may optionally have some or all of the characteristics of the correspondingly named components of Figure 6. To avoid obscuring the description, the different and/or additional characteristics will primarily be described without repeating all of the optionally common characteristics. Moreover, in some embodiments, the processor 712 may be used in the source computer system 100 of Figure 1. Alternatively, the processor 712 may be used in a similar or different computer system.

The decode unit 754 may receive the protected container page write protect instruction 750.
In some embodiments, the protected container page write protect instruction may be a privileged-level instruction that can only be performed at a privileged level of execution, but not at an unprivileged or user level of execution. For example, the protected container page write protect instruction may only be performed at a ring 0 level of privilege, for example, by an operating system, a virtual machine monitor (VMM), or other privileged system software, but not by user-level application software. In some embodiments, the instruction may indicate additional instruction specification information 774, although this is not required. For example, in an Intel® SGX implementation embodiment, the instruction may be a privileged-level ENCLS instruction, which may be executed at ring 0 privilege level, and may implicitly indicate general-purpose register EAX as having a leaf function index value of "28h" to indicate an EPC page write protect operation, although the scope of the invention is not so limited. Although the instruction may be used in various different ways not related to migration of protected containers, in one aspect such privileged software may potentially use the protected container page write protect instruction to write protect a protected container page, before an encrypted copy of the page is stored out of the protected container memory, in conjunction with live migration of a protected container.

In some embodiments, the protected container page write protect instruction may indicate a protected container page 708. The page may be indicated in different ways in different embodiments. In some embodiments, the instruction may explicitly specify (e.g., through one or more fields or a set of bits), or otherwise indicate (e.g., implicitly indicate), a register (e.g., one of the general-purpose registers 767) that is to store an effective address or other indication 775 of the protected container page. As one example, the instruction may optionally have a register specification field to specify a register that is to have the effective address to indicate the protected container page. As another example, the instruction may optionally implicitly or impliedly indicate an implicit register that is to have the effective address to indicate the protected container page. Upon receiving the instruction, it may be understood, although not expressed explicitly, that the implicit register is to be used to find the effective address. As one specific example, in an Intel® SGX implementation embodiment, the implicit general-purpose register RCX may store the effective address of an enclave page cache (EPC) page.

The processor may be operative to combine the effective address with other address information in order to obtain the address of the protected container page. For example, the data segment (DS) may be used to create a linear or virtual address. As shown, in some embodiments, the protected container memory 706 may optionally be in a system memory 770 coupled with the processor. Alternatively, the protected container memory may optionally be one or more caches or other on-die storage of the processor. As one specific example, in an Intel® SGX implementation embodiment, the protected container memory may be an enclave page cache (EPC).

An execution unit 755 is coupled with the decode unit 754 and the optional general-purpose registers 767.
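The address formation described above (an effective address from an implicit register combined with segment information) may be pictured as follows; the field names and the page-alignment mask are illustrative assumptions, not part of any described embodiment:

```c
#include <stdint.h>

/* Illustrative register state; only the fields used here are shown. */
struct regs {
    uint64_t rcx;     /* e.g., effective address of the container page */
    uint64_t ds_base; /* base of the data segment (DS) */
};

/* Combine the effective address with the segment base, then align the
 * result to an assumed 4 KiB page boundary. */
static inline uint64_t page_linear_address(const struct regs *r)
{
    return (r->ds_base + r->rcx) & ~0xFFFull;
}
```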
The execution unit, in response to the protected container page write protect instruction, may be operative to write protect the indicated protected container page 708 of the protected container memory 706, which may either be in system memory or an on-die cache or other on-die storage. In some embodiments, the execution unit may have a write protect unit that is operative to write protect the page by configuring a write protection indicator 777, which corresponds to the indicated page, to indicate that the page is write protected. Different types of write protection indicators are suitable for different embodiments. In some embodiments, the write protection indicator may be implemented as one or more bits in the protected container memory, in an access control protected data structure in processor reserved memory, in an access control protected register or other structure of the processor, or the like. As shown, in some embodiments, the write protection indicator may optionally be included in a protected container page security metadata structure (PSPMS) 709 that is to store metadata (e.g., security and access control metadata) for pages in the protected container memory. In some embodiments, the PSPMS may have different write protection indicators for each corresponding different page in the protected container memory. According to one possible convention, the write protection indicator may be a single bit that may be configured to have a first binary value (e.g., set to binary one) to indicate that the corresponding indicated protected container page 708 is write protected, or a second different binary value (e.g., cleared to binary zero) to indicate that the corresponding indicated protected container page is not write protected. As one specific example, in an Intel® SGX implementation embodiment, the execution unit may write protect an enclave page of an enclave page cache (EPC) by setting a write protect (WP) bit in an enclave page cache map (EPCM) to indicate that the page as well as non-supervisory fields in the EPCM are write protected, although the scope of the invention is not so limited. Representatively, when the WP bit of the EPCM is set, the page miss handler (PMH) unit and/or the translation lookaside buffer (TLB) may signal a fault (e.g., a write protect fault, page fault, etc.) if a write access to the page is attempted.

In some embodiments, the instruction may optionally explicitly specify or otherwise indicate a metadata structure 711 that is to be used to store metadata 778 for the indicated protected container page 708 of the protected container memory. The metadata structure may be indicated in different ways in different embodiments. In some embodiments, the instruction 750 may explicitly specify (e.g., through one or more fields or a set of bits), or otherwise indicate (e.g., implicitly indicate), a register (e.g., one of the general-purpose registers 767) that is to store an effective address or other indication 776 of the metadata structure. As one example, the instruction may optionally have a register specification field to specify a register that is to have the effective address to indicate the metadata structure. As another example, the instruction may optionally implicitly or impliedly indicate an implicit register that is to have the effective address to indicate the metadata structure.
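The write protection indicator configuration described above may be pictured with a small sketch; the bit position and entry layout here are invented for illustration and are loosely analogous to, but not, the actual EPCM layout:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative per-page metadata entry with a write protect (WP) flag. */
#define WP_BIT (1u << 0)

struct page_meta_entry {
    uint32_t flags;
};

static inline void set_write_protect(struct page_meta_entry *e)
{
    e->flags |= WP_BIT;   /* page becomes write protected, still readable */
}

static inline void clear_write_protect(struct page_meta_entry *e)
{
    e->flags &= ~WP_BIT;  /* page becomes writable again */
}

/* A write access would consult the entry; if the flag is set, the
 * access should instead raise a write protect fault. */
static inline bool write_allowed(const struct page_meta_entry *e)
{
    return (e->flags & WP_BIT) == 0;
}
```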
As one specific example of indicating the metadata structure, in an Intel® SGX implementation embodiment, the implicit general-purpose register RBX may store the effective address of a page metadata (PGMD) structure, although the scope of the invention is not so limited. In other embodiments, other data structures may optionally be used to store the metadata (e.g., a PCPMS). The execution unit, responsive to the instruction, may be operative to store metadata 778 pertaining to the indicated protected container page in the metadata structure 711. As shown, the execution unit may have an optional metadata store unit 782 to store the metadata in the metadata structure. Alternatively, in other embodiments, such storage of metadata may optionally be omitted (e.g., may not be needed, may be performed by another instruction, etc.).

In some embodiments, the execution unit, before write protecting the protected container page, may optionally be operative to perform one or more security or verification checks. In some embodiments, the execution unit may include a security check unit 783 to check or verify that a migration capable key structure 779, which has migration capable keys 780, has control over the indicated protected container page 708. For example, in an Intel® SGX implementation embodiment, the execution unit may be operative to determine that a current SGX domain control structure (SDCS), which may have migration capable SGX keys, counters, and crypto-metadata, has control over the protected container page, although the scope of the invention is not so limited. Alternatively, in other embodiments, such security or verification checks may optionally be omitted (e.g., may not be needed, may be performed by another instruction, etc.).

Figure 8 is a block diagram of an embodiment of a processor 812 that is operative to perform an embodiment of a protected container page encrypt and store encrypted copy outside of protected container memory instruction 851. The processor includes a decode unit 854, an execution unit 855, and optionally a set of registers 867 (e.g., general-purpose registers). Unless specified, or otherwise clearly apparent, these components may optionally have some or all of the characteristics of the correspondingly named components of Figure 6. To avoid obscuring the description, the different and/or additional characteristics will primarily be described without repeating all of the optionally common characteristics. Moreover, in some embodiments, the processor 812 may be used in the source computer system 100 of Figure 1. Alternatively, the processor 812 may be used in a similar or different computer system.

The decode unit 854 may receive the instruction 851. In some embodiments, the instruction may be a privileged-level instruction that can only be performed at a privileged level of execution, but not at an unprivileged or user level of execution. In some embodiments, the instruction may indicate additional instruction specification information 874, although this is not required. For example, in an Intel® SGX implementation embodiment, the instruction may be a privileged-level ENCLS instruction, and may implicitly indicate general-purpose register EAX as having a leaf function index value of "2Ah" to indicate an operation that stores an encrypted page from the EPC while leaving the page readable in the EPC, although the scope of the invention is not so limited.
Although the instruction may be used in various ways, in one aspect such privileged software may potentially use the instruction to store an encrypted copy of a write protected page (e.g., one write protected by a previous write protect instruction as disclosed herein) out of protected container memory (e.g., to regular memory), while a protected container is in operation, in conjunction with live migration of the protected container.

In some embodiments, the instruction may indicate a write protected page 808 of a protected container memory 806. The write protected page may be indicated in different ways in different embodiments. In some embodiments, the instruction may explicitly specify (e.g., through one or more fields or a set of bits), or otherwise indicate (e.g., implicitly indicate), a register (e.g., one of the general-purpose registers 867) that is to store an effective address or other indication 875 of the write protected page 808. As one specific example, in an Intel® SGX implementation embodiment, the implicit general-purpose register RCX may store the effective address of the write protected EPC page that is to be stored out of the EPC (e.g., to regular memory). As shown, in some embodiments, the protected container memory 806 may optionally be in a system memory 870 coupled with the processor (e.g., in a hardware reserved portion of the system memory). Alternatively, the protected container memory 806 may optionally be one or more caches or other on-die storage of the processor. A combination is also suitable. As one specific example, in an Intel® SGX implementation embodiment, the protected container memory may be an enclave page cache (EPC).

An execution unit 855 is coupled with the decode unit and the optional general-purpose registers 867. The execution unit, in response to the instruction 851, may be operative to ensure that there are no writable references to the write protected page of the protected container memory, while the page of the hardware enforced protected container memory has a write protected state. As shown, the execution unit may include a writable reference tracker unit 890 that may be coupled with TLB tracking logic 891. The writable reference tracker unit may be operative to communicate with the TLB tracking logic to ensure that there are no writable references to the write protected page of the protected container memory. This may optionally be performed as described elsewhere herein, or by other approaches. In some embodiments, an ETRACK instruction and associated mechanism may optionally be used to ensure that write mappings to a page being migrated are removed from the TLB prior to writing the page out to main memory. By way of example, the ETRACK instruction may be used to configure micro-architectural tracker logic to detect when all logical processors executing in an enclave at the time of execution of the ETRACK instruction have exited the enclave and therefore all the TLB entries have been evicted (e.g., TLB entries created during enclave execution may be evicted when exiting the enclave).

The execution unit, in response to the instruction 851, may also be operative to encrypt a copy of the indicated write protected page 808 of the protected container memory.
As shown, the execution unit may include an encryption unit 818 (e.g., which may be a part of the cryptographic unit 118 of Figure 1) to perform such encryption.

The execution unit may further be operative to store the encrypted copy 887 of the write protected page of the protected container memory to a destination location 886, which is outside of the protected container memory 806, after it has been ensured that there are no writable references to the page of the hardware enforced protected container memory. In some embodiments, the destination location may be in regular memory, such as, for example, memory used to store user-level applications (e.g., Internet browsers, database applications, word processing applications, etc.). In some embodiments, the write protected page in the protected container memory may be in processor reserved memory, but the encrypted copy may be stored outside of the processor reserved memory. In some embodiments, the instruction may explicitly specify or otherwise indicate an indication 885 of the destination storage location, such as, for example, by having a specified or implicit register to store this indication. The execution unit may further be operative to leave the write protected page 808 in the protected container memory 806 in the write protected state, which is also valid and readable (e.g., as opposed to being invalidated), after the encrypted copy 887 of the write protected page has been stored to the indicated destination location 886 outside the protected container memory (e.g., in regular memory that is non-processor reserved memory). Allowing the write protected page to remain valid and readable may offer advantages as described elsewhere herein, such as, for example, allowing the page to be read from, reducing the downtime following live migration, etc.

In some embodiments, the execution unit may optionally be operative to store version information 889 for the write protected page 808 that is stored out of the protected container memory. For example, the execution unit may include a page version storage unit 892 to store the version information 889. In some embodiments, the instruction may indicate a version page 888 that is to store version information for pages in the protected container memory. Alternatively, instead of a version page, another structure may be used to store the version information. In some embodiments, the instruction may explicitly specify, or otherwise indicate, a register (e.g., one of the general-purpose registers 867) that is to store an effective address or other indication 876 of the version page 888 or version structure. As one specific example, in an Intel® SGX implementation embodiment, the implicit general-purpose register RDX may store the effective address of a VAX page slot, although the scope of the invention is not so limited. In one aspect, a VAX page may represent a dedicated type of version array page that is used to store version array information for write protected pages stored out of the EPC and is different from VA pages used to store version array information for invalidated pages stored out of the EPC. In other embodiments, version information may be stored in other types of pages, or in other structures (e.g., protected structures in memory, protected structures on-die, etc.). The version information may help to protect against replay of the encrypted page.
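The version bookkeeping described above may be pictured as follows; the slot layout and the convention that a zero version means "no valid copy" are illustrative assumptions only:

```c
#include <stdint.h>

/* Illustrative version array page: one version value per page stored
 * out of the protected container memory while remaining readable. */
#define SLOTS_PER_VERSION_PAGE 512  /* e.g., 4096 bytes / 8-byte slots */

struct version_page {
    uint64_t slot[SLOTS_PER_VERSION_PAGE]; /* 0 means "no valid copy" */
};

/* Record a fresh version when an encrypted copy is stored outside the
 * protected container memory. A reload would later be accepted only if
 * the copy's version still matches, which defeats replay of an older
 * encrypted copy of the same page. */
static inline uint64_t record_version(struct version_page *vp,
                                      unsigned slot, uint64_t next_version)
{
    vp->slot[slot] = next_version;
    return next_version;
}
```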
Alternatively, it may not be intended or desired to provide such protections against replay for a given implementation, and such version information may optionally be omitted.

In some embodiments, the execution unit, before storing the encrypted copy 887 in the location outside of the protected container memory, may optionally be operative to perform one or more security or verification checks. As shown, the execution unit may include a security check unit 883. In some embodiments, the security check unit may be operative to check or verify that a migration capable key structure 879 that has migration capable keys 880 has control over the write protected container page 808 to be stored out of the protected container memory. In some embodiments, this may also optionally include checking or verifying that the migration capable key structure has control over the version page 888 or other version storage structure which is to be used to store the version information. For example, in an Intel® SGX implementation embodiment, this may include determining that a current SGX domain control structure (SDCS), which may have migration capable SGX keys, counters, and crypto-metadata, has control over the write protected page and the VAX page, although the scope of the invention is not so limited. Alternatively, in other embodiments, such security or verification checks may optionally be omitted (e.g., may not be needed, may be performed by another instruction, etc.).

In some embodiments, the instruction may optionally explicitly specify or otherwise indicate a metadata structure 811 that is to be used to store metadata 878 for the stored out write protected page 808. In some embodiments, the instruction 851 may explicitly specify, or otherwise indicate, a register (e.g., one of the general-purpose registers 867) that is to store an effective address or other indication 884 of the metadata structure 811. As one specific example, in an Intel® SGX implementation embodiment, the instruction may indicate a paging crypto metadata (PCMD) structure, although the scope of the invention is not so limited. The execution unit, responsive to the instruction, may be operative to store metadata 878 pertaining to the stored out write protected page in the metadata structure. As shown, the execution unit may include an optional metadata storage unit 882. By way of example, the metadata storage unit may be operative to store metadata, such as, for example, a page type, read-write-execute permission status, pending status, modified status, and the like, and various combinations thereof, corresponding to the indicated page. Such metadata may potentially be used to ensure the integrity of the metadata when the page is reloaded (e.g., in a migrated protected container). Alternatively, in other embodiments, such storage of metadata may optionally be omitted (e.g., may not be needed, may be performed by another instruction, etc.).

Figure 9 is a block diagram of an embodiment of a processor 912 that is operative to perform an embodiment of a protected container page write unprotect and encrypted page copy invalidation instruction 952. The processor includes a decode unit 954, an execution unit 955, and optionally a set of registers 967 (e.g., general-purpose registers). Unless specified, or otherwise clearly apparent, these components may optionally have some or all of the characteristics of the correspondingly named components of Figure 6.
To avoid obscuring the description, the different and/or additional characteristics will primarily be described without repeating all of the optionally common characteristics. Moreover, in some embodiments, the processor 912 may be used in the source computer system 100 of Figure 1. Alternatively, the processor 912 may be used in a similar or different computer system.

The decode unit 954 may receive the instruction 952. In some embodiments, the instruction may be a privileged-level instruction that can only be performed at a privileged level of execution, but not at an unprivileged or user level of execution. In some embodiments, the instruction may indicate additional instruction specification information 974, although this is not required. For example, in an Intel® SGX implementation embodiment, the instruction may be a privileged-level ENCLS instruction, and may implicitly indicate general-purpose register EAX as having a leaf function index value of "29h" to indicate an EPC page write unprotect operation, although the scope of the invention is not so limited. Although the instruction may be used in various ways, in one aspect such privileged software may potentially use the instruction to resolve a fault on a write protected page (e.g., following an attempted write to a write protected page), while a protected container is in operation, in conjunction with live migration of the protected container.

In some embodiments, the instruction may indicate a write protected page 908 of a protected container. In some embodiments, the instruction may explicitly specify, or otherwise indicate, a register (e.g., one of the general-purpose registers 967) that is to store an effective address or other indication 975 of the write protected page 908. As one specific example, in an Intel® SGX implementation embodiment, the implicit general-purpose register RCX may store the effective address of the write protected EPC page. As shown, in some embodiments, the protected container memory 906 may optionally be in a system memory 970 coupled with the processor (e.g., in a hardware reserved portion of the system memory). Alternatively, the protected container memory 906 may optionally be one or more caches or other on-die storage of the processor. A combination of such approaches is also suitable. As one specific example, in an Intel® SGX implementation embodiment, the protected container memory may be an enclave page cache (EPC).

An execution unit 955 is coupled with the decode unit and the optional general-purpose registers 967. The execution unit, in response to the instruction 952, may be operative to write unprotect the indicated page 908 of the protected container memory, which may either be in system memory or an on-die cache or other on-die storage. In one aspect, this may place the page in a valid and available state in which both reads and writes are permitted. In some embodiments, the execution unit may have a write unprotect unit 993 that is operative to write unprotect the page by configuring a write protection indicator 977, which corresponds to the indicated page 908, to indicate that the page is not write protected. The same types of write protection indicators mentioned above for the write protect instruction are suitable. In some embodiments, the same write protect indicator may be used for both instructions. The write unprotect instruction may perform substantially the opposite configuration of the indicator as the write protect instruction.
As shown, in some embodiments, the write protection indicator may optionally be included in a protected container page security metadata structure (PSPMS) 909 that is to store metadata (e.g., security and access control metadata) for pages in the protected container memory. In an Intel® SGX implementation embodiment, the execution unit may write unprotect a write protected enclave page of an enclave page cache (EPC) by clearing a write protect (WP) bit in an enclave page cache map (EPCM) to indicate that the page as well as non-supervisory fields in the EPCM are write unprotected, although the scope of the invention is not so limited.

In some embodiments, the execution unit may optionally be operative to invalidate copies of the write protected page of the protected container memory that are outside of the protected container memory. In some embodiments, a version 989 corresponding to the write protected page 908 may be used. In some embodiments, the execution unit, in response to the instruction, may be operative to change the version corresponding to the page. As shown, the execution unit may include a version change unit 994 to change the version of the write protected page. As shown, the version may optionally be stored in a version page 988. Alternatively, another protected data structure in memory, or a protected structure on-die, may optionally be used to store the version. By way of example, the execution unit may invalidate an entry corresponding to the write protected page in a migration version array page, for example, by clearing the version in the entry, or otherwise changing the version so that it will not match.

In some embodiments, the instruction may explicitly specify, or otherwise indicate, a register (e.g., one of the general-purpose registers 967) that is to store an effective address or other indication 976 of a migration version array slot that is to have the version 989 of the write protected page to be changed or invalidated. As one specific example, in an Intel® SGX implementation embodiment, the implicit general-purpose register RDX may store the effective address of a migration version array slot, although the scope of the invention is not so limited. In other embodiments, version information may be stored in other types of pages, or in other structures (e.g., protected structures in memory, protected structures on-die, etc.).

Advantageously, this may help to provide additional security in that there may only be one true copy or version of the write protected page. When the write protected page becomes dirtied or modified, other copies of the write protected page may be invalidated so that the dirtied or modified copy of the page is the only true copy or version of the page. This may also help to protect against replay of the encrypted page. Alternatively, it may not be intended or desired to provide such protections for a given implementation, and such invalidation of the copies may optionally be omitted. Moreover, other ways of invalidating the copies of the pages outside of the protected container are also contemplated besides using version information.

In some embodiments, the execution unit, before invalidating the copies of the pages outside of the protected container, may optionally be operative to perform one or more security or verification checks. As shown, the execution unit may include a security check unit 983. In some embodiments, the security check unit may be operative to check or verify that the correct write protected page is invalidated.
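Continuing the illustrative version convention from the sketch above, invalidation by version change may be pictured as follows; once the recorded version is changed, no previously stored encrypted copy can match on reload:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative invalidation on write unprotect: clearing the recorded
 * version (0 was reserved above to mean "no valid copy") ensures that
 * the dirtied page inside the container is the only true copy. */
static inline void invalidate_version(uint64_t *slot)
{
    *slot = 0;
}

/* On reload, an encrypted copy would be accepted only if its version
 * still matches the recorded, nonzero version. */
static inline bool reload_allowed(uint64_t recorded, uint64_t copy_version)
{
    return recorded != 0 && recorded == copy_version;
}
```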
For example, to verify that the correct page is invalidated, the execution unit may be operative to compare the version of the page to the version of the page being invalidated to ensure that the correct version information is being cleared or otherwise changed. The instruction may optionally fail if the page is invalid or if the version does not match the value stored in the version page. Alternatively, in other embodiments, such security or verification checks may optionally be omitted (e.g., may not be needed, may be performed by another instruction, etc.).

In some embodiments, a decoder of a processor may also optionally be operative to decode a protected container version page or structure create instruction (e.g., instruction 653 in Figure 6). In some embodiments, the protected container version page or structure create instruction may explicitly specify, or otherwise indicate, a register (e.g., a general-purpose register) that is to store an effective address or other indication of a page in a protected container memory. In some embodiments, the page may be an invalid page that is available to be used for a new page. The processor may also include an execution unit that may be operative, in response to the instruction, to create a version page or structure to hold version information. In some embodiments, the version page may be created in the protected container memory at the indicated page (e.g., at the indicated effective address). In some embodiments, this may include zeroing out the page. In some embodiments, the version page may be a special type of page dedicated to storing version information for pages for which encrypted copies were stored out of the protected container memory but which remain stored in the protected container memory in a valid and readable state. In some embodiments, the created page may be an empty page into which version information may subsequently be stored (e.g., by a different instruction). In some embodiments, the execution unit, responsive to the instruction, may also initialize or configure one or more entries, fields, or other portions of a protected container page metadata structure (PCPMS), such as, for example, an EPCM. By way of example, this may include indicating that the page is a version type of page, setting the page to valid, and configuring read-write-execute permissions for the page.

Exemplary Core Architectures, Processors, and Computer Architectures

Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; and 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing.
Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.

Exemplary Core Architectures

In-order and out-of-order core block diagram

Figure 10A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. Figure 10B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in Figures 10A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In Figure 10A, a processor pipeline 1000 includes a fetch stage 1002, a length decode stage 1004, a decode stage 1006, an allocation stage 1008, a renaming stage 1010, a scheduling (also known as a dispatch or issue) stage 1012, a register read/memory read stage 1014, an execute stage 1016, a write back/memory write stage 1018, an exception handling stage 1022, and a commit stage 1024.

Figure 10B shows processor core 1090 including a front end unit 1030 coupled to an execution engine unit 1050, and both are coupled to a memory unit 1070. The core 1090 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 1090 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.

The front end unit 1030 includes a branch prediction unit 1032 coupled to an instruction cache unit 1034, which is coupled to an instruction translation lookaside buffer (TLB) 1036, which is coupled to an instruction fetch unit 1038, which is coupled to a decode unit 1040. The decode unit 1040 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 1040 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc.
In one embodiment, the core 1090 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 1040 or otherwise within the front end unit 1030). The decode unit 1040 is coupled to a rename/allocator unit 1052 in the execution engine unit 1050.

The execution engine unit 1050 includes the rename/allocator unit 1052 coupled to a retirement unit 1054 and a set of one or more scheduler unit(s) 1056. The scheduler unit(s) 1056 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 1056 is coupled to the physical register file(s) unit(s) 1058. Each of the physical register file(s) units 1058 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 1058 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s) 1058 is overlapped by the retirement unit 1054 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 1054 and the physical register file(s) unit(s) 1058 are coupled to the execution cluster(s) 1060. The execution cluster(s) 1060 includes a set of one or more execution units 1062 and a set of one or more memory access units 1064. The execution units 1062 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 1056, physical register file(s) unit(s) 1058, and execution cluster(s) 1060 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 1064). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 1064 is coupled to the memory unit 1070, which includes a data TLB unit 1072 coupled to a data cache unit 1074 coupled to a level 2 (L2) cache unit 1076.
In one exemplary embodiment, the memory access units 1064 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 1072 in the memory unit 1070. The instruction cache unit 1034 is further coupled to a level 2 (L2) cache unit 1076 in the memory unit 1070. The L2 cache unit 1076 is coupled to one or more other levels of cache and eventually to a main memory.By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 1000 as follows: 1) the instruction fetch 1038 performs the fetch and length decoding stages 1002 and 1004; 2) the decode unit 1040 performs the decode stage 1006; 3) the rename/allocator unit 1052 performs the allocation stage 1008 and renaming stage 1010; 4) the scheduler unit(s) 1056 performs the schedule stage 1012; 5) the physical register file(s) unit(s) 1058 and the memory unit 1070 perform the register read/memory read stage 1014; the execution cluster 1060 performs the execute stage 1016; 6) the memory unit 1070 and the physical register file(s) unit(s) 1058 perform the write back/memory write stage 1018; 7) various units may be involved in the exception handling stage 1022; and 8) the retirement unit 1054 and the physical register file(s) unit(s) 1058 perform the commit stage 1024.The core 1090 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein. In one embodiment, the core 1090 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 1034/1074 and a shared L2 cache unit 1076, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.Specific Exemplary In-Order Core ArchitectureFigures 11A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip.
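As a concrete taste of the packed data instruction set extension support noted above for core 1090 (e.g., AVX2), the following hedged sketch uses the standard Intel intrinsics from immintrin.h, which are a compiler-level interface rather than anything defined by this document. It assumes an AVX2-capable x86 machine and a build flag such as -mavx2.

/* Build (on an AVX2-capable x86 machine): gcc -mavx2 packed_add.c */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    int a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
    int c[8];

    /* One packed-data instruction adds all eight 32-bit lanes at once,
     * the kind of operation multimedia code leans on. */
    __m256i va = _mm256_loadu_si256((const __m256i *)a);
    __m256i vb = _mm256_loadu_si256((const __m256i *)b);
    __m256i vc = _mm256_add_epi32(va, vb);
    _mm256_storeu_si256((__m256i *)c, vc);

    for (int i = 0; i < 8; i++)
        printf("%d ", c[i]);
    printf("\n");
    return 0;
}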
The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.Figure 11A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 1102 and with its local subset of the Level 2 (L2) cache 1104, according to embodiments of the invention. In one embodiment, an instruction decoder 1100 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 1106 allows low-latency accesses to cache memory into the scalar and vector units. While in one embodiment (to simplify the design), a scalar unit 1108 and a vector unit 1110 use separate register sets (respectively, scalar registers 1112 and vector registers 1114) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 1106, alternative embodiments of the invention may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).The local subset of the L2 cache 1104 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 1104. Data read by a processor core is stored in its L2 cache subset 1104 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 1104 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches and other logic blocks to communicate with each other within the chip. Each ring data-path is 1012-bits wide per direction.Figure 11B is an expanded view of part of the processor core in Figure 11A according to embodiments of the invention. Figure 11B includes an L1 data cache 1106A part of the L1 cache 1106, as well as more detail regarding the vector unit 1110 and the vector registers 1114. Specifically, the vector unit 1110 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 1128), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 1120, numeric conversion with numeric convert units 1122A-B, and replication with replication unit 1124 on the memory input. Write mask registers 1126 allow predicating resulting vector writes.Processor with integrated memory controller and graphicsFigure 12 is a block diagram of a processor 1200 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention.
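Before the details of Figure 12, the write-mask predication provided by write mask registers 1126 above can be modeled in a few lines of C. The 16-lane width matches the VPU just described; the function name and the mask encoding are invented for illustration.

#include <stdint.h>
#include <stdio.h>

/* Toy model of write-mask predication on a 16-wide vector unit:
 * lane i of the destination is updated only when mask bit i is set,
 * analogous to how write mask registers 1126 predicate vector writes. */
static void masked_store(int32_t *dst, const int32_t *result,
                         uint16_t mask, int lanes) {
    for (int i = 0; i < lanes; i++)
        if (mask & (1u << i))
            dst[i] = result[i];
}

int main(void) {
    int32_t dst[16] = {0};
    int32_t result[16];
    for (int i = 0; i < 16; i++) result[i] = i * i;

    masked_store(dst, result, 0x00FF, 16);  /* update lanes 0..7 only */
    for (int i = 0; i < 16; i++) printf("%d ", dst[i]);
    printf("\n");
    return 0;
}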
The solid lined boxes in Figure 12 illustrate a processor 1200 with a single core 1202A, a system agent 1210, a set of one or more bus controller units 1216, while the optional addition of the dashed lined boxes illustrates an alternative processor 1200 with multiple cores 1202A-N, a set of one or more integrated memory controller unit(s) 1214 in the system agent unit 1210, and special purpose logic 1208.Thus, different implementations of the processor 1200 may include: 1) a CPU with the special purpose logic 1208 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 1202A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 1202A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput); and 3) a coprocessor with the cores 1202A-N being a large number of general purpose in-order cores. Thus, the processor 1200 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips. The processor 1200 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 1206, and external memory (not shown) coupled to the set of integrated memory controller units 1214. The set of shared cache units 1206 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 1212 interconnects the integrated graphics logic 1208, the set of shared cache units 1206, and the system agent unit 1210/integrated memory controller unit(s) 1214, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 1206 and cores 1202A-N.In some embodiments, one or more of the cores 1202A-N are capable of multi-threading. The system agent 1210 includes those components coordinating and operating cores 1202A-N. The system agent unit 1210 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 1202A-N and the integrated graphics logic 1208. The display unit is for driving one or more externally connected displays.The cores 1202A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 1202A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.Exemplary Computer ArchitecturesFigures 13-21 are block diagrams of exemplary computer architectures.
Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices, are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.Referring now to Figure 13, shown is a block diagram of a system 1300 in accordance with one embodiment of the present invention. The system 1300 may include one or more processors 1310, 1315, which are coupled to a controller hub 1320. In one embodiment, the controller hub 1320 includes a graphics memory controller hub (GMCH) 1390 and an Input/Output Hub (IOH) 1350 (which may be on separate chips); the GMCH 1390 includes memory and graphics controllers to which are coupled memory 1340 and a coprocessor 1345; the IOH 1350 couples input/output (I/O) devices 1360 to the GMCH 1390. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 1340 and the coprocessor 1345 are coupled directly to the processor 1310, and the controller hub 1320 is in a single chip with the IOH 1350.The optional nature of additional processors 1315 is denoted in Figure 13 with broken lines. Each processor 1310, 1315 may include one or more of the processing cores described herein and may be some version of the processor 1200.The memory 1340 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 1320 communicates with the processor(s) 1310, 1315 via a multi-drop bus, such as a frontside bus (FSB), point-to-point interface such as QuickPath Interconnect (QPI), or similar connection 1395.In one embodiment, the coprocessor 1345 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 1320 may include an integrated graphics accelerator.There can be a variety of differences between the physical resources 1310, 1315 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.In one embodiment, the processor 1310 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 1310 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1345. Accordingly, the processor 1310 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect, to coprocessor 1345. Coprocessor(s) 1345 accept and execute the received coprocessor instructions.Referring now to Figure 14, shown is a block diagram of a first more specific exemplary system 1400 in accordance with an embodiment of the present invention.
As shown in Figure 14, multiprocessor system 1400 is a point-to-point interconnect system, and includes a first processor 1470 and a second processor 1480 coupled via a point-to-point interconnect 1450. Each of processors 1470 and 1480 may be some version of the processor 1200. In one embodiment of the invention, processors 1470 and 1480 are respectively processors 1310 and 1315, while coprocessor 1438 is coprocessor 1345. In another embodiment, processors 1470 and 1480 are respectively processor 1310 and coprocessor 1345.Processors 1470 and 1480 are shown including integrated memory controller (IMC) units 1472 and 1482, respectively. Processor 1470 also includes as part of its bus controller units point-to-point (P-P) interfaces 1476 and 1478; similarly, second processor 1480 includes P-P interfaces 1486 and 1488. Processors 1470, 1480 may exchange information via a point-to-point (P-P) interface 1450 using P-P interface circuits 1478, 1488. As shown in Figure 14, IMCs 1472 and 1482 couple the processors to respective memories, namely a memory 1432 and a memory 1434, which may be portions of main memory locally attached to the respective processors.Processors 1470, 1480 may each exchange information with a chipset 1490 via individual P-P interfaces 1452, 1454 using point-to-point interface circuits 1476, 1494, 1486, 1498. Chipset 1490 may optionally exchange information with the coprocessor 1438 via a high-performance interface 1439. In one embodiment, the coprocessor 1438 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.Chipset 1490 may be coupled to a first bus 1416 via an interface 1496. In one embodiment, first bus 1416 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.As shown in Figure 14, various I/O devices 1414 may be coupled to first bus 1416, along with a bus bridge 1418 which couples first bus 1416 to a second bus 1420. In one embodiment, one or more additional processor(s) 1415, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 1416. In one embodiment, second bus 1420 may be a low pin count (LPC) bus. Various devices may be coupled to a second bus 1420 including, for example, a keyboard and/or mouse 1422, communication devices 1427 and a storage unit 1428 such as a disk drive or other mass storage device which may include instructions/code and data 1430, in one embodiment. Further, an audio I/O 1424 may be coupled to the second bus 1420. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 14, a system may implement a multi-drop bus or other such architecture.Referring now to Figure 15, shown is a block diagram of a second more specific exemplary system 1500 in accordance with an embodiment of the present invention.
Like elements in Figures 14 and 15 bear like reference numerals, and certain aspects of Figure 14 have been omitted from Figure 15 in order to avoid obscuring other aspects of Figure 15.Figure 15 illustrates that the processors 1470, 1480 may include integrated memory and I/O control logic ("CL") 1472 and 1482, respectively. Thus, the CL 1472, 1482 include integrated memory controller units and include I/O control logic. Figure 15 illustrates that not only are the memories 1432, 1434 coupled to the CL 1472, 1482, but also that I/O devices 1514 are also coupled to the control logic 1472, 1482. Legacy I/O devices 1515 are coupled to the chipset 1490.Referring now to Figure 16, shown is a block diagram of a SoC 1600 in accordance with an embodiment of the present invention. Similar elements in Figure 12 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In Figure 16, an interconnect unit(s) 1602 is coupled to: an application processor 1610 which includes a set of one or more cores 1202A-N and shared cache unit(s) 1206; a system agent unit 1210; a bus controller unit(s) 1216; an integrated memory controller unit(s) 1214; a set of one or more coprocessors 1620 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1630; a direct memory access (DMA) unit 1632; and a display unit 1640 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 1620 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.Program code, such as code 1430 illustrated in Figure 14, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein.
Such representations, known as "IP cores," may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.Emulation (including binary translation, code morphing, etc.)In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.Figure 17 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 17 shows that a program in a high level language 1702 may be compiled using an x86 compiler 1704 to generate x86 binary code 1706 that may be natively executed by a processor with at least one x86 instruction set core 1716. The processor with at least one x86 instruction set core 1716 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core.
The x86 compiler 1704 represents a compiler that is operable to generate x86 binary code 1706 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 1716. Similarly, Figure 17 shows that the program in the high level language 1702 may be compiled using an alternative instruction set compiler 1708 to generate alternative instruction set binary code 1710 that may be natively executed by a processor without at least one x86 instruction set core 1714 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 1712 is used to convert the x86 binary code 1706 into code that may be natively executed by the processor without an x86 instruction set core 1714. This converted code is not likely to be the same as the alternative instruction set binary code 1710, because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1712 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1706.Components, features, and details described for any of Figures 5-6 may also optionally apply to any of Figures 7-9. Moreover, components, features, and details described for any of the apparatus may also optionally apply to any of the methods, which in embodiments may be performed by and/or with such apparatus. Any of the processors described herein may be included in any of the computer systems disclosed herein (e.g., Figures 13-16). In some embodiments, the computer system may include a dynamic random access memory (DRAM). Alternatively, the computer system may include a type of volatile memory that does not need to be refreshed or flash memory. The instructions disclosed herein may be performed with any of the processors shown herein, having any of the microarchitectures shown herein, on any of the systems shown herein.In the description and claims, the terms "coupled" and/or "connected," along with their derivatives, may have been used. These terms are not intended as synonyms for each other. Rather, in embodiments, "connected" may be used to indicate that two or more elements are in direct physical and/or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical and/or electrical contact with each other. However, "coupled" may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. For example, an execution unit may be coupled with a register and/or a decode unit through one or more intervening components. In the figures, arrows are used to show connections and couplings.The term "and/or" may have been used. As used herein, the term "and/or" means one or the other or both (e.g., A and/or B means A or B or both A and B).In the description above, specific details have been set forth in order to provide a thorough understanding of the embodiments. However, other embodiments may be practiced without some of these specific details.
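The instruction converter 1712 discussed above can be caricatured as a table-driven rewriter from source-ISA opcodes to one or more target-ISA opcodes. Everything below, both opcode sets and the convert routine, is fabricated for illustration; a real converter must also handle registers, flags, memory ordering, and self-modifying code.

#include <stdio.h>

/* Toy flavor of a software instruction converter: each source-ISA
 * opcode is rewritten as one or more target-ISA opcodes. */
enum src_op { SRC_ADD, SRC_LOAD, SRC_STORE };
enum tgt_op { TGT_ADD, TGT_LDR, TGT_STR, TGT_FENCE };

static int convert(enum src_op in, enum tgt_op out[], int max) {
    switch (in) {
    case SRC_ADD:   out[0] = TGT_ADD; return 1;
    case SRC_LOAD:  out[0] = TGT_LDR; return 1;
    case SRC_STORE: /* one source op can become several target ops */
        if (max < 2) return 0;
        out[0] = TGT_STR; out[1] = TGT_FENCE; return 2;
    }
    return 0;
}

int main(void) {
    enum src_op prog[] = {SRC_LOAD, SRC_ADD, SRC_STORE};
    enum tgt_op out[4];
    for (unsigned i = 0; i < sizeof prog / sizeof prog[0]; i++) {
        int n = convert(prog[i], out, 4);
        printf("source op %u -> %d target op(s)\n", i, n);
    }
    return 0;
}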
The scope of the invention is not to be determined by the specific examples provided above, but only by the claims below. In other instances, well-known circuits, structures, devices, and operations have been shown in block diagram form and/or without detail in order to avoid obscuring the understanding of the description. Where considered appropriate, reference numerals, or terminal portions of reference numerals, have been repeated among the figures to indicate corresponding or analogous elements, which may optionally have similar or the same characteristics, unless specified or clearly apparent otherwise.Certain operations may be performed by hardware components, or may be embodied in machine-executable or circuit-executable instructions, that may be used to cause and/or result in a machine, circuit, or hardware component (e.g., a processor, portion of a processor, circuit, etc.) programmed with the instructions performing the operations. The operations may also optionally be performed by a combination of hardware and software. A processor, machine, circuit, or hardware may include specific or particular circuitry or other logic (e.g., hardware potentially combined with firmware and/or software) that is operative to execute and/or process the instruction and store a result in response to the instruction.Some embodiments include an article of manufacture (e.g., a computer program product) that includes a machine-readable medium. The medium may include a mechanism that provides, for example stores, information in a form that is readable by the machine. The machine-readable medium may provide, or have stored thereon, an instruction or sequence of instructions, that if and/or when executed by a machine are operative to cause the machine to perform and/or result in the machine performing one or more operations, methods, or techniques disclosed herein.In some embodiments, the machine-readable medium may include a non-transitory machine-readable storage medium. For example, the non-transitory machine-readable storage medium may include a floppy diskette, an optical storage medium, an optical disk, an optical data storage device, a CD-ROM, a magnetic disk, a magneto-optical disk, a read only memory (ROM), a programmable ROM (PROM), an erasable-and-programmable ROM (EPROM), an electrically-erasable-and-programmable ROM (EEPROM), a random access memory (RAM), a static-RAM (SRAM), a dynamic-RAM (DRAM), a Flash memory, a phase-change memory, a phase-change data storage material, a non-volatile memory, a non-volatile data storage device, a non-transitory memory, a non-transitory data storage device, or the like. The non-transitory machine-readable storage medium does not consist of a transitory propagated signal. In some embodiments, the storage medium may include a tangible medium that includes solid matter.Examples of suitable machines include, but are not limited to, a general-purpose processor, a special-purpose processor, a digital logic circuit, an integrated circuit, or the like. Still other examples of suitable machines include a computer system or other electronic device that includes a processor, a digital logic circuit, or an integrated circuit.
Examples of such computer systems or electronic devices include, but are not limited to, desktop computers, laptop computers, notebook computers, tablet computers, netbooks, smartphones, cellular phones, servers, network devices (e.g., routers and switches), mobile Internet devices (MIDs), media players, smart televisions, nettops, set-top boxes, and video game controllers.Reference throughout this specification to "one embodiment," "an embodiment," "one or more embodiments," "some embodiments," for example, indicates that a particular feature may be included in the practice of the invention but is not necessarily required to be. Similarly, in the description various features are sometimes grouped together in a single embodiment, Figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the Detailed Description are hereby expressly incorporated into this Detailed Description, with each claim standing on its own as a separate embodiment of the invention.EXAMPLE EMBODIMENTSThe following examples pertain to further embodiments. Specifics in the examples may be used anywhere in one or more embodiments.Example 1 is a processor that includes a decode unit to decode an instruction. The instruction is to indicate a page of a protected container memory, and is to indicate a storage location outside of the protected container memory. The processor also includes an execution unit coupled with the decode unit. The execution unit, in response to the instruction, is to ensure that no writable permissions for the page of the protected container memory are cached in the processor, while the page of the protected container memory has a write protected state. The execution unit is also to encrypt a copy of the page of the protected container memory. The execution unit is further to store the encrypted copy of the page to the indicated storage location outside of the protected container memory, after it has been ensured that there are no writable references to the page of the protected container memory.
The execution unit is also to leave the page of the protected container memory in the write protected state, which is also to be valid and readable, after the encrypted copy of the page has been stored to the indicated storage location outside of the protected container memory.Example 2 includes the processor of Example 1, in which the decode unit is to decode the instruction which is to indicate the page of the protected container memory that is already to have the write protected state.Example 3 includes the processor of Example 1, in which the execution unit, in response to the instruction, is to write protect the indicated page of the protected container memory.Example 4 includes the processor of Example 1, in which the decode unit is to decode the instruction which is to indicate the page of the protected container memory, which is to be in a processor reserved memory, and the instruction is to indicate the storage location which is to be outside of the processor reserved memory.Example 5 includes the processor of Example 1, wherein the execution unit is to ensure that there are no writable references to the page of the protected container memory by ensuring they are removed from translation lookaside buffers.Example 6 includes the processor of any one of Examples 1 to 5, in which the execution unit, in response to the instruction, is to store a version of the page having the write protected state in the protected container memory.Example 7 includes the processor of any one of Examples 1 to 5, in which the execution unit, in response to the instruction, is to determine that a migration capable key structure, which is to have one or more migration capable cryptographic keys, has control over the page of the protected container memory prior to the encrypted copy of the page being stored to the indicated storage location.Example 8 includes the processor of any one of Examples 1 to 5, in which the decode unit is to decode the instruction which is to indicate a page metadata structure. The execution unit, in response to the instruction, is to store metadata corresponding to the indicated page in the page metadata structure. The metadata is to include a plurality of a page type, a modification status, a read permission status, a write permission status, and an execution permission status, all corresponding to the indicated page, in the page metadata structure.Example 9 includes the processor of any one of Examples 1 to 5, in which the decode unit is to decode the instruction which is to indicate the page of the protected container memory which is to be an enclave page in an enclave page cache.Example 10 includes the processor of any one of Examples 1 to 5, in which the decode unit is to decode the instruction which is to have an implicit general-purpose register that is to have an indication of the page of the protected container memory.Example 11 includes the processor of any one of Examples 1 to 5, in which the decode unit is to decode the instruction which is to be a privileged-level instruction.Example 12 is a method of performing from one to three machine instructions in a processor to perform operations including write protecting a page of a protected container memory, ensuring that no writable permissions for the page of the protected container memory are cached in the processor, and encrypting a copy of the page of the protected container memory.
The operations also include storing the encrypted copy of the page of the protected container memory to a storage location that is outside of the protected container memory, after said ensuring that there are no writable references to the write protected page of the protected container memory, and leaving the write protected page of the protected container memory in a valid and readable state after said storing the encrypted copy of the page of the protected container memory to the storage location that is outside of the protected container memory.Example 13 includes the method of Example 12, further including reading the write protected page after said storing the encrypted copy of the page to the storage location.Example 14 includes the method of Example 12, in which said write protecting the page includes configuring a write protection indication in a protected container page metadata structure to indicate that the page is write protected, in which the protected container page metadata structure stores security metadata for the write protected page.Example 15 includes the method of Example 14, in which said configuring the write protection indication in the protected container page metadata structure includes setting a write protect bit in an enclave page cache map.Example 16 includes the method of Example 12, further including detecting an attempted write to the write protected page of the protected container memory, write unprotecting the page of the protected container memory, and invalidating the encrypted copy of the page stored in the storage location that is outside of the protected container memory.Example 17 includes the method of Example 12, in which said write protecting is performed in response to performing a first of the machine instructions, and in which said encrypting, said ensuring, said storing, and said leaving are performed in response to performing a second of the machine instructions.Example 18 is a system to process instructions that includes an interconnect, and a processor coupled with the interconnect. The processor is to receive an instruction that is to indicate a page of a protected container memory, and is to indicate a storage location outside of the protected container memory. The processor, in response to the instruction, is to ensure that there are no writable references to the page of the protected container memory, while the page of the protected container memory has a write protected state, and encrypt a copy of the page of the protected container memory. The processor is also to store the encrypted copy of the page to the indicated storage location outside of the protected container memory, after it has been ensured that there are no writable references to the page of the protected container memory, and leave the page of the protected container memory in the write protected state, which is also to be valid and readable, after the encrypted copy of the page has been stored to the indicated storage location outside of the protected container memory. The system also includes a dynamic random access memory (DRAM) coupled with the interconnect.Example 19 includes the system of Example 18, in which the processor is to receive the instruction which is to indicate the page of the protected container memory that is already to have the write protected state.Example 20 is an article of manufacture including a non-transitory machine-readable storage medium. 
The non-transitory machine-readable storage medium stores from one to three machine instructions that if executed by a machine are to cause the machine to perform operations including write protecting a page of a protected container memory, and ensuring that there are no writable references to the write protected page of the protected container memory. The operations also include encrypting a copy of the page of the protected container memory, and storing the encrypted copy of the page of the protected container memory to a storage location that is outside of the protected container memory, after said ensuring that there are no writable references to the write protected page of the protected container memory. The operations also include leaving the write protected page of the protected container memory in a valid and readable state after said storing the encrypted copy of the page of the protected container memory to the storage location that is outside of the protected container memory.Example 21 includes the article of manufacture of Example 20, in which the non-transitory machine-readable storage medium further stores from one to two machine instructions that if executed by a machine are to cause the machine to perform operations including write unprotecting the page of the protected container memory after detecting an attempted write to the write protected page of the protected container memory, and invalidating the encrypted copy of the page stored in the storage location that is outside of the protected container memory.Example 22 is a processor that includes a decode unit to decode a protected container page write protect instruction. The instruction is to indicate a page of a protected container memory. The processor also includes an execution unit coupled with the decode unit. The execution unit, in response to the protected container page write protect instruction, is to write protect the indicated page of the protected container memory.Example 23 includes the processor of Example 22, in which the execution unit is to write protect the indicated page by configuration of a write protection indicator, which corresponds to the indicated page, in a protected container page metadata structure that is to store metadata for the indicated page.Example 24 includes the processor of Example 23, in which the execution unit is to write protect the indicated page by configuration of a write protect bit in an enclave page cache map.Example 25 includes the processor of any one of Examples 22 to 24, in which the execution unit, in response to the instruction, is to determine that a migration capable key structure, which is to have one or more migration capable cryptographic keys, has control over the page of the protected container memory prior to the page being write protected.Example 26 includes the processor of any one of Examples 22 to 24, in which the decode unit is to decode the instruction which is to have an implicit register that is to have an effective address of the page of the hardware enforced protected container memory.Example 27 is a processor that includes a decode unit to decode a protected container page write unprotect and copy invalidation instruction. The instruction is to indicate a page of a protected container memory. The processor also includes an execution unit coupled with the decode unit.
The execution unit, in response to the instruction, is to write unprotect the indicated page of the protected container memory, and invalidate any copies of the page of the protected container memory which are to be outside of the protected container memory.Example 28 includes the processor of Example 27, in which the decode unit is to decode the instruction that is to indicate version information, and in which the execution unit is to invalidate said any copies of the page by changing the indicated version information.Example 29 includes the processor of Example 28, in which version information is to be stored in the protected container memory.Example 30 includes the processor of any one of Examples 27 to 29, in which the execution unit, in response to the instruction, is to write unprotect the indicated page by configuration of the write protection indicator in a protected container page metadata structure that is to store security metadata for pages of the protected container memory.Example 31 includes the processor of any one of Examples 1 to 11, further including an optional branch prediction unit to predict branches, and an optional instruction prefetch unit, coupled with the branch prediction unit, the instruction prefetch unit to prefetch instructions including the instruction. The processor may also optionally include an optional level 1 (L1) instruction cache coupled with the instruction prefetch unit, the L1 instruction cache to store instructions, an optional L1 data cache to store data, and an optional level 2 (L2) cache to store data and instructions. The processor may also optionally include an instruction fetch unit coupled with the decode unit, the L1 instruction cache, and the L2 cache, to fetch the instruction, in some cases from one of the L1 instruction cache and the L2 cache, and to provide the instruction to the decode unit. The processor may also optionally include a register rename unit to rename registers, an optional scheduler to schedule one or more operations that have been decoded from the instruction for execution, and an optional commit unit to commit execution results of the instruction.Example 32 is a processor or other apparatus to perform or operative to perform the method of any one of Examples 12 to 17.Example 33 is a processor or other apparatus that includes means for performing the method of any one of Examples 12 to 17.Example 34 is an article of manufacture that includes an optionally non-transitory machine-readable medium, which optionally stores or otherwise provides an instruction, which if and/or when executed by a processor, computer system, electronic device, or other machine, is operative to cause the machine to perform the method of any one of Examples 12 to 17.Example 35 is a processor or other apparatus substantially as described herein.Example 36 is a processor or other apparatus that is operative to perform any method substantially as described herein.Example 37 is a processor or other apparatus to perform (e.g., that has components to perform or that is operative to perform) any instruction substantially as described herein.Example 38 is a computer system or other electronic device that includes a processor having a decode unit to decode instructions of a first instruction set. The processor also has one or more execution units. The electronic device also includes a storage device coupled with the processor. 
The storage device is to store a first instruction, which may be any of the instructions substantially as disclosed herein, and which is to be of a second instruction set. The storage device is also to store instructions to convert the first instruction into one or more instructions of the first instruction set. The one or more instructions of the first instruction set, when performed by the processor, are to cause the processor to emulate the first instruction. |
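To tie the preceding example embodiments together, the following C sketch models both the write-protect/evict flow of Examples 1 and 12 and the version-based copy invalidation of Examples 16 and 27-28. All structure names, the placeholder XOR cipher, and the metadata layout are invented stand-ins for behavior that the examples assign to processor microcode and to structures such as the enclave page cache map; this is a sketch under those assumptions, not an implementation of the claimed instructions.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Invented metadata layout loosely modeled on the enclave page cache
 * map mentioned in Examples 15 and 24. */
struct page_meta {
    bool valid, readable, write_protected;
    uint64_t version;            /* lives inside the protected memory */
};

struct evicted_copy {
    uint8_t ciphertext[PAGE_SIZE];
    uint64_t stamped_version;    /* version captured at eviction time */
};

static void flush_writable_translations(void) {
    /* Stand-in for ensuring no writable permissions stay cached. */
}

static void toy_encrypt(const uint8_t *in, uint8_t *out, uint8_t key) {
    for (size_t i = 0; i < PAGE_SIZE; i++)
        out[i] = in[i] ^ key;    /* placeholder cipher only */
}

/* Examples 1/12: write protect, ensure no writable references,
 * encrypt a copy, store it outside, leave the page valid and readable. */
static void write_protect_and_evict(struct page_meta *m, const uint8_t *page,
                                    struct evicted_copy *out, uint8_t key) {
    m->write_protected = true;
    flush_writable_translations();
    toy_encrypt(page, out->ciphertext, key);
    out->stamped_version = m->version;
    /* m->valid and m->readable are deliberately left untouched. */
}

/* Examples 16/27-28: on an attempted write, unprotect the page and
 * invalidate outside copies by changing the stored version. */
static void write_unprotect_and_invalidate(struct page_meta *m) {
    m->write_protected = false;
    m->version++;
}

static bool copy_is_usable(const struct page_meta *m,
                           const struct evicted_copy *c) {
    return c->stamped_version == m->version;
}

int main(void) {
    struct page_meta meta = {true, true, false, 7};
    static uint8_t page[PAGE_SIZE];
    static struct evicted_copy copy;
    memset(page, 0xAB, PAGE_SIZE);

    write_protect_and_evict(&meta, page, &copy, 0x5C);
    printf("evicted: protected=%d valid=%d copy usable=%d\n",
           meta.write_protected, meta.valid, copy_is_usable(&meta, &copy));

    write_unprotect_and_invalidate(&meta);  /* e.g., a write was attempted */
    printf("after unprotect: copy usable=%d\n", copy_is_usable(&meta, &copy));
    return 0;
}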
Techniques for encrypting the data in the memory of a computing device are provided. An example method for protecting data in a memory according to the disclosure includes encrypting data associated with a store request using a memory encryption device of the processor to produce encrypted data. Encrypting the data includes: obtaining a challenge value, providing the challenge value to a physically unclonable function module to obtain a response value, and encrypting the data associated with the store request using the response value as an encryption key to generate the encrypted data. The method also includes storing the encrypted data and the challenge value associated with the encrypted data in the memory. |
1.A method for protecting data in a memory includes:The data associated with the storage request is encrypted using a memory encryption device associated with the processor to generate encrypted data, wherein encrypting the data includes:Get the value of the inquiry,Providing the challenge value to a physical non-replicable function module to obtain a response value, andEncrypting the data associated with the storage request using the response value as an encryption key to generate the encrypted data; andThe encrypted data and the challenge values associated with the encrypted data are stored in the memory.2.The method of claim 1, wherein obtaining the challenge value comprises obtaining the challenge value from a random number generator associated with the processor.3.The method of claim 1, wherein obtaining the challenge value comprises obtaining the challenge value from a monotonic counter associated with the processor.4.The method of claim 1, wherein encrypting the data associated with the storage request using the challenge value includes applying an exclusive OR XOR operation to the data and the response associated with the storage request Values to generate the encrypted data.5.The method of claim 1, wherein using the challenge value to encrypt the data associated with the storage request includes applying an XOR XOR operation to the data associated with the storage request, generating the data The response value of the encrypted data, and the address associated with the memory location in which the encrypted data is to be written.6.The method of claim 1, further comprising:Obtaining the encrypted data and the challenge value associated with the encrypted data from the memory in response to a read request; andDecrypting the encrypted data to generate decrypted data, wherein decrypting the data includes:Providing the challenge value to the physical non-replicable function module to obtain a recovered response value, andUsing the recovered response value to decrypt the data associated with the storage request, andThe decrypted data is provided to the processor.7.The method of claim 6, wherein decrypting the data associated with the storage request using the recovered response value comprises applying an exclusive OR XOR operation to the encrypted data and the recovered response value The decrypted data is generated.8.The method of claim 6, wherein decrypting the data associated with the storage request using the recovered response value comprises applying an exclusive OR XOR operation to the encrypted data, the recovered response value, And an address associated with the memory location to which the encrypted data is written to generate the encrypted data.9.A device includes:The device for obtaining the inquiry value;Means for providing the challenge value to a physical non-replicable function module to obtain a response value;An apparatus for encrypting data associated with a storage request using the response value as an encryption key to generate encrypted data; andMeans for storing the encrypted data and the challenge value associated with the encrypted data.10.The apparatus according to claim 9, wherein said means for obtaining said challenge value comprises means for obtaining said challenge value from a random number generator.11.The apparatus according to claim 9, wherein said means for obtaining said challenge value comprises means for obtaining said challenge value from a monotonic counter.12.The apparatus of claim 9, wherein said means for encrypting said data associated 
with said storage request using said challenge value comprises for applying an exclusive OR XOR operation to be associated with said storage request Said data and said response value to generate said encrypted data device.13.The apparatus of claim 9, wherein said means for encrypting said data associated with said storage request using said challenge value comprises for applying an exclusive OR XOR operation to be associated with said storage request Said data, said response value for generating said encrypted data, and means for associating an address with a memory location in which said encrypted data is to be written.14.The apparatus of claim 9, further comprising:Means for obtaining the encrypted data and the challenge value associated with the encrypted data from a memory in which the encrypted data is stored in response to a read request;Means for providing the challenge value to a physical non-replicable function module to obtain a recovered response value;Means for using the recovered response value to decrypt the data associated with the storage request, andMeans for providing the decrypted data to a processor.15.The apparatus of claim 14, wherein the means for decrypting the data associated with the storage request using the recovered response value comprises for applying an exclusive OR XOR operation to the encrypted data And the recovered response value to generate the decrypted data.16.The apparatus of claim 14, wherein the means for decrypting the data associated with the storage request using the recovered response value comprises for applying an exclusive OR XOR operation to the encrypted data The recovered response value, and the device associated with the address to which the encrypted data is written to generate the encrypted data.17.A computing device includes:Processor; anda memory coupled to the processor, andThe processor includes a memory encryption device that is configured to:Get the value of the inquiry;Providing the challenge value to a physical non-replicable function module to obtain a response value;Encrypting the data associated with the storage request received from the processor using the response value as an encryption key to generate encrypted data; andThe encrypted data and the challenge values associated with the encrypted data are stored in the memory.18.The computing device of claim 17, wherein the memory encryption device is configured to obtain the challenge value from a random number generator associated with the processor.19.The computing device of claim 17, wherein the memory encryption device is configured to obtain the challenge value from a monotonic counter associated with the processor.20.The computing device of claim 17, wherein the memory encryption device is configured to encrypt using the challenge value by applying an exclusive OR XOR operation to the data and the response value associated with the storage request The data associated with the storage request to generate the encrypted data.21.The computing device of claim 17, wherein the memory encryption device is configured to generate the response value of the encrypted data by applying an exclusive OR XOR operation to the data associated with the storage request , and the address associated with the memory location in which the encrypted data is to be written, using the challenge value to encrypt the data associated with the storage request.22.The computing device of claim 17, wherein the memory encryption device is further configured to:Obtaining the encrypted data and the 
challenge value associated with the encrypted data from the memory in response to a read request; andThe encrypted data is decrypted to produce decrypted data, wherein the memory encryption device is configured to:Providing the challenge value to the physical non-replicable function module to obtain a recovered response value, andUsing the recovered response value to decrypt the data associated with the storage request, andThe decrypted data is provided to the processor.23.The computing device of claim 22, wherein the memory encryption device is configured to use the recovered response value to decrypt the data by applying an exclusive OR XOR operation to the encrypted data and the recovered response value. The storage of the requested data is requested to generate the decrypted data.24.The computing device of claim 22, wherein the memory encryption device is configured to apply an exclusive OR XOR operation to the encrypted data, the recovered response value, and to be written with the encrypted data The address associated with the memory location therein decrypts the data associated with the storage request using the restored response value to generate the encrypted data.25.A non-transitory computer-readable medium having stored thereon computer-readable instructions for protecting data in a memory, the instructions including instructions configured to cause a computer to perform the following operations:Get the value of the inquiry;Providing the challenge value to a physical non-replicable function module to obtain a response value;Encrypting the data associated with the storage request using the response value as an encryption key to generate encrypted data; andThe encrypted data and the challenge value associated with the encrypted data are stored.26.The non-transitory computer readable medium of claim 25, wherein the instructions configured to cause the computer to obtain the challenge value comprise a configuration configured to cause the computer to obtain the challenge value from a random number generator Instructions.27.The non-transitory computer-readable medium of claim 25, wherein the instructions configured to cause the computer to obtain the challenge value include instructions configured to cause the computer to obtain the challenge value from a monotonic counter .28.The non-transitory computer readable medium of claim 25, wherein the instructions configured to cause the computer to use the challenge value to encrypt the data associated with the storage request include being configured to cause The computer applies an exclusive OR (XOR) operation to the data associated with the storage request and the response value to generate the encrypted data instruction.29.The non-transitory computer readable medium of claim 25, wherein the instructions configured to cause the computer to use the challenge value to encrypt the data associated with the storage request include being configured to cause The computer applies an exclusive-or XOR operation to the data associated with the storage request, generates the response value of the encrypted data, and is associated with a memory location in which the encrypted data is to be written The address of the instruction.30.The non-transitory computer readable medium of claim 25, further comprising instructions configured to cause the computer to perform the following operations:Obtaining the encrypted data and the challenge value associated with the encrypted data from a memory in which the encrypted data is stored in response to a read 
request;
provide the challenge value to the physically unclonable function (PUF) module to obtain a recovered response value;
decrypt the data associated with the storage request using the recovered response value; and
provide the decrypted data to a processor. |
Physically unclonable function assisted memory encryption device technology
Background
The contents of the memory of a computing device are vulnerable to attack by a malicious party who may attempt to gain unauthorized access to those contents and/or to gain control of the computing device by subverting the code flow of a program being executed by a processor of the computing device. Some attempts have been made to encrypt data stored in the memory of a computing device by relying on one or more encryption keys stored in or built into the processor of the computing device, but such approaches are vulnerable to attacks in which an attacker is able to recreate the key and break the encryption, and/or to reverse engineering.
Summary of the Invention
An example method for protecting data in a memory according to the present invention includes encrypting data associated with a storage request using a memory encryption device of a processor to generate encrypted data. Encrypting the data includes obtaining a challenge value, providing the challenge value to a physically unclonable function (PUF) module to obtain a response value, and encrypting the data associated with the storage request using the response value as an encryption key to generate the encrypted data. The method also includes storing the encrypted data and the challenge value associated with the encrypted data in a memory.
Implementations of this method may include one or more of the following features. Obtaining the challenge value may include obtaining the challenge value from a random number generator associated with the processor. Encrypting the data associated with the storage request using the challenge value may include applying an exclusive-OR (XOR) operation to the data associated with the storage request and the response value to generate the encrypted data. Encrypting the data associated with the storage request using the challenge value may include applying an exclusive-OR (XOR) operation to the data associated with the storage request, the response value, and the address associated with the memory location to which the encrypted data is to be written. The encrypted data and the challenge value associated with the encrypted data may be obtained from the memory in response to a read request, and the encrypted data may be decrypted to produce decrypted data. Decrypting the data may include providing the challenge value to a strong physically unclonable function module to obtain a recovered response value, and decrypting the data associated with the storage request using the recovered response value. The decrypted data may be provided to the processor. Decrypting the data associated with the storage request using the recovered response value may include applying an exclusive-OR (XOR) operation to the encrypted data and the recovered response value to generate the decrypted data. Decrypting the data associated with the storage request using the recovered response value may include applying an exclusive-OR (XOR) operation to the encrypted data, the recovered response value, and the address associated with the memory location to which the encrypted data was written to generate the decrypted data.
An example apparatus according to the invention includes means for encrypting data associated with a storage request using a memory encryption device of a processor to generate encrypted data.
The means for encrypting the data associated with the storage request include means for obtaining a challenge value, means for providing the challenge value to a physically unclonable function (PUF) module to obtain a response value, and means for encrypting the data associated with the storage request using the response value as an encryption key to generate the encrypted data. The apparatus also includes means for storing the encrypted data and the challenge value associated with the encrypted data in a memory. The means for obtaining the challenge value may include means for obtaining the challenge value from a random number generator associated with the processor.
Implementations of this apparatus may include one or more of the following features. The means for encrypting the data associated with the storage request using the challenge value may include means for applying an exclusive-OR (XOR) operation to the data associated with the storage request and the response value to generate the encrypted data. The means for encrypting the data associated with the storage request using the challenge value may include means for applying an exclusive-OR (XOR) operation to the data associated with the storage request, the response value, and the address associated with the memory location in which the encrypted data is to be written. The apparatus may include means for obtaining the encrypted data and the challenge value associated with the encrypted data from the memory in response to a read request, and means for decrypting the encrypted data to generate decrypted data. The means for decrypting the data may include means for providing the challenge value to a strong physically unclonable function module to obtain a recovered response value, and means for decrypting the data associated with the storage request using the recovered response value. The apparatus may include means for providing the decrypted data to a processor. The means for decrypting the data associated with the storage request using the recovered response value may include means for applying an exclusive-OR (XOR) operation to the encrypted data and the recovered response value to generate the decrypted data. The means for decrypting the data associated with the storage request using the recovered response value may include means for applying an exclusive-OR (XOR) operation to the encrypted data, the recovered response value, and the address associated with the memory location to which the encrypted data was written to generate the decrypted data.
An example computing device according to the present invention includes a processor, a memory, and a memory encryption device. The memory encryption device is configured to encrypt data associated with a storage request received from the processor to generate encrypted data. When encrypting the data, the memory encryption device is configured to obtain a challenge value, provide the challenge value to a physically unclonable function (PUF) module to obtain a response value, and encrypt the data associated with the storage request using the response value as an encryption key to generate the encrypted data. The memory encryption device is also configured to store the encrypted data and the challenge value associated with the encrypted data in the memory.
The memory encryption device may be configured to obtain the challenge value from a random number generator associated with the processor.
The memory encryption device may be configured to encrypt the data associated with the storage request using the challenge value by applying an exclusive-OR (XOR) operation to the data associated with the storage request and the response value to generate the encrypted data. The memory encryption device may be configured to encrypt the data associated with the storage request using the challenge value by applying an XOR operation to the data associated with the storage request, the response value, and an address associated with a memory location in which the encrypted data is to be written. The memory encryption device may be further configured to obtain the encrypted data and the challenge value associated with the encrypted data from the memory in response to a read request, and to decrypt the encrypted data to generate decrypted data. When decrypting the data, the memory encryption device may be configured to provide the challenge value to a strong physically unclonable function module to obtain a recovered response value, and to decrypt the data associated with the storage request using the recovered response value. The memory encryption device may be configured to provide the decrypted data to the processor. The memory encryption device may be configured to decrypt the data associated with the storage request using the recovered response value by applying an exclusive-OR (XOR) operation to the encrypted data and the recovered response value to generate the decrypted data. The memory encryption device may be configured to decrypt the data associated with the storage request using the recovered response value by applying an exclusive-OR (XOR) operation to the encrypted data, the recovered response value, and the address associated with the memory location to which the encrypted data was written to generate the decrypted data.
An example non-transitory computer-readable medium according to the present invention has stored thereon computer-readable instructions for protecting data in a memory. The instructions are configured to cause a computer to obtain a challenge value, provide the challenge value to a physically unclonable function (PUF) module to obtain a response value, encrypt data associated with a storage request using the response value as an encryption key to generate encrypted data, and store the encrypted data and the challenge value associated with the encrypted data.
Implementations of this non-transitory computer-readable medium may include one or more of the following features. The instructions configured to cause the computer to obtain the challenge value may include instructions configured to cause the computer to obtain the challenge value from a random number generator. The instructions configured to cause the computer to obtain the challenge value may include instructions configured to cause the computer to obtain the challenge value from a monotonic counter. The instructions configured to cause the computer to encrypt the data associated with the storage request using the challenge value may include instructions configured to cause the computer to apply an exclusive-OR (XOR) operation to the data associated with the storage request and the response value to generate the encrypted data.
The instructions configured to cause the computer to encrypt the data associated with the storage request using the challenge value may include instructions configured to cause the computer to apply an exclusive-OR (XOR) operation to the data associated with the storage request, the response value, and the address associated with the memory location in which the encrypted data is to be written to generate the encrypted data. The instructions may be configured to cause the computer to obtain the encrypted data and the challenge value associated with the encrypted data from a memory in which the encrypted data is stored in response to a read request, provide the challenge value to the physically unclonable function module to obtain a recovered response value, decrypt the data associated with the storage request using the recovered response value, and provide the decrypted data to a processor.
Description of the drawings
FIG. 1 is a block diagram of a computing device 100 that can be used to implement the techniques disclosed herein.
FIG. 2 is a flowchart of an example process for protecting data in a memory according to the techniques discussed herein.
FIG. 3 is a flowchart of an example process for encrypting data according to the techniques disclosed herein.
FIG. 4 is a flowchart of an example process for obtaining challenge values according to the techniques discussed herein.
FIG. 5 is a flowchart of an example process for encrypting data according to the techniques disclosed herein.
FIG. 6 is a flowchart of an example process for encrypting data according to the techniques discussed herein.
FIG. 7 is a flowchart of an example process for decrypting data according to the techniques disclosed herein.
FIG. 8 is a flowchart of an example process for decrypting data according to the techniques disclosed herein.
FIG. 9 is a flowchart of an example process for decrypting data according to the techniques disclosed herein.
FIG. 10 is a flowchart of an example process for decrypting data according to the techniques disclosed herein.
Detailed description
Techniques are disclosed for protecting data in a memory of a computing device using a memory encryption device (MED) that provides strong protection for data stored in the memory. The techniques discussed herein use the MED to encrypt data before the data is stored in the memory of the computing device. The MED may use a physically unclonable function (PUF) module to generate a key to be used by the MED to encrypt data transmitted across the bus and/or stored in a memory of the computing device. The encryption key is never transmitted across the data bus, stored with the encrypted data, or stored in the chip. Instead, a challenge value is used to obtain a response value from the PUF module, and that response value can be used as the encryption key to encrypt a particular set of data. The challenge value is stored along with the encrypted data, and the MED can use the challenge value to recover the encryption key that was used to produce the encrypted data. Even if an attacker were able to obtain the challenge value associated with a particular portion of the encrypted data, the attacker would still only be able to obtain the key associated with that particular challenge-response pair from the PUF. The MED may be configured to use a different challenge for each portion of the data to be encrypted.
For example, the MED may be configured such that each block of data is encrypted with a different key supplied by the PUF module, and the challenge used to recover that key may be stored in memory along with the encrypted block of data. When the processor needs an encrypted block of data, the MED may retrieve the encrypted block and the challenge value from the memory, obtain the encryption key from the PUF by providing the challenge value to the PUF, and decrypt the block of encrypted data.
The MED used in the techniques disclosed herein is an improvement over conventional MEDs, which rely on security by obscurity to ensure data confidentiality in internal and/or external memory and on the bus. Conventional MEDs use a private key embedded in the silicon of the chip to encrypt the data. This conventional method is vulnerable to cryptanalysis attacks. Cryptanalysis can be used to expose one or more keys embedded in the silicon of the chip, and once an attacker has such keys, the attacker can decrypt the data encrypted by the MED. The use of a PUF in the techniques disclosed herein does not suffer from these disadvantages because the keys used by the MED are not stored in silicon and are generated by the PUF as needed.
Example hardware
FIG. 1 is a block diagram of a computing device 100 that can be used to implement the techniques disclosed herein. The computing device may be used to at least partially implement the processes illustrated in FIGS. 2-10. The computing device 100 includes a CPU 105, a memory encryption device (MED) 110, a physically unclonable function (PUF) module 115, a challenge value generator 125, and a memory 130. The example computing device 100 illustrated in FIG. 1 is merely an example to illustrate the concepts discussed herein. The techniques discussed herein may be implemented on a computing device that has additional components not described herein and/or components in place of those included in the example illustrated in FIG. 1.
A central processing unit (CPU) 105 (also referred to herein as a processor) includes electronic circuitry for executing computer program instructions. The CPU 105 may include components to perform various actions based on computer program instructions, including basic arithmetic, logic operations, control operations, and input/output (I/O) operations. The CPU 105 may be configured to receive storage instructions that cause the CPU 105 to store data in the memory 130 and read instructions that cause the CPU 105 to retrieve data stored in the memory 130.
The MED 110 may be configured to encrypt data to be stored in the memory 130 and/or sent across the data bus 135, and to store the encrypted data together with the challenge value associated with the encrypted data. The MED 110 may implement the encryption and decryption processes illustrated in FIGS. 2-10. The MED 110 may be configured to perform an encryption step in response to a request to store data from the CPU 105 (e.g., where the CPU 105 provides data 160 to the MED 110). The MED 110 may encrypt the data 160 and output the encrypted data 165, which may be stored in the memory 130 by sending the encrypted data 165 and the challenge value 145 associated with the encrypted data 165 to the memory 130 across the data bus 135. The encrypted data 165 and the challenge value 145 associated with the encrypted data 165 may be stored at the memory location 170 in the memory 130. In the example illustrated in FIG. 1, there is only a single instance of the encrypted data 165 and the challenge value 145 associated with that single instance of encrypted data, to simplify the description of the concepts disclosed herein.
However, the MED 110 may store multiple instances of encrypted data 165 and the challenge values associated with each of those instances of encrypted data 165.
The MED 110 may be configured to operate on data in blocks, such that each particular block of data is encrypted using a key associated with that block of data. The encryption key may be obtained by presenting the challenge value 145 to the PUF module 115 to obtain the response value 155. The MED 110 may use all or part of this response value as the encryption key to be used to encrypt the data 160. The MED 110 may be configured to use various encryption techniques. For example, the MED 110 may be configured to encrypt the data 160 by applying an exclusive-OR (XOR) operation to the data 160 and the response value, or a portion thereof, received from the PUF module 115. Encrypting the data 160 using the XOR algorithm in this context may provide strong encryption protection because the key may be obtained from the PUF module 115 using the challenge value 145 provided by the challenge value generator 125 (discussed below), so that each block of memory, or other section of the memory to be encrypted, is selectively encrypted with its own effectively random key. The MED 110 may also be configured to encrypt the data 160 using other encryption algorithms (e.g., Advanced Encryption Standard (AES) algorithms or other encryption algorithms; the techniques are not limited to only XOR or AES algorithms).
The PUF module 115 can be implemented using various technologies. In one example implementation, the PUF module 115 may include a plurality of ring oscillators (ROs). The plurality of ROs may be enabled simultaneously and their outputs may be sent to two or more switches (multiplexers). The challenge value acts as an input to the switches, causing each switch to select a single RO from the multiple ROs. The challenge values sent to the switches can be designed so that each switch selects a different RO. Even though an attempt may have been made in manufacturing to make each of the selected ROs identical, each selected RO may still have a slightly different resonant frequency associated with it due to minor manufacturing variations at the semiconductor level. The response value 155 may be generated by a pairwise comparison of the frequencies of the selected ring oscillators as measured/stored by a pair of counters. For example, if the first counter detects a higher frequency than the second counter, a logic "1" may be generated; otherwise a logic "0" may be generated. In this way, the comparison performed represents a challenge/response mechanism in which the selected RO pair is the challenge value and the RO frequency comparison result is the response value. The ring oscillator implementation is only one example of an implementation type that can be used to implement the PUF module 115.
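As a rough, non-normative illustration of the ring-oscillator mechanism just described, the following Python sketch models a toy RO PUF in software. The class name, the oscillator count, the Gaussian frequency spread, and the pairing scheme are all illustrative assumptions; a real PUF derives its behavior from silicon variation, which the seeded random number generator here merely stands in for.

import random

class ToyRingOscillatorPUF:
    """Toy software model of a ring-oscillator PUF (illustrative only)."""

    def __init__(self, num_ros=16, seed=None):
        # The seed stands in for per-device manufacturing variation.
        rng = random.Random(seed)
        self.freqs = [1e9 + rng.gauss(0.0, 1e6) for _ in range(num_ros)]

    def respond(self, challenge_pairs):
        # challenge_pairs: sequence of (i, j) RO indices selected by the challenge.
        # Each pair contributes one response bit from a frequency comparison.
        bits = 0
        for i, j in challenge_pairs:
            bits = (bits << 1) | (1 if self.freqs[i] > self.freqs[j] else 0)
        return bits

puf = ToyRingOscillatorPUF(seed=42)
challenge = [(0, 1), (2, 3), (4, 5), (6, 7)]
print(puf.respond(challenge))  # same device + same challenge -> same response

The same challenge presented to the same simulated device always yields the same response, while a device built with different variation (a different seed in this toy model) would generally yield a different one.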
Other technologies may be used to implement the PUF module 115, based on components of the CPU 105, the memory 130, and/or physical features of the computing device 100 that are difficult to predict, easy to evaluate, and that reliably provide consistent results.
The MED 110 may also be configured to access the encrypted data 165 stored in the memory 130 and the challenge value 145 associated with the encrypted data 165, and to decrypt the encrypted data 165 to recover the original unencrypted data 160. The MED 110 may implement the decryption processes illustrated in FIGS. 7-10. The MED 110 may be configured to perform a decryption step in response to a read data request from the CPU 105 (e.g., where the CPU 105 provides the address of the data to be read to the MED 110). The MED 110 may be configured to access the encrypted data 165 at the memory location 170 in the memory 130 and the challenge value 145 associated with the encrypted data 165. The memory location 170 corresponds to the memory location of the data requested in the read data request. Since each instance of the encrypted data 165 is written to a separate memory location in the memory 130, the memory location 170 associated with an instance of the encrypted data 165 will differ for each instance of encrypted data 165 stored in the memory 130. The MED 110 may recover the encryption key used to produce the encrypted data 165 using the challenge value 145 retrieved from the memory location 170. The MED 110 may provide the challenge value 145 to the PUF module 115 to obtain a response value from the PUF module 115. Assuming that the challenge value was not changed or corrupted while in the memory 130, the PUF module 115 should provide a recovered response value that is the same as the response value 155 used to encrypt the data. The MED 110 may select all or part of the recovered response value to be used as a key to decrypt the encrypted data 165. The MED 110 may be configured to select the same portion of the recovered response value as the portion selected from the response value 155 and/or to perform the same operations as those performed on the response value 155, in order to regenerate the key used to encrypt the encrypted data 165.
The computing device may also include a challenge value generator 125. The challenge value generator 125 may include a random number generator (RNG) that may be configured to provide a random number to the MED 110. The random number may be used by the MED 110 as the challenge value 145 to be submitted to the PUF module 115 to obtain the response value 155 used to encrypt the data 160 from the CPU 105. The challenge value generator 125 may alternatively include a monotonic counter that provides a unique value each time it is read, and the MED 110 may be configured to read the counter value from the monotonic counter and use it as the challenge value for obtaining an encryption key. The MED 110 may also use other types of challenge value generators to generate the challenge values to be presented to the PUF module 115. The size of the challenge value may vary and may depend on the size of the memory 130 for which data is to be encrypted. The challenge value may include a sufficient number of bits to ensure that each block of the memory 130 may be protected with a unique challenge value.
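To make the store and load paths described above concrete, here is a minimal, hypothetical sketch that reuses the ToyRingOscillatorPUF class from the earlier sketch and uses XOR as the cipher. The class and method names are invented for illustration, and a challenge is modeled as a random list of RO index pairs; none of this is the disclosed hardware design.

import secrets

class MemoryEncryptionDeviceSketch:
    """Sketch of the MED flow: key from the PUF, challenge stored with ciphertext."""

    def __init__(self, puf, num_ros=16, pairs_per_challenge=16):
        self.puf = puf
        self.num_ros = num_ros
        self.pairs = pairs_per_challenge
        self.memory = {}  # address -> (encrypted_word, challenge)

    def _new_challenge(self):
        # Stands in for the challenge value generator (RNG or monotonic counter).
        return [(secrets.randbelow(self.num_ros), secrets.randbelow(self.num_ros))
                for _ in range(self.pairs)]

    def store(self, address, word):
        challenge = self._new_challenge()
        key = self.puf.respond(challenge)               # response value used as the key
        self.memory[address] = (word ^ key, challenge)  # the key itself is never stored

    def load(self, address):
        encrypted, challenge = self.memory[address]
        key = self.puf.respond(challenge)  # replay the challenge to re-derive the key
        return encrypted ^ key

med = MemoryEncryptionDeviceSketch(ToyRingOscillatorPUF(seed=42))
med.store(0x1000, 0xBEEF)
assert med.load(0x1000) == 0xBEEF

Note how only the ciphertext and the challenge are kept in memory; an attacker who reads both still lacks the device-specific challenge-to-response mapping needed to reconstruct the key.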
For clarity, the MED 110, the PUF module 115, and the challenge value generator 125 have each been described as components separate from the CPU 105. However, one or more of the MED 110, the PUF module 115, and the challenge value generator 125 may be implemented as a component of the CPU 105.
Example implementation
FIG. 2 is a flowchart of an example process for protecting data in a memory according to the techniques discussed herein. The process illustrated in FIG. 2 may be implemented by the computing device 100 illustrated in FIG. 1. Unless otherwise specified, the memory encryption device 110 of the computing device 100 may provide means for performing the various stages of the process illustrated in FIG. 2.
The processor's memory encryption device may be used to encrypt the data associated with a storage request to generate encrypted data (stage 205). The MED 110 may receive a storage request to store the unencrypted data 160 from the CPU 105. The unencrypted data 160 may comprise a block of data or may comprise a differently sized portion of data to be stored in the memory 130 of the computing device 100. The MED 110 may be configured to submit the challenge value 145 to the PUF module 115 to obtain the response value 155 from the PUF module 115. The MED 110 may use all or a portion of the response value 155 as the encryption key for encrypting the data 160. FIG. 3 illustrates an example process that the MED 110 may use to encrypt the data 160. The MED 110 may be configured to encrypt the data 160 using various encryption algorithms such as an XOR encryption algorithm, an AES algorithm, and/or other encryption algorithms.
The encrypted data and the challenge value associated with the encrypted data may be stored in a memory of the computing device (stage 210). The encrypted data 165 may be provided to the memory 130 via the data bus 135 for storage at the memory location 170. The challenge value used to obtain the key from the PUF module 115 may also be stored along with the encrypted data 165 at the memory location 170 of the memory 130. The challenge value 145 associated with the encrypted data may be used to recover, from the PUF module 115, the encryption key needed to decrypt the data. Only the challenge value 145 is stored with the encrypted data 165. Thus, even if an attacker were able to access the memory location 170 to obtain the encrypted data 165 and the challenge value 145, the challenge value 145 by itself is insufficient to decrypt the encrypted data 165; and because the encryption key was generated by the PUF module 115, the attacker is unlikely to be able to predict the encryption key derived from the response value 155 from the challenge value 145 alone.
FIG. 3 is a flowchart of an example process for encrypting data according to the techniques disclosed herein. The process illustrated in FIG. 3 may be used to implement stage 205 of the process illustrated in FIG. 2. The process illustrated in FIG. 3 may be implemented by the computing device 100 illustrated in FIG. 1. Unless otherwise specified, the memory encryption device 110 of the computing device 100 may provide means for performing the various stages of the process illustrated in FIG. 3.
A challenge value may be obtained (stage 305). The challenge value 145 is a value to be provided to the PUF module 115; the PUF module 115 generates a response value 155 in response to the challenge value 145.
The MED 110 may be configured to obtain a new challenge value each time the MED 110 receives a storage request from the CPU 105. The challenge value may be a random number and may be obtained from the challenge value generator 125. The MED 110 may also be configured to use other techniques for generating challenge values.
The challenge value may be provided to a physically unclonable function (PUF) module to obtain a response value (stage 310). The MED 110 may provide the challenge value 145 to the PUF module 115 to obtain the response value 155. The nature of the PUF module 115 makes it very difficult to predict the response value 155 obtained from the PUF module 115 based on the challenge value 145.
The data associated with the storage request may be encrypted using the response value as an encryption key to generate encrypted data (stage 315). The MED 110 may be configured to apply an encryption algorithm to the data 160 using at least a portion of the response value 155 as an encryption key to generate the encrypted data 165. The MED 110 may be configured to apply different encryption techniques for encrypting data. FIGS. 5 and 6 provide examples of processes in which the MED 110 applies an XOR encryption algorithm to encrypt the data 160. The MED 110 may be configured to apply other types of encryption algorithms (e.g., AES algorithms) to the data 160 using at least a portion of the response value 155 as an encryption key.
FIG. 4 is a flowchart of an example process for obtaining challenge values according to the techniques discussed herein. The process illustrated in FIG. 4 may be used to implement stage 305 of the process illustrated in FIG. 3. The process illustrated in FIG. 4 may be implemented by the computing device 100 illustrated in FIG. 1. Unless otherwise specified, the memory encryption device 110 of the computing device 100 may provide means for performing the various stages of the process illustrated in FIG. 4.
A challenge value may be requested from the random number generator (stage 405). As discussed above, the challenge value generator 125 may include a random number generator, and the MED 110 may be configured to request a random number from the RNG, which the MED 110 may use as the challenge value to be submitted to the PUF module 115 in order to obtain a response value 155 that can be used as an encryption key for encrypting the data 160. The challenge value generator 125 may also include a monotonic counter that provides a unique value each time it is read, and the MED 110 may be configured to read a counter value from the monotonic counter and use it to obtain the encryption key for the data. The MED 110 may also use other types of challenge value generators to generate the challenge values to be presented to the PUF module 115.
The challenge value may be received from the challenge value generator (stage 410). The challenge value generator 125 may be configured to provide the challenge value to the MED 110. The MED 110 may be configured to provide the challenge value as received from the challenge value generator 125 to the PUF module 115. The MED 110 may also be configured to perform one or more operations on the received value in order to obtain the challenge value 145 to be provided to the PUF module 115. For example, the MED 110 may be configured to select a predetermined number of bits from the value received from the challenge value generator 125, such as the first 4 and last 4 bits of that value.
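As one hedged illustration of the bit-selection just mentioned, the helper below keeps the first four and last four bits of a raw generator value. The four-bit widths come from the example above, while the function name and the 32-bit raw width are assumptions made for the sketch.

def select_challenge_bits(raw, width=32, keep=4):
    # Keep the first `keep` (most significant) and last `keep` (least significant) bits.
    top = (raw >> (width - keep)) & ((1 << keep) - 1)
    bottom = raw & ((1 << keep) - 1)
    return (top << keep) | bottom

print(select_challenge_bits(0xDEADBEEF))  # 0xDF: an 8-bit conditioned challenge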
The MED 110 may also be configured to adjust a random value to fall within a predetermined range of the challenge values expected by the PUF module 115. These examples are meant to illustrate some of the processing that the MED 110 may perform on the values obtained from the challenge value generator 125 and are not meant to be exhaustive.
FIG. 5 is a flowchart of an example process for encrypting data according to the techniques disclosed herein. The process illustrated in FIG. 5 may be used to implement stage 315 of the process illustrated in FIG. 3. The process illustrated in FIG. 5 may be implemented by the computing device 100 illustrated in FIG. 1. Unless otherwise specified, the memory encryption device 110 of the computing device 100 may provide means for performing the various stages of the process illustrated in FIG. 5.
An exclusive-OR (XOR) operation may be applied to the response value from the PUF module and the data associated with the storage request (stage 505). The MED 110 may be configured to apply an XOR operation to the data 160 and at least a portion of the response value 155 received from the PUF module 115. For example, the MED 110 may be configured to select a predetermined number of bits from the response value 155 to be used as an encryption key, such as the first X bits and the last Y bits of the response value, where X and Y are integer values and X and Y add up to the number of bits of the data 160 to be encrypted. The MED 110 may also be configured to perform other operations on the response value in order to obtain the key. For example, the MED 110 may be configured to apply a modulo operation to the response value to keep the encryption key within a predetermined range or a predetermined number of bits.
The encrypted data may be output (stage 510). The MED 110 may be configured to output the encrypted data 165. The MED 110 may be configured to store the encrypted data 165 at the memory location 170 of the memory 130. The MED 110 may also be configured to provide the encrypted data 165 to the CPU 105, which may be configured to process the encrypted data.
FIG. 6 is a flowchart of an example process for encrypting data according to the techniques discussed herein. The process illustrated in FIG. 6 may be used to implement stage 315 of the process illustrated in FIG. 3. The process illustrated in FIG. 6 may be implemented by the computing device 100 illustrated in FIG. 1. Unless otherwise specified, the memory encryption device 110 of the computing device 100 may provide means for performing the various stages of the process illustrated in FIG. 6. The process illustrated in FIG. 6 is similar to the process illustrated in FIG. 5, but the process illustrated in FIG. 6 also uses the address of the memory location 170 of the memory 130 in which the encrypted data 165 is to be stored to further encrypt the data. The MED 110 may be configured to perform the XOR operations on the response value from the PUF module 115 (serving as the encryption key), the data 160 to be encrypted, and the address associated with the memory location 170 in which the encrypted data is to be stored in any order, because the XOR operation is commutative and associative. Thus, the order in which these operations are performed in the process illustrated in FIG. 6 is merely one example of a process for encrypting the data 160 using these three values, and other embodiments may perform the operations in a different order.
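A minimal sketch of this address-tweaked variant follows, assuming word-sized integer operands; the stage numbers in the comments refer to FIG. 6 (described next) and to FIG. 10, the corresponding decryption process described later. Because XOR is commutative and associative, the operands could be combined in any order.

def encrypt_word(data, response_key, address):
    intermediate = data ^ response_key  # stage 605: data XOR response value
    return intermediate ^ address       # stage 610: intermediate XOR address

def decrypt_word(encrypted, recovered_key, address):
    intermediate = encrypted ^ recovered_key  # stage 1005: undo the key XOR
    return intermediate ^ address             # stage 1010: undo the address XOR

word = 0x1234
enc = encrypt_word(word, response_key=0xA5A5, address=0x1000)
assert decrypt_word(enc, recovered_key=0xA5A5, address=0x1000) == word

Because the address participates in the cipher, identical plaintext words stored at different memory locations produce different ciphertexts.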
An exclusive-OR (XOR) operation may be applied to the response value from the PUF module and the data associated with the storage request to generate an intermediate value (stage 605). The intermediate value is similar to that generated during stage 505 of the process illustrated in FIG. 5.
An exclusive-OR (XOR) operation may be applied to the intermediate value and the address value associated with the memory location in which the encrypted data is to be stored (stage 610). The MED 110 may apply an XOR operation to the address value associated with the memory location 170 in the memory 130 and the intermediate value determined in stage 605 to generate the encrypted data 165.
The encrypted data may be output (stage 615). The MED 110 may be configured to output the encrypted data 165. The MED 110 may be configured to store the encrypted data 165 at the memory location 170 of the memory 130. The MED 110 may also be configured to provide the encrypted data 165 to the CPU 105, which may be configured to process the encrypted data.
FIG. 7 is a flowchart of an example process for decrypting data according to the techniques disclosed herein. The process illustrated in FIG. 7 may follow the process illustrated in FIG. 2 and may be used to decrypt data encrypted according to the process illustrated in FIG. 2. The process illustrated in FIG. 7 may be implemented by the computing device 100 illustrated in FIG. 1. Unless otherwise specified, the memory encryption device 110 of the computing device 100 may provide means for performing the various stages of the process illustrated in FIG. 7.
The encrypted data 165 and the challenge value 145 associated with the encrypted data may be obtained from the memory 130 in response to a read request from the CPU 105 (stage 705). The read request may specify the address of the memory location 170 where the encrypted data 165 and the challenge value 145 associated with the encrypted data are stored. As discussed above, the MED 110 may store multiple instances of encrypted data 165 and the respective challenge values 145 associated with each of the instances of encrypted data 165. Thus, each instance of the encrypted data 165 will be associated with a corresponding memory location 170 in the memory 130 where that instance of the encrypted data 165 may be found.
The encrypted data 165 may be decrypted to produce decrypted data 160 (stage 710). The MED 110 may decrypt the encrypted data 165 using the challenge value 145 associated with the encrypted data 165. The MED 110 may be configured to decrypt the encrypted data 165 using a decryption technique appropriate to the encryption technique that the MED 110 used to encrypt the data. For example, the MED 110 may be configured to encrypt data using an XOR technique as discussed above. The MED 110 may also be configured to encrypt data using other techniques (e.g., AES encryption algorithms). FIGS. 8, 9 and 10 provide example processes for decrypting encrypted data 165 where the encrypted data 165 was encrypted using an XOR encryption algorithm.
The decrypted data 160 may be provided to the CPU 105 or other components of the computing device 100 (stage 715). The CPU 105 may perform one or more operations on the decrypted data 160, or the decrypted data 160 may be provided to a peripheral device for processing by the peripheral device.
For example, the decrypted data 160 may be provided to a graphics processor for use in determining information displayed on a display associated with the computing device 100. Other types of peripheral devices may also receive the decrypted data 160 for processing.
FIG. 8 is a flowchart of an example process for decrypting data according to the techniques disclosed herein. The process illustrated in FIG. 8 may be used to implement stage 710 of the process illustrated in FIG. 7. The process illustrated in FIG. 8 may be implemented by the computing device 100 illustrated in FIG. 1. Unless otherwise specified, the memory encryption device 110 of the computing device 100 may provide means for performing the various stages of the process illustrated in FIG. 8.
The challenge value 145 may be provided to the PUF module 115 to obtain a recovered response value (stage 805). The challenge value 145 stored in the memory location 170 of the memory 130 may be provided to the PUF module 115 to receive the recovered response value. The MED 110 may use the recovered response value as a decryption key for decrypting the encrypted data 165. In some implementations, the challenge value 145 may be stored at a memory location of the memory 130 that is different from the memory location of the encrypted data 165. The MED 110 may be configured to maintain a mapping between the memory locations of the encrypted data 165 and the memory locations of the challenge values 145 associated with each instance of the encrypted data 165.
The encrypted data 165 may be decrypted using the recovered response value (stage 810). The MED 110 may decrypt the encrypted data 165 using at least a portion of the response value obtained from the PUF module 115 as a decryption key. The MED 110 may be configured to decrypt the encrypted data 165 using a decryption technique appropriate to the encryption technique that the MED 110 used to encrypt the data. For example, the MED 110 may be configured to encrypt data using an XOR technique as discussed above. The MED 110 may also be configured to encrypt data using other techniques (e.g., AES encryption algorithms). FIGS. 9 and 10 provide example processes for decrypting encrypted data 165 where the encrypted data 165 was encrypted using an XOR encryption algorithm.
FIG. 9 is a flowchart of an example process for decrypting data according to the techniques disclosed herein. The process illustrated in FIG. 9 may be used to implement stage 810 of the process illustrated in FIG. 8. The process illustrated in FIG. 9 may be implemented by the computing device 100 illustrated in FIG. 1. Unless otherwise specified, the memory encryption device 110 of the computing device 100 may provide means for performing the various stages of the process illustrated in FIG. 9.
An exclusive-OR (XOR) operation may be applied to the encrypted data 165 and the recovered response value (stage 905). The MED 110 may be configured to apply an XOR operation to the encrypted data 165 and at least a portion of the recovered response value received from the PUF module 115 in response to providing the challenge value 145 obtained from the memory 130 to the PUF module 115. For example, the MED 110 may be configured to select a predetermined number of bits from the recovered response value to be used as a decryption key. The selected bits depend on the bits originally selected from the response value 155 obtained from the PUF module 115 when the encrypted data 165 was encrypted.
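Why this single XOR suffices to decrypt is standard algebra rather than anything specific to the disclosure: XOR is its own inverse. Given ciphertext c = d ⊕ k, applying the same key once more yields c ⊕ k = (d ⊕ k) ⊕ k = d ⊕ (k ⊕ k) = d ⊕ 0 = d. This is why the key derived from the recovered response value must match, bit for bit, the key originally derived from the response value 155.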
The MED 110 may also be configured to perform other operations on the recovered response value in order to obtain the key, depending on the processing that the MED 110 performed on the response value 155 to generate the key used to produce the encrypted data 165.
The MED 110 may then output the decrypted data (stage 910). The MED 110 may provide the decrypted data 160 to the CPU 105 or other components of the computing device 100.
FIG. 10 is a flowchart of an example process for decrypting data according to the techniques disclosed herein. The process illustrated in FIG. 10 may be used to implement stage 810 of the process illustrated in FIG. 8. The process illustrated in FIG. 10 may be implemented by the computing device 100 illustrated in FIG. 1. Unless otherwise specified, the memory encryption device 110 of the computing device 100 may provide means for performing the various stages of the process illustrated in FIG. 10. The process illustrated in FIG. 10 is similar to the process illustrated in FIG. 9, but the process illustrated in FIG. 10 also uses the address of the memory location 170 of the memory 130 in which the encrypted data 165 was stored to decrypt the data. The MED 110 may be configured to perform the XOR operations on the recovered response value from the PUF module 115 (serving as the decryption key), the encrypted data 165 to be decrypted, and the address associated with the memory location 170 in which the encrypted data 165 was stored in any order, because the XOR operation is commutative and associative. Thus, the order in which these operations are performed in the process illustrated in FIG. 10 is merely one example of a process for decrypting the encrypted data 165 using these three values, and other implementations may perform the operations in a different order.
An XOR operation may be applied to the encrypted data 165 and the recovered response value to generate an intermediate value (stage 1005). The first XOR operation reverses the XOR operation performed in stage 605 of the process illustrated in FIG. 6, in which at least a portion of the response value 155 from the PUF module 115 was used to perform a first cryptographic operation on the data 160 to generate the encrypted data 165.
An exclusive-OR (XOR) operation may be applied to the intermediate value and the address value associated with the memory location in which the encrypted data was stored to generate the decrypted data (stage 1010). The second XOR operation reverses the XOR operation performed in stage 610 of the process illustrated in FIG. 6, in which the address value associated with the memory location 170 was used to generate the encrypted data 165. The data should now have been returned to its original unencrypted state and should match the unencrypted data 160 initially provided by the CPU 105 to the MED 110 for encryption.
The MED 110 may then output the decrypted data (stage 1015). The MED 110 may provide the decrypted data 160 to the CPU 105 or other components of the computing device 100.
Depending on the application, the methods described herein may be implemented by various means. For example, such methods may be implemented in hardware, firmware, software, or any combination thereof.
For a hardware implementation, the processing units may be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, electronic devices, other electronic units designed to perform the functions described herein, or a combination thereof.
For a firmware and/or software implementation, the methods may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. Any machine-readable medium that tangibly embodies instructions may be used in implementing the methods described herein. For example, software code may be stored in a memory and executed by a processor unit. The memory may be implemented within the processor unit or external to the processor unit. As used herein, the term "memory" refers to any type of long-term, short-term, volatile, non-volatile, or other memory, and is not limited to any particular type of memory, number of memories, or type of media. Tangible media include one or more physical articles of machine-readable media, such as random access memory, magnetic memory, optical storage media, and the like.
If implemented in firmware and/or software, the functions may be stored as one or more instructions or code on a computer-readable medium. Examples include computer-readable media encoded with a data structure and computer-readable media encoded with a computer program. Computer-readable media include physical computer storage media. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. As used herein, disk and disc include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Such media also provide examples of non-transitory media, which may be machine readable, and where a computer is an example of a machine that can read from such non-transitory media.
The general principles discussed herein may be applied to other embodiments without departing from the spirit or scope of the invention or the claims. |
Layout synthesis of regular structures using relative placement. Relative placement constraint information is received. The relative placement constraint information indicates a relative placement of a plurality of layout objects with respect to each other, wherein at least a first one of the plurality of layout objects may be at a different level of hierarchy in the layout than at least a second one of the plurality of layout objects. The plurality of layout objects is then automatically placed according to the relative placement constraint information. |
CLAIMS What is claimed is:
1. A method comprising: receiving relative placement constraint information, the relative placement constraint information indicating a relative placement of a plurality of layout objects with respect to each other, wherein at least a first one of the plurality of layout objects is at a different level of hierarchy in the layout than at least a second one of the plurality of layout objects; and automatically placing the plurality of layout objects according to the relative placement constraint information.
2. The method of claim 1 further comprising automatically placing any remaining layout objects for an integrated circuit layout using a conventional placement engine.
3. The method of claim 1 wherein receiving relative placement constraint information includes receiving information indicating a relative placement of a first layout object with respect to a second layout object wherein each of the first and second layout objects is one of an instance, group or vector.
4. The method of claim 3 wherein receiving information indicating a relative placement includes receiving information indicating that the first layout object is to be placed relative to the second layout object according to one of a set of relative placement operations including horizontal step, vertical step, horizontal abut, vertical abut, interleave and merge.
5. The method of claim 4, wherein automatically placing includes determining an order for placing layout objects if no order is specified in the relative placement constraint information.
6. The method of claim 4 further comprising creating a new group as a result of each of the set of relative placement operations.
7. The method of claim 6 wherein creating a new group comprises one of creating a hard group in which relative placement constraints are specified for each layout object in the hard group, and creating a soft group in which relative placement constraints are not specified for each layout object in the soft group.
8. The method of claim 3, wherein receiving relative placement user constraint information includes receiving absolute constraint information, the absolute constraint information including one of a space specification, a keep-out region specification and an open bit specification.
9. The method of claim 1 wherein receiving relative placement constraint information includes receiving information specifying global options, the global options including one or more of a boundary, a number of bits in a datapath, an orientation of a unit, a well-alignment direction, a rowsite height and a bit structure.
10. The method of claim 9 wherein automatically placing the plurality of objects includes applying the specified global options to each of the plurality of layout objects unless a conflicting object-specific constraint is received.
11. The method of claim 1 wherein receiving relative placement information includes receiving object-specific constraints, the object-specific constraints including one or more of a span, a bit structure, a well-alignment style, a rowsite height, a stride, a height and width and a rigidness indicator.
12. The method of claim 11 wherein automatically placing includes applying the object-specific constraints to associated layout objects where a conflicting global option is also specified.
13.
An apparatus comprising: a relative placement engine to produce a detailed placement in response to receiving a schematic specifying a plurality of layout objects and a set of user constraints, the user constraints specifying a placement of at least some of the layout objects relative to each other, at least one of the specified objects being at a different level of layout hierarchy than another one of the specified objects.
14. The apparatus of claim 13 further comprising: a conventional placement engine coupled to the relative placement engine, the conventional placement engine to work with the relative placement engine to produce the detailed placement, the conventional placement engine to place any layout objects not specified in the user constraints.
15. The apparatus of claim 13 further comprising: a constraint extraction engine to receive a first detailed placement and to extract user constraint information from the first detailed placement to be provided to the relative placement engine to produce a revised detailed placement.
16. The apparatus of claim 13 wherein the user constraint information received by the relative placement engine includes relative placement information indicating a relative placement of at least a first layout object with respect to at least a second layout object.
17. The apparatus of claim 16 wherein the first and second layout objects are each one of an instance, group and vector.
18. The apparatus of claim 17 wherein the relative placement information includes at least one relative placement operator from a set including a horizontal step operator, a vertical step operator, a horizontal abut operator, a vertical abut operator, an interleave operator and a merge operator.
19. The apparatus of claim 13 wherein the user constraint information includes an object-specific constraint from a set of object-specific constraints including span, bit structure, well-alignment style, rowsite height, stride, alignment guidelines, height, width, and rigidness.
20. The apparatus of claim 19 wherein the user constraint information includes a global constraint from a set of global constraints including a boundary, an origin, a number of bits in a datapath, an orientation of a unit, a well-alignment direction, and a rowsite height.
21. The apparatus of claim 20 wherein, if an object-specific constraint conflicts with a global constraint, the object-specific constraint takes precedence.
22. The apparatus of claim 13 wherein the user constraint information includes an absolute constraint from a set of absolute constraints including a space constraint, an obstacle constraint, an open bit constraint, a net length constraint, an absolute offset constraint, and a net width constraint.
23. A method comprising: receiving an integrated circuit schematic that specifies a plurality of objects to be placed; receiving a user constraint specification, the user constraint specification specifying a relative placement for a first set of the plurality of objects, at least one of the objects in the first set being at a different level of layout hierarchy than at least one other object in the first set; automatically placing the objects in the first set according to the relative placement indicated in the user constraint specification; and automatically placing any remaining objects specified by the schematic using a conventional placement approach.
24.
The method of claim 23 wherein receiving a user constraint specification includes receiving relative placement information indicating a relative placement operation to be performed on a first layout object in the first set with respect to a second layout object in the first set, the relative placement operation being one of a set including a horizontal step operation, a vertical step operation, a horizontal abut operation, a vertical abut operation, an interleave operation and a merge operation.
25. The method of claim 23 wherein receiving a user constraint specification includes receiving absolute placement information including an absolute placement operation from a set including an open space operation, an obstacle operation, an open bit operation and a net length or weight operation.
26. The method of claim 25 further comprising processing the absolute placement information after automatically placing the objects, and adjusting the automatic placement of the objects after processing the absolute placement information.
27. The method of claim 23 wherein receiving user constraint information includes receiving global options, the global options being from a set including a boundary, an origin, a number of bits in a datapath, an orientation of a unit, a well-alignment direction, a rowsite height and a bit structure, receiving object-specific constraints, the object-specific constraints being from a set including a bit span, a bit structure, a well-alignment style, a rowsite height, a stride, a height, a width and a rigidness, and prioritizing an object-specific constraint over a conflicting global option.
28. A method comprising: receiving relative placement constraint information for a first design; receiving process and other constraints related to a second design; receiving a schematic specifying objects to be placed for the second design; and providing a detailed placement for the second design using the relative placement constraint information for the first design.
29. The method of claim 28 further comprising: extracting the relative placement constraint information for the first design from a detailed placement for the first design.
30. An article of manufacture comprising a machine-accessible medium including data that, when accessed by a machine, cause the machine to: produce a detailed placement in response to receiving a schematic specifying a plurality of layout objects and a set of user constraints, the user constraints specifying a placement of at least some of the layout objects relative to each other, at least one of the specified objects being at a different level of layout hierarchy than another one of the specified objects.
31. The article of manufacture according to claim 30 wherein the machine-accessible medium further includes data that, when accessed by a machine, causes the machine to: place a first set of the plurality of layout objects according to relative placement constraints specified in the user constraints, and place the remainder of the plurality of layout objects according to conventional placement techniques.
32. The article of manufacture of claim 31 wherein the relative placement constraints include one of the relative placement constraints of a set including a horizontal step constraint, a vertical step constraint, a horizontal abut constraint, a vertical abut constraint, an interleave constraint and a merge constraint.
33.
The article of manufacture of claim 31 wherein the user constraint information includes an absolute constraint from a set including an open space constraint, an obstacle constraint, an open bit constraint, a net length constraint, and a net weight constraint.

34. The article of manufacture of claim 31 wherein the user constraint information specifies a global option from a set of global options including a boundary, an origin, a number of bits in a datapath, an orientation of a unit, a well-alignment style, a rowsite height and a bit structure.

35. The article of manufacture of claim 34 wherein the user constraint information specifies an object-specific constraint from a set of object-specific constraints including a span, a bit structure, a well-alignment style, a rowsite height, a stride, a height, a width and a rigidness.

36. The article of manufacture of claim 35 wherein, if the user constraint information specifies an object-specific constraint that conflicts with a global option, the object-specific constraint is applied. |
A METHOD AND APPARATUS FOR LAYOUT SYNTHESIS OF REGULAR STRUCTURES USING RELATIVE PLACEMENT

BACKGROUND

1. Field

An embodiment of the present invention relates to the field of integrated circuit design tools and, more specifically, to a method and apparatus for layout synthesis using relative placement.

2. Discussion of Related Art

Timing convergence of layouts with a given area constraint is a difficult problem for many integrated circuit designs. Current approaches may involve numerous time consuming iterations between circuit design, place and route, and timing analysis. This process can be both slow and non-deterministic, resulting in project management uncertainties. As a specific example, there are currently two primary approaches for datapath layout: purely manual and fully automatic. Where a manual layout approach is used, mask designers may lay out entire functional blocks by hand, for example. While a manual layout approach provides a high degree of control, it is very time consuming and may not be feasible for very large designs. Automatic placement tools, on the other hand, are capable of handling large designs, but their use may result in increased difficulty achieving timing convergence and may limit the degree of control the designer has over the resulting layout. This is because, in order to automate the layout process, such tools are designed to make automatic judgments and assumptions based on the input data. In some cases, these assumptions may be incorrect or may otherwise not capture the intent of the designer in producing a layout. One reason this may occur is that conventional automatic placement tools typically only include a small number and range of user controls to provide for the designer to constrain the input data to achieve a desired placement result. In many cases, for example, the user is limited to specifying timing constraints indirectly as net weights or net/path constraints. In this manner, the effects of an adjustment to one of these constraints may be difficult to anticipate. Thus, several iterations and tweaking of these indirect constraints may be required to achieve timing convergence using the automated tool. Alternatively, the designer may instead resort to manual adjustments, which can be time consuming.

Another issue may arise when there are changes in cell sizes due to, for example, engineering changes and/or process shifts. Using process shifts as a specific example, layout compaction is often used, but has some shortcomings. Straight compaction may be inefficient under tight area constraints and may not honor the designers' original intent during re-synthesis. For multiple generations of design re-use, the designers' intent may be lost completely, resulting in issues ranging from performance penalties to inefficient area use.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings in which like references indicate similar elements, and in which: Figure 1 is a flow diagram showing a method of one embodiment for producing a layout using the relative placement approach of one embodiment. Figure 2 is a block diagram of a computer system in which the relative placement approach of one embodiment may be implemented. Figure 3 is a flow diagram showing the method of one embodiment for producing a layout using the relative placement approach of one embodiment.
Figure 4 is a diagram showing exemplary relative placement constraint expressions and corresponding graphical illustrations of the resulting objects. Figure 5 is a flow diagram showing the operation of the automatic placement engine of one embodiment. Figure 6 is a block diagram showing an approach of one embodiment for producing a revised placement using previously specified user constraints. Figure 7 is a block diagram showing an approach of another embodiment for producing a revised placement using previously specified user constraints.

DETAILED DESCRIPTION

A method and apparatus for layout synthesis of regular structures using relative placement is described. In the following description, particular types of systems, functional unit blocks, instructions, groups of objects, etc. are described for purposes of illustration. It will be appreciated, however, that other embodiments are applicable to other types of systems, functional unit blocks, instructions and object groupings, for example.

For one embodiment, as shown in Figure 1, at block 105, relative placement constraints indicating a relative placement of multiple integrated circuit layout objects with respect to each other are received. At least one of the layout objects specified in the constraints may be at a different level of hierarchy in the layout than at least another one of the specified layout objects. In other words, for one embodiment, the capability is provided to represent and handle relative placement constraints between physical objects and/or components at different levels of logical netlist hierarchy. At block 110, the layout objects are placed according to the relative placement constraints. Using the relative placement approach of one embodiment, the intent of the integrated circuit designers may be more easily captured by the input data such that timing synthesis may be more straightforward. Further, for one embodiment, the user constraint information from an original design may be used in producing a layout for a design proliferation such as a process shrink. In this manner, the number of placement iterations required to achieve timing convergence may be reduced as compared to a straight layout compaction, for example. Further details of this and other embodiments are provided in the description that follows.

In the following description, relative orientation and placement terminology, such as the terms horizontal, vertical, left, right, top and bottom, is used. It will be appreciated that these terms refer to relative directions and placement in a two dimensional layout with respect to a given orientation of the layout. For a different orientation of the layout, different relative orientation and placement terms may be used to describe the same objects or operations.

Figure 2 is a block diagram of a computer system 200 in which the relative placement method and apparatus of one embodiment may be advantageously implemented. For this embodiment, the computer system 200 is a workstation computer system such as a Hewlett Packard HP 9000 Enterprise Server manufactured by Hewlett Packard Company of Palo Alto, California. Other types of workstations and/or other types of computers and/or computer systems are within the scope of various embodiments. The computer system 200 includes a processor 205 to execute instructions using an execution unit 210.
A cache memory 215 may be coupled to or integrated with the processor 205 to store recently and/or frequently used instructions. The processor 205 is coupled to a bus 220 to communicate information between the processor 205 and other components in the computer system 200. Also coupled to the bus 220 are one or more input devices 225, such as a keyboard and/or a cursor control device, one or more output devices 230, such as a monitor and/or printer, one or more memories 235 (e.g. random access memory (RAM), read only memory (ROM), etc.), other peripherals 240 (e.g. memory controller, graphics controller, bus bridge, etc.), and one or more mass storage devices and/or network connectivity devices 245. The mass storage device(s) and/or network connectivity devices 245 may include a hard disk drive, a compact disc read only memory (CD ROM) drive, an optical disk drive and/or a network connector to couple the computer system 200 to one or more other computer systems or mass storage devices over a network, for example. Further, the mass storage device(s) 245 may include additional or alternate mass storage device(s) that are accessible by the computer system 200 over a network (not shown).

A corresponding data storage medium (or media) 250 (also referred to as a computer-accessible storage medium) may be used to store instructions, data and/or one or more programs to be executed by the processor 205. For one embodiment, the data storage medium (or media) 250 stores information, instructions and/or programs 255-262 that are used to perform layout synthesis. For this exemplary embodiment, a relative placement engine 255 receives an integrated circuit schematic 256, a relative placement constraint file 257, other rules and constraints 258 and a cell library 259. Responsive to the information received, the relative placement engine 255 produces a detailed placement 260 for the layout objects included in the selected cell(s) according to the relative placement constraints 257 specified for some or all of the objects. For some embodiments, the relative placement engine 255 may be included as part of an automatic placement engine 261 that also includes a conventional placement engine 262. The relative placement engine 255 and user constraint specification 257 are each described in more detail below.

For one embodiment, the other rules and constraints 258 may include, for example, design and/or process rules and/or design style and placement methodology-related constraints. Details of the design style and placement methodology-related constraints may be determined, at least in part, by the manner in which the wells are aligned between adjacently placed cells. Typically, data and control flow directions in a functional unit block (FUB) or other sub-unit of an integrated circuit are orthogonal to each other. Integrated circuit units may be referred to herein as being standard or rotated depending on whether the data flow direction is viewed as being North/South (vertical) or East/West (horizontal), respectively. Units for which the well alignment is in the data flow direction are referred to herein as data-aligned units and units for which the well alignment is in the control flow direction are referred to as control-aligned units. The corresponding design styles are referred to herein as data-aligned and control-aligned. It will be appreciated that other types of rules and constraints may also be included in the file 258.
The cell library 259 is a database of cell-specific files. For one embodiment, these files include information such as bit pitch and number of bit slices for a particular leafcell. Other types of cell-specific information and/or information for other types of cells may also or alternatively be included. It will be appreciated by one of ordinary skill in the art that, while Figure 2 represents the data storage media 250 as a single block, for many embodiments, multiple data storage media may be used to store the information and/or instructions 255-260 and/or some of the information and/or instructions indicated by the blocks 255-260 may be accessible to computer system 200 over a network (not shown).

The method of one embodiment for performing layout synthesis using relative placement is described with reference to Figures 2-5. While the following exemplary embodiments refer to layout synthesis for a datapath, another type of regular structure may also benefit from various embodiments. In Figure 3, at block 305, a schematic of interest 256 is loaded for processing by the relative placement engine 255. For one embodiment, placement is performed for an integrated circuit device one cell at a time, where a cell may be substantially any sub-unit of the integrated circuit. In fact, for one embodiment, relative placement as described below may be specified for cell(s) at every level of hierarchy from full-chip to macro-cells within functional unit blocks. Thus, at block 305 for this embodiment, the schematic for the particular cell of interest, referred to herein as a topcell, may be loaded. At block 310, one or more cell files corresponding to cells included within the selected topcell, referred to herein for one embodiment as leafcells, are loaded from the cell library 259 and at block 315, the schematic hierarchy for the cell is smashed to create a layout view for use by the placement engine 261. For one embodiment, this layout view is flat. For other embodiments, the layout view may be in the form of a multi-level hierarchy such that multiple levels of hierarchy may be placed simultaneously. For one embodiment, flattening of the cell schematic is performed by one of the placement engines 255, 261 or 262 or another engine (not shown) that is coupled to the relative placement engine 255.

At block 320, one or more relative placement user constraint files 257 for the selected topcell are loaded for use by the relative placement engine 255. The relative placement user constraint file(s) contains user-specified constraints for relative placement of layout objects included within the topcell. For one embodiment, the relative placement user constraint file(s) is developed using a programming language that includes an extensive set of relative placement operators. For illustrative purposes, some characteristics of an exemplary relative placement user constraint file 257 that may be used in producing a placement by the relative placement engine 255 are described below. While specific syntax and operators are used as examples, one of ordinary skill in the art will appreciate that other syntax styles and other types of relative placement and related operators are within the scope of various embodiments.
For one embodiment, for example, the relative placement user constraint file 257 may include both global options to be applied to processing of the entire topcell as well as object-specific constraints to be applied to specified layout objects within the topcell. Further, the user may specify both relative placement and absolute placement constraints, each of which may be applied to one or more levels of hierarchy within the layout. Additional user specifications may be included in the relative placement user constraint file 257 such as re-mapping of instance names and specification of other types of constraints.

For one embodiment, global options may include any option or constraint to control operation of the relative placement engine 255 during automatic vectorization for a particular topcell. Examples of the types of global options that may be specified in the relative placement user constraint file 257 or another constraint file include specifying a boundary and/or origin for the topcell, indicating whether text case sensitivity is to be preserved and/or specifying the number of bits in the datapath, the orientation of the unit (e.g. standard or rotated), the well alignment direction (data or control), and/or the rowsite height for control-aligned objects, for example. Other global options or constraints that may be specified include a bit structure to be applied to the entire cell. For one embodiment, for example, the relative placement engine 255 provides the designer with the flexibility to specify a bit structure with a complex recurring bitpitch pattern that may have varying bitpitch and/or multiple sub-column designs within each bitpitch.

While global options are globally applied, object-specific constraints are applied only to the specified objects. For one embodiment, an object may be one of three types: an instance, a vector or a group. As the terms are used herein, an instance is an atomic object to the placement engine(s) and is the basic building block for the placement. For one embodiment, each instance has a well-defined bounding box that is either estimated or pre-specified. A vector is a list of bussed or individual instances, each occupying a unique bit location in the datapath. Instances in control-aligned units or cells are aligned so their n-wells match. Instances in data-aligned designs are aligned based on a parameter such as a justify parameter discussed in more detail below. A group is a collection of instances, vectors and/or other groups that are to be placed together. Thus, a group may be at any one of a number of levels of hierarchy. Layout hierarchy, as the term is used herein, refers to different levels of granularity for the layout. For example, an instance is an atomic object as described above, and therefore, is at the lowest level of the layout hierarchy. A vector is a list of instances and, therefore, is at a higher level of hierarchy. A group that includes that vector is at an even higher level of hierarchy and so on. For one embodiment, a group may be a hard group or a soft group. For a hard group, the relative placement constraints for all the objects in the group are well-specified. In contrast, for a soft group, relative placement constraints are not specified for all objects in the group. The soft group is a simple collection of groups (hard or soft), vectors and/or instances that are placed together.
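To make the instance/vector/group taxonomy concrete, the following is a minimal Python sketch of the object hierarchy just described. The class and field names are illustrative assumptions for this document, not the syntax of the constraint language or of any actual tool.

```python
from dataclasses import dataclass
from typing import List, Tuple, Union

@dataclass
class Instance:
    """Atomic layout object; the basic building block for placement."""
    name: str
    bbox: Tuple[float, float]  # (width, height), estimated or pre-specified

@dataclass
class Vector:
    """List of instances, each occupying a unique bit location in the datapath."""
    name: str
    instances: List[Instance]
    stride: int = 1  # frequency of instances across bit positions

@dataclass
class Group:
    """Collection of instances, vectors and/or other groups placed together."""
    name: str
    children: List[Union[Instance, Vector, "Group"]]
    hard: bool = True   # hard: relative placement of all contents is specified
    rigid: bool = True  # rigid contents are fixed while constructing the parent

def hierarchy_level(obj: Union[Instance, Vector, Group]) -> int:
    """Instances sit at the lowest level; each enclosing container adds one."""
    if isinstance(obj, Instance):
        return 0
    children = obj.instances if isinstance(obj, Vector) else obj.children
    return 1 + max((hierarchy_level(c) for c in children), default=0)
```

Under this model, a group containing a vector of instances reports level 2, matching the layout-hierarchy ordering described in the text.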
For one embodiment, in addition to relative placement constraints, various other types of properties to be used during placement can be specified for groups and/or vectors and processed by the relative placement engine 255. Some of these properties are similar or identical to properties that may be specified as global options. For these cases, the object-specific properties take precedence for one embodiment. Where an object-specific property is not specified, the corresponding global property is applied. Examples of object-specific properties that may be specified for one embodiment include span and bit structure, well alignment style, rowsite height, stride, alignment guidelines, height and width or other boundary constraints, and rigidness. Other qualifying properties that may be specified with respect to specific objects include whether the object is to be flipped, folded, or split.

Span refers to the number of bits in the respective vector or group. A bit structure different from the globally-specified bit structure may be specified for individual vectors and/or groups. For one embodiment, for groups, the bit structure may be inferred as a result of the operation performed to form the group as described in more detail below. For example, if two vectors with different bit pitch values are concatenated, a longer, multi-bit pitch group is generated. The well alignment object-specific property is similar to the well alignment global property discussed above and is either data-aligned or control-aligned for one embodiment. In this manner, specific groups or vectors may have a different alignment style than the globally-specified style. Row site height for one embodiment is used only for control-aligned objects and ignored for data-aligned objects. For control-aligned groups, each group has a list of row sites and row site information is stored as part of the group. For soft groups, the row site height may be specified as a user constraint. For other groups, however, row site height information may be generated as a result of relative placement operations and the specific vectors and groups that are being operated on. The stride indicates the frequency of instances across bit positions. For one embodiment, the default stride is 1 such that an instance is placed at each bit position in the cell being operated on. Other strides may be specified for certain vectors and groups that may have another uniform stride. For example, for a stride of two, every other bit position is empty.

A justify property may be used to specify a desired alignment for objects that are placed in order in the horizontal or vertical direction, for example. For one embodiment, for objects placed in the horizontal direction, the default is to align the bottom edges of (bottom justify) the objects. Other options that may be specified for horizontally ordered objects include top, center and net-name justification. Objects ordered vertically are placed such that they are left justified by default, i.e. their leftmost edges are aligned. Other options for vertically ordered objects include right, center and net-name justification. For this embodiment, when a net-name is specified for justification, the objects in the related vector or group are placed such that the pins on the various objects related to the specified net are all aligned on a straight line.
This option is useful, for example, for data-aligned units where instances in a vector can be aligned based on a control net. For one embodiment, particularly for control-aligned units, however, the well alignment property takes priority over the justify property. The height and width or other boundary property may be used, for example, to define a bounding box for a soft group. For this example, all objects in a soft group are then placed within the defined bounding box. The rigidness property may be used to identify particular objects to be non-rigid. For one embodiment, the relative placement engine 255 creates a cell placement in a constructive manner using a top down descent and bottom up constructive ascent. By default, all vectors are rigid and all groups are created rigid unless otherwise specified, i.e. the relative placement relationships of their contents are considered to be relatively fixed while constructing their respective parent object. Additionally, for one embodiment, an object may be flipped around a specified axis by a specified amount to create a rotated object or a mirror image of the object. Further, where there are space constraints, it may be desirable to fold the object into two or more vectors of similar length across multiple rows or split the object across multiple columns or bit positions.

To specify the relative placement of objects and/or to form groups and/or vectors, relative placement operators are used for one embodiment. Examples of such relative placement operators may include horizontal step, vertical step, horizontal abut, vertical abut, interleave and merge operators, each of which is described in more detail below. For one embodiment, horizontal step, vertical step, horizontal abut and vertical abut operators operate on a list of instances, vectors and/or groups while interleave and merge operators operate on a list of groups and/or vectors. By default, a list of objects in the input list associated with these operators may be considered to be ordered. Optionally, for one embodiment, an order parameter may be set false to identify a list of inputs to which no order significance is attached. Also for one embodiment, all of these operators return a new group as a result of the specified operation. A newly formed group that is formed in this manner is by default a hard group because the related operation generates a relative placement of the objects in the input list. Further, as mentioned above, for one embodiment, the new group may be rigid by default, but may be explicitly identified as being non-rigid where desired.

The horizontal and vertical abut operators cause the relative placement engine 255 to stack a list of objects either horizontally or vertically, respectively. If the list is an ordered list, adjacent objects in the list are placed abutting each other. For one embodiment, a horizontal abut operation causes the newly formed object to grow from left to right while the vertical abut operation causes the object to grow from bottom to top. Different default constructions may be used for other embodiments. Where the input list of objects is unordered, the relative placement engine 255 determines the best order in which to place the objects. Further, for one embodiment, the horizontal and vertical abut operations also support a skip specification that indicates a displacement in terms of microns or other units of measurement.
Where the input list of objects is unordered, the abut operators may ignore all skip specifications for some embodiments.

The horizontal and vertical step operators of one embodiment, or a step operator without a specified direction, define a relative placement for the input list of objects along the control flow direction according to the specified bit structure. Along the control direction, the step operator takes a list of input objects and stacks them with each object starting at a unique bit position. For this operation, for one embodiment, multiple objects are prevented from spanning the same bit position. For one embodiment, the step operators may support a stride and/or a skip specification. A stride specification indicates a certain number of bit positions to be skipped between every object in the input list. A skip specification indicates a number of bits to be skipped between two objects. Similar to the abut operators described above, the input list for a step operator may be either ordered or unordered. Where the input list is unordered, the relative placement engine 255 may determine the order of the placement. The relative placement engine 255 of one embodiment is designed such that each of the step and abut operators works with the bounding boxes of the input list. In this manner, the rigidness of any object in the input list is preserved during these operations regardless of whether the object(s) are specified as being rigid or non-rigid. Further, the bounding boxes of abutted or stepped objects for this embodiment do not intersect.

The bit structure of a group formed as a result of step and/or abut operations depends on the characteristics of the operands. For one embodiment, for example, if the data flow direction is vertical, horizontal abut operations result in a group with NULL bit structure even if all operands have valid bit structures. For the same example, vertical abut operations result in a group with NULL bit structure if all input objects do not have identical bit structures. If all operands have the same bit structure, then that bit structure is the bit structure of the parent object. Also for this example, a step operation results in a group with a NULL bit structure if one or more of the child objects has a null bit structure. If, however, all child objects have valid bit structures, then the resulting parent's bit structure is a concatenation of the child bit structures depending on the order in which the child objects are placed.

The interleave operator of one embodiment takes a list of vectors and groups as input and returns a new interleaved group with a span that is the sum of the spans of the input operands. An example of an interleave operation and a graphical illustration of a corresponding result are shown in Figure 4 referenced below. For one embodiment, like the step operation, the relative placement engine 255 performs the interleave operation along the control flow direction. For one embodiment, all input operands for an interleave operation must have a valid bit structure. The bit structure of the resulting group is then equal to the interleaved bit structure of the input operands. Further, the interleave operation of one embodiment assumes an ordered list of input operands. The merge operator of one embodiment causes a bit wise merge of the contents of a list of vectors and/or groups.
The new group that results from the merge is such that each bit location of the resultant group includes the collection of instances belonging to the respective bit location in each of the merged objects. The span of the resulting group is equal to the span of the object having the largest span of those being merged. The merge operation of one embodiment can be performed on both ordered and unordered input operands with the relative placement engine 255 determining the order for unordered operands. For one embodiment, before performing a merge operation, the placement engine 255 expands any non-rigid input objects in the list of input operands to their rigid components. For an ordered merge, a simple packing of listed objects is then performed by the relative placement engine 255 to place the objects. For one embodiment, the merge operation requires that the bit structures of the objects to be merged are compatible, i.e. the bit structures for corresponding bits of each of the objects, if they exist, are identical. The bit structure of the resulting group is the bit structure of the input object with the largest span.

For purposes of illustration, Figure 4 provides exemplary specifications of relative placement constraints including some of the operators described. To the right of each of these expressions is a graphical representation of a corresponding result of the specified operation as performed by the relative placement engine 255 of one embodiment. For the example of Figure 4, a standard unit (in terms of orientation) is shown. A 90 degree rotation of this figure essentially illustrates the effects of similar operations on a rotated unit. The relative placement operations shown in Figure 4 are specified using an exemplary syntax. It will be appreciated that other approaches to specifying relative placement operators that perform functions similar to those described above or similar to other relative placement operators that may be contemplated are within the scope of various embodiments.

Expression (1) shows an exemplary definition of a vector V1 that includes the instances a[0:9]. Expression (2) shows an exemplary definition of a vector V2 with a stride of 2 that includes the instances b[0:4]. Expression (3) is an exemplary definition of a vector V3 that includes instances c[0], c[5], and c[6:8] and every other element of d[0:3], i.e. d[0] and d[2]. With continuing reference to Figure 4, expression (4) shows an exemplary definition of a vector V4 for which the first two bit positions are skipped and then the elements j[0:1] are included. Expression (5) is an exemplary definition of a vector V5 for which the relative placement of the specified objects is defined along the control direction using the "step" operator and for which the elements of the vectors V3 and V4 are interleaved and the element i[0] is added. Expression (6) is an exemplary definition of a vector V6 for which the vectors V1 and V2 are merged. Expression (7) shows an exemplary definition of a vector for which the merged elements of a vector including instances x[0:4] and y[0:4] are interleaved with a vector including Tall[0:4]. For this example, Tall refers to the fact that the elements extend over multiple row sites. Finally, expression (8) shows an exemplary specification of a group hg that is formed by vertically abutting vector V6 with vector V7 and group zzinst along the control flow direction.
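To illustrate the operator semantics, here is a self-contained Python sketch that models a vector as a list of per-bit instance collections and implements hypothetical interleave and merge operators. The exact bit-ordering rules are assumptions chosen to match the span behavior described above (interleave: span is the sum of the operand spans; merge: span equals the largest operand span), not the engine's precise definitions.

```python
from itertools import zip_longest
from typing import List, Optional

Bits = List[Optional[List[str]]]  # one entry per bit position; None = empty bit
_FILL = object()                  # sentinel for exhausted operands

def vector(prefix: str, n: int, stride: int = 1, skip: int = 0) -> Bits:
    """Build a vector such as V2 = b[0:4] with stride 2 (cf. expression (2))."""
    bits: Bits = [None] * skip
    for i in range(n):
        bits.append([f"{prefix}[{i}]"])
        bits.extend([None] * (stride - 1))  # empty bits between instances
    return bits

def interleave(*operands: Bits) -> Bits:
    """Alternate bit positions of the operands; span = sum of operand spans."""
    out: Bits = []
    for row in zip_longest(*operands, fillvalue=_FILL):
        out.extend(b for b in row if b is not _FILL)
    return out

def merge(*operands: Bits) -> Bits:
    """Bit-wise merge: each result bit collects the instances at that bit in
    every operand; span = largest operand span."""
    span = max(len(op) for op in operands)
    out: Bits = []
    for i in range(span):
        cell = [name for op in operands if i < len(op) and op[i] for name in op[i]]
        out.append(cell or None)
    return out

# Expression (6): V6 merges V1 = a[0:9] with V2 = b[0:4] at stride 2
V1 = vector("a", 10)
V2 = vector("b", 5, stride=2)
V6 = merge(V1, V2)
assert V6[0] == ["a[0]", "b[0]"] and V6[1] == ["a[1]"]
```

The final assertion mirrors the described merge semantics: bit 0 of V6 collects a[0] and b[0], while bit 1 holds only a[1] because V2's stride leaves that bit empty.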
Referring back to Figures 2 and 3, for some embodiments, the relative placement engine 255 may also support mechanisms to specify absolute placement of objects and/or to open up spaces within objects. Absolute placement constraints may be used, for example, to specify placement of objects that are child objects of a soft group. Such absolute placement constraints may be specified in terms of, for example, an absolute offset from the origin of the parent object or a bit position offset. Other types of absolute placement constraints may be supported by the relative placement engine 255 for other embodiments.

Spaces may be opened up within objects for the purposes of routing or for allocation of area for placement of another type of cell, for example. Spaces may be inserted into a group in several ways for one embodiment. For example, a horizontal channel operator may be used to open up a horizontal routing channel that spans the length of the associated group. The desired horizontal channel may be specified in terms of a y-offset from the origin or one corner of the group and a height in microns or other units for the channel. A vertical channel operator may be used to open up a vertical routing channel that spans the height of the associated group. The vertical channel operator may be used in conjunction with an x-offset from the origin or one corner of the group and a length of the channel in microns or other units. Other approaches for specifying a channel are within the scope of various embodiments. An open bit operator may be used on groups that have a valid bit structure for one embodiment to open up a certain number of bit locations. The open bit locations may be specified using a start bit and a number of bits to be opened, for example. An open space operator may be used to open a rectangular or other space anywhere within the associated group. For one embodiment, the open space may be specified in terms of its vertices, for example. An open keep out region operator that is similar to the open space operator may also be used. For some embodiments, the open keep out region operator may be used to differentiate between a space that may be used for other types of cells (open space) and a space that may not be used by any cells (open keep out region). For one embodiment, the above and/or other space insertion operations may be performed as a post-processing action after other relative placement operations are performed and resulting groups are generated. For this embodiment, legalization and bit structures are updated automatically after the requested spaces are inserted. Any objects that were originally in the region in which a space is created are pushed outside the space by the relative placement engine 255.

Other types of constraints that may be processed by the relative placement engine 255 include, for example, vector indexing constraints, vector and/or pin exclusion constraints, and/or net length and/or weight constraints. Vector indexing constraints may be used to specify the index to be used during automatic vectorization for multiply indexed instance names. Vector and/or pin exclusion constraints may be used to specify a list of instances or pins to ignore during automatic vectorization. This may be useful because designs may often include instances that look like vectors to an automatic placement engine based on their names, for example, but that are not really vectors.
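Continuing the toy bit-position model above, the open bit operator described in this section might behave like the following hypothetical sketch, where existing objects at or after the start bit are pushed outward, mirroring how the engine pushes objects out of a newly created space.

```python
def open_bits(bits, start_bit, num_bits):
    """Open num_bits empty bit locations beginning at start_bit; existing
    objects at or after start_bit are pushed out past the new space."""
    return bits[:start_bit] + [None] * num_bits + bits[start_bit:]
```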
Net length and/or weight constraints may be used to specify a maximum length in microns or other units and/or additional weights or priorities for certain nets. These net length and/or weight constraints may be used to indicate to the relative placement engine 255 to pull the instances connecting timing critical nets closer together, for example. While exemplary global options, object specifications, object types, group operators, absolute constraints and other types of specifications and constraints are described above, it will be appreciated that the relative placement engine 255 of various embodiments may support different types of specifications, object types, operators and/or constraints not described above or may not support all of the specifications, object types, operators and/or constraints described.

Referring back to Figure 3, at block 325, the relative placement engine 255 automatically vectorizes the cell(s) smashed at block 315 in accordance with the constraints specified in the relative placement constraint file 257. For one embodiment, the relative placement engine 255 identifies the topcell of interest as a top level object and all other objects are considered to be descendents of the topcell. Thus, every object with the exception of the topcell object, has a parent group or object. For one embodiment, referring to Figure 5, at block 505, the relative placement engine 255 first places instances, groups and/or vectors specified in the relative placement user constraint file 257 according to the user-specified constraints. Then, at block 510, it is determined whether there are any remaining instances that have not been processed. If so, then at block 515, these instances are considered to be part of the topcell object, which is then considered to be a soft group, and the relative placement engine proceeds to automatically place the remaining instances according to standard automatic placement procedures of a conventional automatic placement engine.

For one embodiment, because the user can specify relative placement constraints for some or all of the instances to be placed, the relative placement user constraint file 257 can either complement the automatic placement rules and judgments of a conventional automatic placement engine or completely replace them. For example, a designer may choose to specify relative placement constraints only for instances in the critical path and allow the conventional placement engine to automatically place remaining instances. Thus, the layout synthesis approach of one embodiment provides the designer with the flexibility to determine the desired level of control over the placement.

Referring back to Figure 3, at block 330, a detailed placement is provided. At block 335, it is determined whether modifications to the placement are needed. If so, then at block 340, the relative placement user constraint file 257 may be modified and re-loaded at block 320. Once the detailed placement is acceptable, then at block 345, other design-related processes such as, for example, global routing, congestion analysis, pre-routing and/or detail routing may be performed. For remaining topcells in the integrated circuit design, the above-described process may be repeated until a detailed placement has been produced for all topcells or other sub-units of the integrated circuit.
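The flow of Figure 5 (user-constrained objects placed first, with a conventional engine handling whatever remains) can be summarized in a short sketch. The engine objects and their place methods here are assumed interfaces for illustration only, not an actual tool API.

```python
def place_topcell(all_instances, constrained, relative_engine, conventional_engine):
    """Place user-constrained objects first (block 505); any instances left
    over (block 510) fall back to conventional placement (block 515)."""
    placement = relative_engine.place(constrained)
    remaining = [inst for inst in all_instances if inst not in placement]
    if remaining:
        # Remaining instances become part of the topcell, treated as a soft group
        placement.update(conventional_engine.place(remaining))
    return placement
```

A designer who constrains only the critical path would pass just those objects in constrained, leaving the bulk of the design to the conventional engine, which matches the flexibility described above.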
In some circumstances, such as for a process shift, for example, it may be desirable to provide a new detailed placement for an existing design. Referring to Figure 6, for one embodiment, where the relative placement user constraint file 605 from the original placement is available, the same file may be used again to perform the detailed placement for the new design. Alternatively, where the original relative placement user constraint file is not available for some reason, as shown in Figure 7, a relative placement constraint extraction engine 705 may be used to extract relative placement constraints 710 from the original detailed placement 715 and use these constraints to provide the new detailed placement 720. In either case, the remainder of the placement methodology may be similar to the approaches described above. Using these approaches, the original intent of the designers in terms of placement is preserved even through multiple design generations. This may help to reduce the number of placement iterations to achieve timing convergence on the new design.

Thus, a method and apparatus for layout synthesis of regular structures using relative placement is described. In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be appreciated that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. |
Exemplary methods, apparatuses, and systems include receiving a plurality of read operations directed to a portion of memory accessed by a memory channel. The plurality of read operations are divided into a current set of a sequence of read operations and one or more other sets of sequences of read operations. An aggressor read operation is selected from the current set. A supplemental memory location is selected independently of aggressors and victims in the current set of read operations. A first data integrity scan is performed on a victim of the aggressor read operation and a second data integrity scan is performed on the supplemental memory location. |
CLAIMS

What is claimed is:

1. A method comprising: receiving a plurality of read operations directed to a portion of memory accessed by a memory channel, the plurality of read operations divided into a current set of a sequence of read operations and one or more other sets of sequences of read operations; selecting an aggressor read operation from the current set; selecting a supplemental memory location independently of aggressors and victims in the current set of read operations; and performing a first data integrity scan on a victim of the aggressor read operation and a second data integrity scan of the supplemental memory location.

2. The method of claim 1, wherein the supplemental memory location is randomly selected.

3. The method of claim 1, wherein the supplemental memory location is selected as a part of a deterministic rotation through memory locations.

4. The method of claim 1, further comprising: maintaining a limited history of memory locations subject to a data integrity scan in one or more previous sets of read operations, wherein selecting the supplemental memory location includes determining that the supplemental memory location is not included in the limited history of memory locations.

5. The method of claim 1, further comprising: maintaining a limited history of aggressor memory locations in one or more previous sets of read operations, wherein selecting the supplemental memory location includes selecting a memory location from the limited history of aggressor memory locations.

6. The method of claim 5, further comprising: removing the selected aggressor memory location from the limited history of aggressor memory locations.

7. The method of claim 1, wherein each set includes N read operations, the method further comprising: generating a first random number that is less than or equal to N, wherein selecting the aggressor read operation includes selecting a read operation that is in a position in the current set indicated by the first random number.

8. A non-transitory computer-readable storage medium comprising instructions that, when executed by a processing device, cause the processing device to: receive a plurality of read operations directed to a portion of memory accessed by a memory channel, the plurality of read operations divided into a current set of a sequence of read operations and one or more other sets of sequences of read operations; select an aggressor read operation from the current set; select a supplemental memory location independently of aggressors and victims in the current set of read operations; and perform a first data integrity scan on a victim of the aggressor read operation and a second data integrity scan of the supplemental memory location.

9. The non-transitory computer-readable storage medium of claim 8, wherein the supplemental memory location is randomly selected.

10. The non-transitory computer-readable storage medium of claim 8, wherein the supplemental memory location is selected as a part of a deterministic rotation through memory locations.

11. The non-transitory computer-readable storage medium of claim 8, wherein the processing device is further to: maintain a limited history of memory locations subject to a data integrity scan in one or more previous sets of read operations, wherein selecting the supplemental memory location includes determining that the supplemental memory location is not included in the limited history of memory locations.

12.
The non-transitory computer-readable storage medium of claim 8, wherein the processing device is further to: maintain a limited history of aggressor memory locations in one or more previous sets of read operations, wherein selecting the supplemental memory location includes
selecting a memory location from the limited history of aggressor memory locations.

13. The non-transitory computer-readable storage medium of claim 12, wherein the processing device is further to: remove the selected aggressor memory location from the limited history of aggressor memory locations.

14. The non-transitory computer-readable storage medium of claim 8, wherein each set includes N read operations, and wherein the processing device is further to: generate a first random number that is less than or equal to N, wherein selecting the aggressor read operation includes selecting a read operation that is in a position in the current set indicated by the first random number.

15. A system comprising: a plurality of memory devices; and a processing device, operatively coupled with the plurality of memory devices, to: receive a plurality of read operations directed to a portion of memory accessed by a memory channel, the plurality of read operations divided into a current set of a sequence of read operations and one or more other sets of sequences of read operations, wherein each set includes N read operations; generate a first random number that is less than or equal to N; select an aggressor read operation from the current set, wherein selecting the aggressor read operation includes selecting a read operation that is in a position in the current set indicated by the first random number; select a supplemental memory location independently of aggressors and victims in the current set of read operations; and perform a first data integrity scan on a victim of the aggressor read operation and a second data integrity scan of the supplemental memory location.

16. The system of claim 15, wherein the supplemental memory location is randomly selected.

17. The system of claim 15, wherein the supplemental memory location is selected as a part of a deterministic rotation through memory locations.

18. The system of claim 15, wherein the processing device is further to: maintain a limited history of memory locations subject to a data integrity scan in one or more previous sets of read operations, wherein selecting the supplemental memory location includes determining that the supplemental memory location is not included in the limited history of memory locations.

19. The system of claim 15, wherein the processing device is further to: maintain a limited history of aggressor memory locations in one or more previous sets of read operations, wherein selecting the supplemental memory location includes selecting a memory location from the limited history of aggressor memory locations.

20. The system of claim 19, wherein the processing device is further to: remove the selected aggressor memory location from the limited history of aggressor memory locations. |
PROBABILISTIC DATA INTEGRITY SCAN ENHANCED BY A SUPPLEMENTAL DATA INTEGRITY SCAN

TECHNICAL FIELD

[0001] The present disclosure generally relates to the mitigation of read disturb errors in a memory subsystem, and more specifically, relates to supplementing a probabilistic data integrity scan scheme with additional data integrity scans.

BACKGROUND ART

[0002] A memory subsystem can include one or more memory devices that store data. The memory devices can be, for example, non-volatile memory devices and volatile memory devices. In general, a host system can utilize a memory subsystem to store data at the memory devices and to retrieve data from the memory devices.

BRIEF DESCRIPTION OF THE DRAWINGS

[0003] The disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the disclosure. The drawings, however, should not be taken to limit the disclosure to the specific embodiments, but are for explanation and understanding only.

[0004] FIG. 1 illustrates an example computing system that includes a memory subsystem in accordance with some embodiments of the present disclosure.

[0005] FIG. 2 illustrates an example of managing a portion of a memory subsystem in accordance with some embodiments of the present disclosure.

[0006] FIG. 3 is a flow diagram of an example method to supplement a probabilistic data integrity scan scheme with additional data integrity scans in accordance with some embodiments of the present disclosure.

[0007] FIG. 4 is a flow diagram of another example method to supplement a probabilistic data integrity scan scheme with additional data integrity scans in accordance with some embodiments of the present disclosure.

[0008] FIG. 5 is a block diagram of an example computer system in which embodiments of the present disclosure may operate.
DETAILED DESCRIPTION

[0009] Aspects of the present disclosure are directed to supplementing a probabilistic data integrity scan scheme with additional data integrity scans in a memory subsystem. A memory subsystem can be a storage device, a memory module, or a hybrid of a storage device and memory module. Examples of storage devices and memory modules are described below in conjunction with FIG. 1. In general, a host system can utilize a memory subsystem that includes one or more components, such as memory devices that store data. The host system can provide data to be stored at the memory subsystem and can request data to be retrieved from the memory subsystem.

[0010] A memory device can be a non-volatile memory device. A non-volatile memory device is a package of one or more dice. One example of non-volatile memory devices is a negative-and (NAND) memory device. Other examples of non-volatile memory devices are described below in conjunction with FIG. 1. The dice in the packages can be assigned to one or more channels for communicating with a memory subsystem controller. Each die can consist of one or more planes. Planes can be grouped into logic units (LUN). For some types of non-volatile memory devices (e.g., NAND memory devices), each plane consists of a set of physical blocks, which are groups of memory cells to store data. A cell is an electronic circuit that stores information.

[0011] Depending on the cell type, a cell can store one or more bits of binary information, and has various logic states that correlate to the number of bits being stored. The logic states can be represented by binary values, such as "0" and "1", or combinations of such values. There are various types of cells, such as single-level cells (SLCs), multi-level cells (MLCs), triple-level cells (TLCs), and quad-level cells (QLCs). For example, a SLC can store one bit of information and has two logic states.

[0012] Data reliability in a memory can degrade as the memory device increases in density (e.g., device components scale down in size, when multiple bits are programmed per cell, etc.). One contributor to this reduction in reliability is read disturb. Read disturb occurs when a read operation performed on one portion of the memory (e.g., a row of cells), often referred to as the aggressor, impacts the threshold voltages in another portion of memory (e.g., a neighboring row of cells), often referred to as the victim. Memory devices typically have a finite tolerance for these disturbances. A sufficient amount of read disturb effects, such as a threshold number of read operations performed on neighboring aggressor cells, can change the victim cells in the other/unread portion of memory to different logical states than originally programmed, which results in errors.
[0013] A memory system can track read disturb by using counters per subdivision of memory and reprogramming a given subdivision of memory when the counter reaches a threshold value. A probabilistic data integrity scheme consumes less resources by counting or otherwise tracking sets of read operations in a portion of memory (e.g., a chip, logical unit, etc.) and performing a limited data integrity scan by checking the error rate of one or more read disturb victims of a randomly selected read operation in each set. This probabilistic data integrity scheme can face a challenge as the number of different nonsequential memory locations targeted by read operations in a given set increases (e.g., reads that hop across word lines or across blocks). For example, a probabilistic data integrity scan for a set of operations with a localized read pattern (such as a row hammer test) will sufficiently sample read disturb. Compared to localized read patterns, non-localized read patterns, especially those targeting "cold" portions of memory, are more likely to be missed (not selected) by a probabilistic data integrity scheme for a data integrity scan and hence have a higher probability of causing data reliability failure. Additionally, the probabilistic data integrity scheme does not address data integrity issues caused by stresses other than read disturb. For example, locations in memory that are not subject to host reads but are otherwise vulnerable to data reliability issues can go unchecked by a data integrity scheme that is dependent upon the locations targeted by host reads.

[0014] Aspects of the present disclosure address the above and other deficiencies by supplementing probabilistic data integrity scans with one or more additional data integrity scans of memory locations that are not read disturb victims of the current set of operations. For example, each supplemental data integrity scan can target a randomly selected data block (or other portion of memory), a deterministically selected data block, and/or a data block that has not recently been subject to a data integrity scan. As a result, the enhanced probabilistic data integrity scan scheme provides sufficient data integrity sampling for read disturb due to both localized and non-localized read patterns, as well as memory locations that can be compromised by stresses other than read disturb.

[0015] FIG. 1 illustrates an example computing system 100 that includes a memory subsystem 110 in accordance with some embodiments of the present disclosure. The memory subsystem 110 can include media, such as one or more volatile memory devices (e.g., memory device 140), one or more non-volatile memory devices (e.g., memory device 130), or a combination of such.

[0016] A memory subsystem 110 can be a storage device, a memory module, or a hybrid of a storage device and memory module. Examples of a storage device include a solid-state drive (SSD), a flash drive, a universal serial bus (USB) flash drive, an embedded Multi-Media
Controller (eMMC) drive, a Universal Flash Storage (UFS) drive, a secure digital (SD) card, and a hard disk drive (HDD). Examples of memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), and various types of non-volatile dual in-line memory module (NVDIMM).

[0017] The computing system 100 can be a computing device such as a desktop computer, laptop computer, network server, mobile device, a vehicle (e.g., airplane, drone, train, automobile, or other conveyance), Internet of Things (IoT) enabled device, embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or such computing device that includes memory and a processing device.

[0018] The computing system 100 can include a host system 120 that is coupled to one or more memory subsystems 110. In some embodiments, the host system 120 is coupled to different types of memory subsystems 110. FIG. 1 illustrates one example of a host system 120 coupled to one memory subsystem 110. As used herein, "coupled to" or "coupled with" generally refers to a connection between components, which can be an indirect communicative connection or direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc.

[0019] The host system 120 can include a processor chipset and a software stack executed by the processor chipset. The processor chipset can include one or more cores, one or more caches, a memory controller (e.g., NVDIMM controller), and a storage protocol controller (e.g., PCIe controller, SATA controller). The host system 120 uses the memory subsystem 110, for example, to write data to the memory subsystem 110 and read data from the memory subsystem 110.

[0020] The host system 120 can be coupled to the memory subsystem 110 via a physical host interface. Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), Small Computer System Interface (SCSI), a double data rate (DDR) memory bus, a dual in-line memory module (DIMM) interface (e.g., DIMM socket interface that supports Double Data Rate (DDR)), Open NAND Flash Interface (ONFI), Double Data Rate (DDR), Low Power Double Data Rate (LPDDR), or any other interface. The physical host interface can be used to transmit data between the host system 120 and the memory subsystem 110. The host system 120 can further utilize an NVM Express (NVMe) interface to access components (e.g., memory devices 130) when the memory subsystem 110 is coupled with the host system 120 by the PCIe
interface. The physical host interface can provide an interface for passing control, address, data, and other signals between the memory subsystem 110 and the host system 120. FIG. 1 illustrates a memory subsystem 110 as an example. In general, the host system 120 can access multiple memory subsystems via a same communication connection, multiple separate communication connections, and/or a combination of communication connections.[0021] The memory devices 130, 140 can include any combination of the different types of non-volatile memory devices and/or volatile memory devices. The volatile memory devices (e.g., memory device 140) can be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM).[0022] Some examples of non-volatile memory devices (e.g., memory device 130) include negative-and (NAND) type flash memory and write-in-place memory, such as a three-dimensional cross-point (“3D cross-point”) memory device, which is a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. NAND type flash memory includes, for example, two-dimensional NAND (2D NAND) and three-dimensional NAND (3D NAND).[0023] Although non-volatile memory devices such as NAND type memory (e.g., 2D NAND, 3D NAND) and 3D cross-point array of non-volatile memory cells are described, the memory device 130 can be based on any other type of non-volatile memory, such as read-only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide based memories, ferroelectric transistor random-access memory (FeTRAM), ferroelectric random access memory (FeRAM), magneto random access memory (MRAM), Spin Transfer Torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), negative-or (NOR) flash memory, and electrically erasable programmable read-only memory (EEPROM).[0024] A memory subsystem controller 115 (or controller 115 for simplicity) can communicate with the memory devices 130 to perform operations such as reading data, writing data, or erasing data at the memory devices 130 and other such operations (e.g., in response to commands scheduled on a command bus by controller 115). The memory subsystem controller
115 can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The hardware can include digital circuitry with dedicated (i.e., hard-coded) logic to perform the operations described herein. The memory subsystem controller 115 can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor.[0025] The memory subsystem controller 115 can include a processing device 117 (processor) configured to execute instructions stored in a local memory 119. In the illustrated example, the local memory 119 of the memory subsystem controller 115 includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory subsystem 110, including handling communications between the memory subsystem 110 and the host system 120.[0026] In some embodiments, the local memory 119 can include memory registers storing memory pointers, fetched data, etc. The local memory 119 can also include read-only memory (ROM) for storing micro-code. While the example memory subsystem 110 in FIG. 1 has been illustrated as including the memory subsystem controller 115, in another embodiment of the present disclosure, a memory subsystem 110 does not include a memory subsystem controller 115, and can instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory subsystem 110).[0027] In general, the memory subsystem controller 115 can receive commands or operations from the host system 120 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory devices 130 and/or the memory device 140. The memory subsystem controller 115 can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical address (e.g., logical block address (LBA), namespace) and a physical address (e.g., physical block address) that are associated with the memory devices 130. The memory subsystem controller 115 can further include host interface circuitry to communicate with the host system 120 via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access the memory devices 130 and/or the memory device 140 as well as convert responses associated with the memory devices 130 and/or the memory device 140 into information for the host system 120.
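As a rough editorial illustration of the logical-to-physical address translation mentioned in paragraph [0027], the following sketch shows how a controller might look up a physical location for a logical block address. The class, its field names, and the pages-per-block geometry are hypothetical choices for this example only, not details of the disclosure.

```python
# Minimal sketch of logical-to-physical (L2P) address translation, as a
# memory subsystem controller might perform when servicing a host read.
# All names and the geometry below are illustrative assumptions.

class L2PTable:
    def __init__(self, pages_per_block: int = 64):
        self.pages_per_block = pages_per_block
        self.mapping = {}  # logical block address -> physical page number

    def map(self, lba: int, physical_page: int) -> None:
        """Record where a logical block currently lives (e.g., after a write)."""
        self.mapping[lba] = physical_page

    def translate(self, lba: int) -> tuple[int, int]:
        """Translate an LBA into a (physical block, page offset) pair."""
        physical_page = self.mapping[lba]  # raises KeyError for unmapped LBAs
        return divmod(physical_page, self.pages_per_block)

table = L2PTable()
table.map(lba=7, physical_page=130)
print(table.translate(7))  # (2, 2): block 2, page 2 with 64 pages per block
```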
[0028] The memory subsystem 110 can also include additional circuitry or components that are not illustrated. In some embodiments, the memory subsystem 110 can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the memory subsystem controller 115 and decode the address to access the memory devices 130.[0029] In some embodiments, the memory devices 130 include local media controllers 135 that operate in conjunction with memory subsystem controller 115 to execute operations on one or more memory cells of the memory devices 130. An external controller (e.g., memory subsystem controller 115) can externally manage the memory device 130 (e.g., perform media management operations on the memory device 130). In some embodiments, a memory device 130 is a managed memory device, which is a raw memory device combined with a local controller (e.g., local controller 135) for media management within the same memory device package. An example of a managed memory device is a managed NAND (MNAND) device. [0030] The memory subsystem 110 includes a data integrity manager 113 that mitigates read disturb and other data errors. In some embodiments, the controller 115 includes at least a portion of the data integrity manager 113. For example, the controller 115 can include a processor 117 (processing device) configured to execute instructions stored in local memory 119 for performing the operations described herein. In some embodiments, a data integrity manager 113 is part of the host system 120, an application, or an operating system.[0031] The data integrity manager 113 can implement and manage a read disturb mitigation scheme. For example, the data integrity manager 113 can implement a probabilistic read disturb mitigation scheme enhanced by supplemental data integrity scans. Further details with regards to the operations of the data integrity manager 113 are described below.[0032] FIG. 2 illustrates an example of managing a portion of a memory subsystem 200 in accordance with some embodiments of the present disclosure. In one embodiment, the data integrity manager 113 implements a read disturb mitigation scheme per memory unit 210. For example, the data integrity manager 113 can perform a separate probabilistic read disturb mitigation scheme per LUN.[0033] The illustration of the memory unit 210 includes an array of memory cells. The memory unit 210 is illustrated with a small number of memory cells for the sake of providing a simple explanation. Embodiments of the memory unit 210 can include far greater numbers of memory cells.
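Before the cell-level details that follow, a minimal sketch of the per-unit probabilistic selection of paragraphs [0031] and [0032] may be helpful. The class and method names are hypothetical, and the set size is an arbitrary example value.

```python
import random

# Sketch of a per-memory-unit (e.g., per-LUN) probabilistic selector: each
# memory unit tracks its own counter over a set of N read operations and
# independently picks one read in the set as the read disturb "aggressor"
# whose victims will be sampled. Names and the set size are assumptions.

class UnitScanState:
    def __init__(self, set_size: int = 10_000):
        self.set_size = set_size
        self.reset()

    def reset(self) -> None:
        self.count = 0
        # Position (1..N) of the read that will be treated as the aggressor.
        self.aggressor_position = random.randint(1, self.set_size)

    def on_read(self) -> bool:
        """Count a read; return True when this read is the sampled aggressor."""
        self.count += 1
        selected = self.count == self.aggressor_position
        if self.count >= self.set_size:
            self.reset()  # start tracking the next set of reads
        return selected

# One independent scheme per memory unit (LUN), as in FIG. 2.
states = {lun: UnitScanState() for lun in range(4)}
```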
[0034] Each memory unit 210 includes memory cells that the memory subsystem 110 accesses via word lines 215 and bit lines 220. For example, a memory device 130 can read a page of memory using word line 230. Within that page, memory cell 225 is accessed via word line 230 and bit line 235. As described above, reading a memory cell can result in read disturb effects on other memory cells. For example, a read of memory cell 225 (the aggressor) can result in disturbing memory cells 240 and 245 (the victims). Similarly, a read of other memory cells of word line 230 (the aggressor) can result in disturbing other memory cells of word lines 250 and 255 (the victims).[0035] This disturb effect can increase the error rate for victim memory cells. In one embodiment, the data integrity manager 113 measures the error rate of a portion of memory as a raw bit error rate (RBER). The data integrity manager 113 can track and mitigate read disturb by tracking read operation traffic in the memory unit 210 and checking the error rate of victim(s). For example, the data integrity manager 113 can select a read operation directed to word line 230 as the aggressor for testing read disturb and perform a read of word lines 250 and 255 to determine the error rate of each. In response to detecting an error rate of a given victim portion of memory satisfying a threshold error rate value, the data integrity manager 113 can migrate data from the victim portion of memory to a different portion of memory.[0036] In one embodiment, the data integrity manager 113 also performs a supplemental data integrity scan. For example, the data integrity manager 113 selects a supplemental memory location independently of aggressors and victims in the current set of read operations and performs a data integrity scan of the supplemental memory location. In one embodiment, the data integrity manager 113 randomly or deterministically selects a word line or other portion of memory within the memory unit 210. For example, the data integrity manager 113 can select word line 260, which is otherwise unrelated to the example aggressor (word line 230) and victims (word lines 250 and 255). This enhanced probabilistic read disturb handling scheme is described further with reference to FIGS. 3 and 4.[0037] FIG. 3 is a flow diagram of an example method 300 to supplement a probabilistic data integrity scan scheme with additional data integrity scans in accordance with some embodiments of the present disclosure. The method 300 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 300 is performed by the data integrity manager 113 of FIG. 1. Although shown in a particular
sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.[0038] At operation 305, the processing device initializes or resets a counter for tracking the processing of read operations. For example, the processing device can set the counter to zero to begin tracking the processing of read operations in a set of read operations. In one embodiment, the processing device processes read operations in equally divided sets. For example, if each set includes 10,000 read operations, the counter is initialized or reset to a state that allows it to count at least 10,000 read operations. The number of operations per set can vary, however, and will be referred to here as N.[0039] At operation 310, the processing device receives read operation requests. Read requests can be received from one or more host systems and/or generated by another process within the memory subsystem 110. The processing device can receive read operation requests asynchronously, continuously, in batches, etc. In one embodiment, the memory subsystem 110 receives operation requests from one or more host systems 120 and stores those requests in a command queue. The processing device can process the read operations, from the command queue and/or as internally generated, in sets of N operations.[0040] At operation 315, the processing device selects an aggressor operation in the current set of operations. When implementing a probabilistic read disturb handling scheme, the processing device can select an aggressor in the current set by generating a first random number (e.g., a uniform random number) in the range of 1 to N and, when the count of read operations reaches the first random number in the current set, identifying the current/last read operation as the aggressor.[0041] At operation 320, the processing device selects a supplemental memory location for an additional data integrity scan. For example, the processing device selects one or more supplemental memory locations independently of aggressors and victims in the current set of read operations. In one embodiment, the processing device selects the supplemental memory location randomly. For example, the processing device can generate a random number and select a page, word line, or another subdivision of memory that corresponds to the random number. In another embodiment, the processing device selects the supplemental memory location deterministically. For example, the processing device can rotate through subdivisions of memory
in numerical or another order and track progress through that order with each set of read operations.[0042] In some embodiments, the processing device selects a supplemental memory location that has not been subject to a recent data integrity scan. For example, the processing device can maintain a limited history of memory locations subjected to a data integrity scan and, when the initial selection of a supplemental memory location is in this history, the processing device selects (randomly or deterministically) another memory location. In one embodiment, the processing device maintains this limited history of memory locations subjected to a data integrity scan as a first-in first-out (FIFO) list or similar data structure.[0043] In some embodiments, the processing device selects a supplemental memory location that was subjected to read traffic in one or more of the previous sets of read operations. For example, the processing device can maintain a limited history of memory locations subjected to read operations (aggressors and/or victims) that were not subjected to a data integrity test and select a memory location (randomly or deterministically) from this history. Once selected, the processing device removes the memory location from the list or other data structure used to track the limited history. For example, if the processing device selects an aggressor from the list and performs a data integrity scan of the corresponding victim location(s), it removes that aggressor from the list. In one embodiment, the processing device maintains this limited history of memory locations subjected to read operations as a first-in first-out (FIFO) list or similar data structure.[0044] In one embodiment, the processing device selects a subsection of memory, e.g., a block, and then selects the supplemental memory location from that subsection, e.g., a word line within the block. For example, the processing device can randomly or deterministically select a block of memory as described above. Additionally, the processing device can maintain a list or other data structure of word lines that, based upon testing, physical location, or another metric, are prone to the worst error rates, and select a word line from the list that is within the block.[0045] At operation 325, the processing device performs a read operation. For example, the memory subsystem 110 reads a page of data by accessing the memory cells along a word line and returning the data to the host system 120 or internal process that initiated the read request. Additionally, the processing device increments the read operations counter. For example, the processing device can increment the counter in response to completing a read operation to track the current position in the sequence of read operations in the current set.[0046] At operation 330, the processing device determines if the read operations counter has reached the aggressor operation in the set. For example, the processing device can compare the
value of the counter to the first random number generated to identify the aggressor read operation in the current set. If the counter has not yet reached the position in the sequence corresponding to the aggressor operation, the method 300 returns to operation 325 to continue the performance of the next read operation, as described above. If the counter has reached the position in the sequence corresponding to the aggressor operation, the method 300 proceeds to operation 335.[0047] At operation 335, the processing device performs an integrity scan of the victim(s) of the selected aggressor. For example, the processing device can execute a read of each victim to check the error rate of the victim. In one embodiment, checking the error rate includes determining an error rate, such as a raw bit error rate (RBER), for the victim. In another embodiment, checking the error rate includes comparing the threshold voltage distribution of the victim/sampled portion of memory with an expected voltage distribution. Additionally, the processing device performs an integrity scan of the supplemental memory location. If the error rate of a victim and/or the supplemental memory location satisfies a threshold (e.g., meets or exceeds an error rate threshold value), the processing device can, for example, error correct the data of the victim and/or supplemental memory location and write the corrected data to new location(s). [0048] In one embodiment, the processing device performs the data integrity scan of the supplemental memory location at a different time than the data integrity scan of the victim(s). For example, the processing device can perform the data integrity scan of the supplemental memory location at the beginning, at the end, at a second random position, or at another determined position in the sequence of operations in the current set. In one embodiment, when the processing device determines the supplemental memory location is an erased portion of memory, the data integrity scan checks to confirm that the supplemental memory location remains in an erased state when/instead of checking the error rate.[0049] At operation 340, the processing device determines if the read operations counter has reached the end of the current set. For example, the processing device can compare the value of the counter to the value of N. If the read operations counter has reached the end of the current set, the method 300 proceeds to operation 305 to reset the counter and process the next set of read operations. If the read operations counter has not reached the end of the current set, the method 300 proceeds to operation 345.[0050] At operation 345, the processing device performs a read operation and increments the read operations counter as described above with reference to operation 325. The method 300
proceeds to operation 340 to once again determine if the read operations counter has reached the end of the current set.[0051] FIG. 4 is a flow diagram of another example method to supplement a probabilistic data integrity scan scheme with additional data integrity scans in accordance with some embodiments of the present disclosure. The method 400 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 400 is performed by the data integrity manager 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.[0052] At operation 405, the processing device receives read operations, e.g., as described with reference to operation 310. The processing device divides the read operations into sets of read operations. For example, the processing device can use a counter to track a number, N, of operations to be performed per set.[0053] At operation 410, the processing device selects an aggressor read operation in the current set of operations. For example, the processing device can randomly select a read operation in the current set to identify one or more victims to be the subject of a data integrity scan.[0054] At operation 415, the processing device selects a supplemental memory location to subject to a data integrity scan. For example, the processing device can randomly or deterministically select a supplemental memory location independently of aggressors and victims in the current set of operations as described above with respect to operation 320.[0055] At operation 420, the processing device performs a data integrity scan of the victim(s) and the supplemental memory location. For example, the processing device can execute a read of the victim(s) and the supplemental memory location to check the error rate of each as described above with respect to operation 335.[0056] FIG. 5 illustrates an example machine of a computer system 500 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed. In some embodiments, the computer system 500 can correspond to a
host system (e.g., the host system 120 of FIG. 1) that includes, is coupled to, or utilizes a memory subsystem (e.g., the memory subsystem 110 of FIG. 1) or can be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to the data integrity manager 113 of FIG. 1). In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.[0057] The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.[0058] The example computer system 500 includes a processing device 502, a main memory 504 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 506 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system 518, which communicate with each other via a bus 530.[0059] Processing device 502 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 502 can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device 502 is configured to execute instructions 526 for performing the operations and steps discussed herein. The computer system 500 can further include a network interface device 508 to communicate over the network 520.
[0060] The data storage system 518 can include a machine-readable storage medium 524 (also known as a computer-readable medium) on which is stored one or more sets of instructions 526 or software embodying any one or more of the methodologies or functions described herein. The instructions 526 can also reside, completely or at least partially, within the main memory 504 and/or within the processing device 502 during execution thereof by the computer system 500, the main memory 504 and the processing device 502 also constituting machine-readable storage media. The machine-readable storage medium 524, data storage system 518, and/or main memory 504 can correspond to the memory subsystem 110 of FIG. 1.[0061] In one embodiment, the instructions 526 include instructions to implement functionality corresponding to a data integrity manager (e.g., the data integrity manager 113 of FIG. 1). While the machine-readable storage medium 524 is shown in an example embodiment to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “machine- readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.[0062] Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.[0063] It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into
other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems.[0064] The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. For example, a computer system or other data processing system, such as the controller 115, may carry out the computer-implemented methods 300 and 400 in response to its processor executing a computer program (e.g., a sequence of instructions) contained in a memory or other non-transitory machine-readable storage medium. Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.[0065] The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein.[0066] The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory components, etc. [0067] In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the
disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. |
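As a closing editorial illustration, the following compact sketch traces the counter-driven flow of FIG. 3 (operations 305 through 345): random aggressor selection, an independently chosen supplemental location, and the error-rate check. The helper functions and the RBER threshold are hypothetical stand-ins, not details of the disclosure.

```python
import random

# Compact sketch of method 300: process reads in sets of N, scan the victims
# of one randomly chosen aggressor read, and additionally scan a supplemental
# location chosen independently of the current set's reads. The helpers
# (pick_supplemental, perform_read, victims_of, rber, migrate) are assumed;
# victims_of is assumed to return a list of victim locations.

N = 10_000
RBER_THRESHOLD = 1e-3  # illustrative error-rate threshold

def run_set(reads, pick_supplemental, perform_read, victims_of, rber, migrate):
    aggressor_position = random.randint(1, N)          # operation 315
    supplemental = pick_supplemental()                 # operation 320
    for count, read in enumerate(reads, start=1):      # operations 325/345
        perform_read(read)
        if count == aggressor_position:                # operation 330
            # Operation 335: scan the aggressor's victims plus the
            # independently selected supplemental location.
            for location in victims_of(read) + [supplemental]:
                if rber(location) >= RBER_THRESHOLD:
                    migrate(location)  # error-correct and rewrite the data
        if count >= N:                                 # operation 340
            break                                      # next set restarts at 305
```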
To prevent copying of a design implemented in a programmable logic device (PLD), the PLD itself stores a decryption key or keys loaded by the designer, and includes a decryptor for decrypting an encrypted configuration bitstream as it is loaded into the PLD. The PLD also includes logic for reading header information that indicates whether the bitstream is encrypted, and can accept both encrypted and unencrypted bitstreams. The encryption keys may be stored in non-volatile memory or backed up with a battery so that they are retained when power is removed. |
CLAIMS 1. In a PLD having a decryptor for decrypting an encrypted bitstream and a key for use by the decryptor, a method of using the PLD comprising: placing the PLD into a non-secure mode; and loading the key into the PLD.2. The method of using the PLD of Claim 1 further comprising: placing the PLD into a secure mode after the step of loading the key.3. A programmable logic device (PLD) comprising: configurable logic configured by a configuration memory; structure for receiving a bitstream from a source external to the PLD; a key memory for storing a decryption key; a decryptor having a decryption algorithm for decrypting encrypted configuration bits in the bitstream using the key, and thereby forming configuration data; and structure for loading the configuration data into the configuration memory.4. The PLD of claim 3 wherein the structure for loading the configuration data into the configuration memory includes a CRC checksum calculation circuit.5. The PLD of Claim 3 further comprising: structure for reading back configuration from the configuration memory; and structure for disabling the structure for reading back configuration when the header information indicates the bitstream includes encrypted data. 6. The PLD of Claim 3 further comprising: structure for reconfiguring the PLD after the PLD has been configured; and structure for disabling the structure for reconfiguring the PLD when the header information indicates the bitstream includes encrypted data.7. The PLD of Claim 3 wherein the decryptor reads from one of the registers for storing a plurality of decryption keys a value indicating whether another key will also be used for decryption.8. The PLD of Claim 3 wherein the decryptor includes a circuit for aborting decryption if an attempt is made to use the keys differently from the way specified by the keys.9. The PLD of Claim 3 wherein a key specifies whether it is a first, middle, last, or only key of a key set. |
PROGRAMMABLE LOGIC DEVICE WITH DECRYPTION ALGORITHM AND DECRYPTION KEY

FIELD OF THE INVENTION

The invention relates to PLDs, more particularly to protection of designs loaded into a PLD through a bitstream.

BACKGROUND OF THE INVENTION

A PLD (programmable logic device) is an integrated circuit structure that performs digital logic functions selected by a designer. PLDs include logic blocks and interconnect lines and typically both the logic blocks and interconnections are programmable. One common type of PLD is an FPGA (field programmable gate array), in which the logic blocks typically include lookup tables and flip flops, and can typically generate and store any function of their input signals. Another type is the CPLD (complex programmable logic device) in which the logic blocks perform the AND function and the OR function and the selection of input signals is programmable.

Problem with storing bitstream external to PLD

Designs implemented in PLDs have become complex, and it often takes months to complete and debug a design to be implemented in a PLD. When the design is going into a system of which the PLD is a part and is to be sold for profit, the designer does not want the result of this design effort to be copied by someone else. The designer often wants to keep the design a trade secret. Many PLDs, particularly FPGAs, use volatile configuration memory that must be loaded from an external device such as a PROM every time the PLD is powered up. Since configuration data is stored external to the PLD and must be transmitted through a configuration access port, the privacy of the design can easily be violated by an attacker who monitors the data on the configuration access port, e.g. by putting probes on board traces.

Current solutions and their disadvantages

Efforts have been made to encrypt designs, but it is difficult to make the design both secure from attackers and easy to use by legitimate users. The encryption algorithm is not a problem. Several encryption algorithms, for example, the standard Data Encryption Standard (DES) and the more secure Advanced Encryption Standard (AES) algorithm, are known for encrypting blocks of data. The process of cipher block chaining (CBC), in which each unencrypted data word is XORed with the previous encrypted data word before being encrypted, allows DES or AES to encrypt a serial stream of data, and these algorithms are therefore appropriate for encrypting a bitstream for configuring a PLD. A key used for encrypting the design must somehow be communicated in a secure way between the PLD and the structure that decrypts the design, so the design can be decrypted by the PLD before being used to configure the PLD. Then, once the PLD has been configured using the unencrypted design, the design must continue to be protected from unauthorized discovery. A November 24, 1997 publication by Peter Alfke of Xilinx, Inc. entitled "Configuration Issues: Power-up, Volatility, Security, Battery Back-up" describes several steps that can be taken to protect a design in an existing FPGA device having no particular architectural features within the FPGA to protect the design. Loading design configuration data into the FPGA and then removing the source of the configuration data but using a battery to maintain continuous power to the FPGA while holding the FPGA in a standby nonoperational mode is one method. However, power requirements on the battery make this method impractical for large FPGA devices. Nonvolatile configuration memory is another possibility.
If the design is loaded at the factory before the device is sold, it is difficult for a purchaser of the configured PLD device to determine what the design is. However, a reverse engineering process in which the programmed device is decapped, metal layers are removed, and the nonvolatile memory cells are chemically treated can expose which memory cells have been charged and thus can allow an attacker to learn the design. Further, nonvolatile memory requires a more complex and more expensive process technology than standard CMOS process technology, and takes longer to bring to market. It is also known to store a decryption key in nonvolatile memory in a PLD, load an encrypted bitstream into the PLD and decrypt the bitstream using the key within the PLD. This prevents an attacker from reading the bitstream as it is being loaded into the PLD, and does retain the key when power is removed from the PLD. Such an arrangement is described by Austin in U. S. Patent 5,388,157. But this structure does not protect the user's design from all modes of attack. In addition to design protection, some users need data protection. They may have generated data within the PLD that should not be lost when the PLD loses power. It is desirable to protect such data. There remains a need for a design protection method that is convenient, reliable, and secure.

SUMMARY OF THE INVENTION

The invention provides several structures and methods for protecting a PLD from unauthorized use and data loss. If the PLD is configured by static RAM memory that must be loaded on power-up, the configuration data must be protected as it is being loaded into the device. As in the prior art, this is accomplished by encrypting the configuration data for storing it in a memory outside the integrated circuit device, loading one or more decryption keys into the PLD and maintaining the keys in the PLD when powered down, including a decryption circuit within the PLD that uses the key to decrypt the configuration data, generating decrypted configuration data within the PLD and configuring the PLD using the decrypted configuration data. For additional security, rather than using nonvolatile memory to preserve keys, the invention preferably uses a battery connected to the PLD to preserve the key when power is removed from the PLD. Whereas it is possible to remove a PLD storing keys in nonvolatile memory, decap the PLD and observe which of the nonvolatile bits are programmed to logic 1 and which are programmed to logic 0, it is believed that it is very difficult to determine the contents of keys stored only in static memory cells since power must be maintained to the memory cells storing the keys in order for the keys to even be stored, and the PLD would have to be decapped, delayered, and probed while operating power is continuous to the PLD.

Ways an attacker can steal a design once loaded into a PLD

If a key does not offer sufficient security, an attacker may break the encryption code and determine the value of the key. The well-known Data Encryption Standard (DES) used a 56-bit encryption key, and has been broken in a few hours by a sophisticated computer to reveal the key. DES is described by Bruce Schneier in "Applied Cryptography Second Edition: protocols, algorithms, and source code in C", copyright 1996 by Bruce Schneier, published by John Wiley & Sons, Inc., at pages 265-278.
If it is desirable to use such a well known encryption standard, then in order to increase security, the configuration data may be encrypted several times using different keys each time, thus strengthening the encryption code by about 2^56 each time the encryption is repeated. Or it may be encrypted using a first key, decrypted using a second key, and encrypted using a third key, a combination that is part of the triple DES standard. Other encryption algorithms may also be used, and it is not necessary to keep the algorithm secret since the security resides in the key. When the encryption method is symmetrical, the same keys used for encryption are stored in the PLD and used in reverse order for decryption. In a PLD offering multiple keys, if the number of keys to be used and the addresses of all keys were provided in an unencrypted bitstream, an attacker might be able to attack the keys one at a time and more easily determine the key values. To avoid such attack, additional security is achieved by storing within the keys, not the bitstream, an indication of how many keys are to be used and whether a key is the last key of a set or whether more are to follow. If the PLD offers the option of reading back the bitstream after it has been loaded into the PLD, another method that can be used by an attacker is to read back this bitstream. To avoid this method of attacking the design, in one embodiment, a PLD that offers readback and also offers encryption includes the ability to disable the readback feature when encryption has been used. In another embodiment, the PLD that offers the ability to read back encrypts the configuration data before it is read back. Additionally, some PLDs offer the option of partial configuration (where several configuration addresses are specified for loading several portions of a design) and partial reconfiguration (where an existing design is not erased before new design data are loaded). If the PLD offers these options, an attacker could partially reconfigure a PLD to make successive portions of the design visible, and probably learn the whole design. To avoid such an attack, in one embodiment, partial configuration and reconfiguration of PLDs loaded with encrypted designs are disallowed. In another embodiment, several configuration addresses can be specified, but the addresses are encrypted. Yet another mode of attack is to try to flip a bit that indicates the security status of the PLD. Lowering or raising the operating voltage, changing the temperature, and applying noise to certain ports come to mind. To protect against such bit-flipping, when the PLD is operating with a secured bitstream, a secure-mode flag is set, and in one embodiment, if this flag becomes unset, all configuration data is erased. In another embodiment that does not allow for reconfiguration while the device is still operating, the configuration data is erased before any bitstream is sent. Another mode of attack is to relocate portions of the encrypted bitstream so that when they are unencrypted they are placed into visible portions of the PLD not intended by the designer. To prevent this relocation, address information is used in the encryption and decryption processes so that sending a portion of an encrypted bitstream to a different PLD location from that intended by the designer will cause it to decrypt differently into data with no meaning. Cipher block chaining (CBC) is one effective means of achieving this result.
In cipher block chaining, the encrypted data packet (block) is combined using the XOR function with the next data block before the next block is encrypted, thus the encrypted data for each data block depends on every block that preceded it and on the order of those blocks. Identical blocks of data will encrypt to different values depending on the value of the data blocks that preceded them. This way, if the order of the blocks is changed, the bitstream will not decrypt correctly because the place where the encrypted bitstream is rearranged will scramble subsequent data. Further, the initial CBC value can be modified to incorporate the address of the data to force the decrypted data to be placed at a specific location in order to decrypt correctly. Alternatively, if the PLD allowed part of a design to be encrypted and part to be unencrypted, the attacker could add an unencrypted portion to the encrypted portion that would read out information about the encrypted portion of the design. Thus, additional security is achieved by permitting the design to be totally encrypted or totally unencrypted, but not to be mixed. Further to this, in one embodiment, when data are being encrypted, additional security is provided by allowing only a single full-chip configuration following a single starting address for the configuration data. Further, in order to allow convenient testing and debugging and to allow the PLD manufacturer to communicate freely with its customers (the designers who produce the designs for configuring the PLD), the PLD has both encrypted and unencrypted modes of operating, and when operating in the encrypted mode, parts of the configuration bitstream that control loading of the configuration data into the PLD are still not encrypted. As another mode of attack, if the PLD manufacturer gives information freely about the configuration bitstream format, including header information and addresses for loading configuration data, and gives information about the encryption method used, encrypting this well known information would expose the encryption key to possible discovery. Such exposure is avoided by encrypting only the actual configuration data and leaving control information unencrypted. If the PLD manufacturer allows the key memory to be used in both secure and non-secure modes, an attacker could simply learn the keys by placing the key memory into non-secure mode and reading out the keys. To avoid such attack, the PLD manufacturer includes a circuit that causes all keys plus any configuration data loaded into the PLD to be erased when the key memory is moved to non-secure mode.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 shows functional relationships in a prior art FPGA.
Figs. 2a, 2b, 2c, and 2d show bitstream format and commands that can be included in a prior art bitstream.
Fig. 3 shows functional relationships in an FPGA according to one embodiment of the present invention.
Figs. 4a, 4b, 4c, and 4d show bitstream format and commands that can be included in a bitstream of the present invention.
Figs. 5a and 5b show example unencrypted and encrypted bitstreams.
Fig. 6 shows configuration logic 29 and the lines in bus 27 and bus 28 leading to decryptor 24.
Fig. 7a shows the modified starting value for outer cipher block chaining with triple encryption used in one embodiment of the invention.
Fig. 7b shows the corresponding starting value and decryption process used with Fig. 7a.
Fig. 8 shows flow of the operations for processing a bitstream.
Fig. 9 shows a state machine implemented by decryptor 24 to evaluate key order.
Fig. 10a shows the structure of key memory 23 of Fig. 3.
Fig. 10b shows the structure of the memory cells of Fig. 10a.
Fig. 11 shows the steps performed by control logic 23a of Fig. 10a to erase keys when made non-secure.
Fig. 12 shows in more detail the battery supply switch of Fig. 10a.
Figs. 13 and 14 show the level shift circuit and voltage detection circuit of the battery supply switch of Fig. 12.
Fig. 15 shows a state machine for erasing a design when a secure mode is exited.
Fig. 16 shows a block diagram of elements for loading configuration memory and reading back configuration, including lines disabled when encryption is present.

DETAILED DESCRIPTION

Fig. 1 shows a prior art structure for an FPGA 10. The FPGA includes programmable logic 11, typically comprising (1) logic blocks with lookup table combinatorial logic function generators, flip flops for storing lookup table outputs and other values, and multiplexers and logic gates for enhancing the logic ability of the programmable logic, (2) routing lines and programmable interconnection points for routing signals around the FPGA, and (3) input/output blocks for driving signals between the routing lines and the external pins of the FPGA. The FPGA also includes configuration memory 12 for turning on routing transistors, controlling multiplexers, storing lookup tables and controlling the input/output blocks, all of this for the purpose of configuring the FPGA to perform the function desired by the designer(s). Bus 16 connects configuration memory 12 to programmable logic 11 and is typically a distributed set of control lines located throughout the FPGA. Some Xilinx products (e.g. XC6200) have included a bus 17 by which programmable logic 11 causes configuration logic 14 to send programming information to configuration memory 12. Such a structure is described by Kean in U. S. Patent 5,705,938. FPGA 10 further includes a JTAG logic block 13 for interfacing with JTAG port 20, especially intended for testing of the board in which the FPGA will be placed. JTAG logic block 13 implements the IEEE standard 1532, which is a superset of the IEEE standard 1149.1. JTAG allows debugging of a design at the board level. Finally FPGA 10 includes configuration logic 14 for responding to a configuration bitstream from external source 15 on configuration access port 21 and for interfacing with JTAG logic block 13. The bitstream on configuration access port 21 is treated as words, in one embodiment 32-bit words. Several of the words, usually at or near the beginning of the bitstream, are used for setting up the configuration process and include, for example, length of a configuration memory frame, and starting address for the configuration data. Bus 19 allows communication between configuration logic 14 and JTAG logic block 13 so that the JTAG port can be used as another configuration access port. Bus 18 allows communication between configuration logic block 14 and configuration memory 12. In particular, it carries addresses to select configuration frames in memory 12, control signals to perform write and read operations, and data for loading into configuration memory 12 or reading back from configuration memory 12. Configuration Logic block 14 receives instructions and data, and processes the data according to the instructions. These instructions come into configuration logic 14 as a bitstream. An instruction, or header, is usually followed by data to be acted upon.
Fig. 2a shows an example bitstream structure. Header A specifies an action and specifies that a single word, DataA, will follow. Header B specifies an action and in this case specifies that 4 words of data will follow to be acted upon. Fig. 2b shows the default format (format type 001) for a 32-bit header word in the bitstream used in the Virtex (R) devices available from Xilinx, Inc. (Virtex is a registered trademark of Xilinx, Inc., assignee of the present invention). This format includes three bits to indicate the format type (001), two bits to specify an op code, 16 bits for a configuration logic register address, and 11 bits for a word count. The op code can designate a read operation, a write operation, or no operation. For example, 00 can designate no operation, 01 can designate read and 10 can designate write. The 11 bits for word count can specify 2^11 words or 2048 words. As shown in Fig. 2c, if the word count is greater than this, the word count bits in format type 001 are set to 00000000000 and the header of format type 001 is followed by a header of format type 2. Format type 2 uses 27 bits to specify word count, and can thus specify 2^27 words, or about 134 million words. Fig. 2d shows the kinds of control information that can be loaded into the registers of Configuration Logic 14 by headers for a Virtex bitstream. For example, a header (of format 001) having the configuration logic register address 0000 specifies that the next 32-bit data word should be loaded into the cyclic redundancy check (CRC) register. (Virtex devices use a 16-bit cyclic redundancy check value so some bits will be padded with 0's.) If the header includes an address 0001, the next data will be loaded into the Frame Address register in order to specify a frame (column) in configuration memory 12 to receive or provide data. The Configuration Logic Register address (16 bits) shown in Fig. 2b provides the 4-bit values shown in the left column of Fig. 2d that select one of the registers in configuration logic 14 (Fig. 1) into which to place the next 32-bit data word. The Frame Length register (address 1011) specifies the length of the frame into which the configuration data will be loaded. (Frame length, or column height, depends upon the size of the PLD. Larger PLDs usually have taller columns or longer frames. Specifying the frame length in the bitstream and storing the frame length in a register rather than providing a different structure in the PLD for placing the data words into frames allows the internal configuration logic to be identical for PLDs of different sizes.) For readback, a read command is placed in the op code field and the Frame Data Output register is addressed, followed by a Word Count (using Command Header Format 2 if necessary). The specified number of words is read back from configuration memory 12, starting at the address specified in the Frame Address register, and shifted out on either configuration access port 21 or JTAG port 20. (Readback data is returned to the port that issued the readback instruction.) Specifying a word count in a bitstream header or pair of headers (Figs. 2b and 2c) sets a counter that counts down as the data words are loaded. For many configuration logic register addresses the word count is 1. But if the bitstream header has a configuration logic address of 0010 or 0011 to indicate configuration data are being loaded in or read back, the word count will be much larger. This is when header format 2 of Fig. 2c is used.
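Given the field layout just described for Figs. 2b and 2c (a 3-bit type, 2-bit op code, 16-bit register address, and 11-bit word count in a type 1 header; a 27-bit word count in a type 2 header), a short sketch of unpacking such 32-bit header words follows. The function name and the exact bit positions are assumptions, since the figures rather than the text define the precise layout.

```python
# Sketch of unpacking the 32-bit Virtex-style header words described above.
# Field widths follow the text (type 1: 3-bit type, 2-bit op code, 16-bit
# register address, 11-bit word count; type 2: 27-bit word count); the exact
# bit positions are an assumption about the figures' layout.

OP_NAMES = {0b00: "no-op", 0b01: "read", 0b10: "write"}

def parse_header(word: int) -> dict:
    header_type = (word >> 29) & 0b111
    if header_type == 0b001:  # type 1 header
        return {
            "type": 1,
            "op": OP_NAMES.get((word >> 27) & 0b11, "reserved"),
            "register": (word >> 11) & 0xFFFF,
            "word_count": word & 0x7FF,  # 11 bits
        }
    if header_type == 0b010:  # type 2 header extends the word count
        return {
            "type": 2,
            "op": OP_NAMES.get((word >> 27) & 0b11, "reserved"),
            "word_count": word & 0x7FFFFFF,  # 27 bits
        }
    raise ValueError(f"unknown header type {header_type:03b}")

# A type 1 write of one word to register 0000 (the CRC register):
example = (0b001 << 29) | (0b10 << 27) | (0b0000 << 11) | 1
print(parse_header(example))
```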
Data loaded into configuration memory 12 through the frame data input register (address 0010) or read out through the frame data output register (address 0011) is called the design data because it causes the FPGA to implement a design or shows the status of a design. The other register data are control data since they control how the configuration logic behaves while the logic is being configured or read back. Further detail about configuration of Virtex devices can be found in the "Virtex Configuration Guide" published October 9, 2000 by Xilinx, Inc. (assignee of the present invention), 2100 Logic Drive, San Jose, CA 95124. Configuration logic 14 typically performs a cyclic redundancy check on a configuration bitstream coming in (see Erickson, U. S. Patent 5,321,704 or see pages 39 through 40 of the above referenced Virtex Configuration Guide), reads header bits indicating the frame length of the part being configured and the word count of the configuration data, reads address instructions identifying where to load configuration data, collects frames of configuration data and loads them into columns of configuration memory 12 indicated in the addresses. Configuration logic 14 also controls readback of configuration data and flip flop values from configuration memory 12 to an external location. In a Virtex FPGA available from Xilinx, Inc., readback can be done through either JTAG port 20 or through configuration access port 21. Configuration logic 14 can also receive configuration data from programmable logic 11. More information about prior art FPGA structures in which part of the FPGA configures another part of the FPGA can be found in Kean, U. S. Patent 5,705,938. More information about architectures of FPGAs similar to the Virtex architecture can be found in Young et al., U. S. Patent 5,914,616. The format of a bitstream used with the Virtex product available from Xilinx, Inc., assignee of the present invention, is described in an Application Note, XAPP138, entitled "Virtex FPGA Series Configuration and Readback" available from Xilinx, Inc., 2100 Logic Drive, San Jose, CA 95124, published Oct. 4, 2000.

PLD with Decryption

Fig. 3 shows a block diagram of an FPGA (a type of PLD) according to one embodiment of the present invention. Some elements are the same as shown in Fig. 1, are given the same reference numbers, and are not explained again. In addition, Fig. 3 includes an expanded configuration logic unit 29, a decryptor 24 and a key memory 23. Fig. 3 shows an embodiment in which key memory 23 is loaded on bus 25 from JTAG access port 20. In other embodiments, key memory 23 is loaded through another port. Bus 25 carries data, addresses, and control signals to perform write and read operations and allows programming of the decryption keys from JTAG port 20. In one embodiment, bus 26 allows programming of the keys from the configuration port. In another embodiment, bus 26 is eliminated. In yet another embodiment, bus 26 is present and bus 25 is eliminated. In an embodiment described further herein, bus 26 carries security data from key memory 23 to configuration logic 29. In one embodiment, bus 27 carries encrypted configuration data from configuration logic 29 to decryptor 24 and carries decrypted configuration data back to configuration logic 29. Bus 28 allows decryptor 24 to access the keys for decrypting data.
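To make the data flow through decryptor 24 concrete, here is a schematic sketch of CBC-style decryption as described above, with the block cipher left abstract. The function `decrypt_block`, the integer block representation, and the key handling are simplifying assumptions rather than details of the actual decryptor.

```python
# Schematic sketch of the cipher block chaining (CBC) decryption that a
# decryptor such as decryptor 24 could apply to the encrypted data words.
# decrypt_block stands in for the DES/AES core; the starter value is the
# CBC initial value loaded from the bitstream (see the register at address
# 1100 in Fig. 4d). Everything here is a simplified illustration.

from typing import Callable, Iterable, Iterator

def cbc_decrypt(
    ciphertext_blocks: Iterable[int],
    key: int,
    starter_value: int,
    decrypt_block: Callable[[int, int], int],
) -> Iterator[int]:
    previous = starter_value  # may incorporate the target address, per the text
    for block in ciphertext_blocks:
        # Each decrypted block is XORed with the previous encrypted block,
        # so reordering or relocating blocks scrambles the decrypted output.
        yield decrypt_block(block, key) ^ previous
        previous = block
```

Because each output block depends on the preceding encrypted block and on the starter value, relocating a portion of the encrypted bitstream decrypts into meaningless data, which is the relocation defense described above.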
When the structure of Fig. 3 is being loaded with encrypted data, an attacker who monitors the bitstream as it is being loaded receives only the encrypted bitstream and can not learn the user's design by this method.
Partially Encrypted Bitstream
According to another aspect of the invention, the bitstream comprises two portions: a data portion representing the user's design, which can be encrypted or not, and a control portion controlling loading of the bitstream (for example, giving addresses of columns in the PLD into which successive portions of the bitstream are to be loaded, providing a cyclic redundancy check (CRC) code for checking reliability of the loading operation, and providing a starter number for cipher block chaining (CBC), a technique that prevents a "dictionary attack" in which the decrypted data can be deduced from the frequency of occurrence of the encrypted data). In a preferred embodiment of the invention, the data portion may be encrypted but the control portion is unencrypted. This provides additional security because the PLD manufacturer needs to describe freely the control features of the bitstream, and if this relatively well known control information were encrypted, an attacker might be able to decrypt this information and use it to decrypt the entire bitstream. Further, keeping the control portion of the bitstream unencrypted makes it easier for the PLD to use the information. In another embodiment, used when the order of addresses in which configuration data is loaded may be useful to an attacker in analyzing the design, the address of the configuration data is also encrypted, but other control information in the configuration bitstream remains unencrypted.
Bitstream Format
Figs. 4a-4d illustrate differences in bitstream format and registers of configuration logic 29 in comparison to the format and registers of configuration logic 14 of the prior art product shown in Figs. 2a-2d. As shown in Fig. 4a, the bitstream still includes header words followed by data words. In a typical configuration, several control data words will be loaded into registers before encrypted configuration data begins. Fig. 4a shows an example in which three header words Header A, Header B, and Header C are each followed by an unencrypted control data word (Data A, Data B, and Data C, respectively). (In an actual configuration, more than three control data words will likely be provided.) Next, Header D specifies that encrypted configuration data will follow and is followed by multiple words Data 1D, Data 2D, Data 3D, etc. of encrypted configuration data. These words have been shaded in Fig. 4a to emphasize that this data is encrypted. As shown in Figs. 4b and 4c, a fourth op code has been added. In addition to the values 00 for no operation and 01 and 10 for read and write without decryption, the new value 11 specifies that writing is to be with decryption. (It is not important what code or what method is used to specify that decryption is to be used, or even that it is specified through an op code. It is just important that optional encryption and decryption be allowed and indicated, so that designers can make use of this option.) In the embodiment of Fig. 4d, two new configuration logic registers are added. Shown at addresses 1100 and 1101 are the register for holding a cipher block chaining (CBC) starter value and the register for the address of the initial encryption key.
Optional Encryption
According to another aspect of the invention, a PLD can accept both encrypted and unencrypted data portions of the bitstream.
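A software model of this optional-decryption dispatch is straightforward. The sketch below is a minimal illustration, assuming the two-bit op codes just described; `decrypt` is a hypothetical stand-in for whatever decryption engine the device provides, not an interface from the patent.

```python
NOOP, READ, WRITE, WRITE_DECRYPT = 0b00, 0b01, 0b10, 0b11

def route_payload(op, words, decrypt):
    """Divert encrypted payloads through the decryptor; pass plain
    payloads straight through to configuration memory."""
    if op == WRITE_DECRYPT:
        return decrypt(words)   # op code 11: write with decryption
    if op == WRITE:
        return words            # op code 10: write without decryption
    return None                 # reads and no-ops carry no payload here

# Example: an identity "decryptor" makes the two write paths comparable.
print(route_payload(WRITE_DECRYPT, [0xDEADBEEF], lambda ws: ws))
```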
The control portion of the bitstream indicates whether the data portion of the bitstream is encrypted. If the data portion of the bitstream is encrypted, it is diverted within the PLD to a decryptor and, after decryption, is used to configure the PLD. If unencrypted, it is not diverted, and is used directly to configure the PLD. There are some occasions for which it is preferable not to encrypt the bitstream. Certain test activities used during debugging a design require reading back the configuration information. It is more straightforward to diagnose a configuration problem if an encryption step has not been performed (especially if the designer is trying to determine whether encryption has anything to do with the problem). Also, if several designers are writing code to be implemented in parts of the PLD and different parts of the PLD are to be configured at different times, it may be necessary to make all portions of the bitstream visible, and to allow the PLD to be partly reconfigured. Figs. 5a and 5b show example bitstream portions representing the same design, first unencrypted and then encrypted, to illustrate the differences between an unencrypted bitstream and an encrypted bitstream in one embodiment of the invention. An actual bitstream includes the 0's and 1's at the right of the figures and none of the text at the left. The text at the left is provided to explain the meaning of the bits to the right. These bitstream portions use the commands illustrated in Figs. 4b-4d. In order to emphasize the differences between the unencrypted version of Fig. 5a and the encrypted version of Fig. 5b, the differences are shown in bold. Looking at Fig. 5a, after a dummy word (a constant high signal interpreted as all 1's) and a sync word with a specified pattern of 1's and 0's, the next word is of type 001 with an op code of 10, has an address of 0000000000010000, and has a word count of 00000000001. Thus this word addresses the command register CMD and specifies that one word will be written there. Fig. 5a has been annotated to the left of the bitstream to indicate that this word is Type 1 and indicates to write 1 word to CMD. The following word, 111, is the data to be placed in command register CMD, and resets a CRC (cyclic redundancy check) register. (In a preferred embodiment, the PLD includes a circuit, not shown, such as described by Erickson in U. S. Patent 5,598,424, to calculate a CRC value from the bitstream as the bitstream is being loaded; this protects against glitches in the bitstream voltages that might cause incorrect bits to be loaded.) Next, a header word specifies that the format is again type 1, and it specifies to write 1 word to the frame length register FLR. The data word that follows, 11001, specifies the frame length (25 words). Similarly, several additional header and data words follow, including the header specifying the word to be written to the frame address register FAR. In this case, the following data word indicates data will start at address 0. Finally, after these registers have been loaded, a command comes to write data to the frame data input register FDRI, and since quite a bit of data will be written, the word count is given as 00000000000 and a header of type 2 specifies that 10530 words will be written to the FDRI register. This is the actual design data that causes the PLD to be configured. Thus the next 10530 words in the bitstream are design data.
Finally, to assure that data have been loaded correctly, the CRC value calculated by the device that originated the configuration data is loaded and compared to the CRC value that has been calculated by the PLD. Additional commands and data are loaded in order to indicate that configuration is complete and to move the PLD into operation mode. Fig. 5b is similar to Fig. 5a, and differs only where the data and annotations are shown in bold. In Fig. 5b, the data are encrypted, and additional commands are used to provide the initial key address and to write two words (64 bits) to the CBC (cipher block chaining) register. Next, a type 1 header includes the op code 11 and indicates that data will be decrypted before being written to frame data input register FDRI. A type 2 header follows, again with the op code 11, giving the instruction that 10530 words are to be decrypted and written to frame data input register FDRI. The 10530 encrypted data words then follow. Then the CRC word follows for confirming that the (encrypted) data were loaded correctly. Finally, the additional commands and data are sent, and place the PLD into operation mode if all is correct.
Decryption Process
Fig. 6 shows how optional decryption is accomplished in one embodiment. Fig. 6 shows the detail of configuration logic 29 and of buses 27 and 28 leading into decryptor 24. Bus 27 includes the following:
* the 3-bit initial decryption key address "Init_key_addr," taken from register address 1101 (Fig. 4d) in configuration logic 29;
* the 64-bit modified cipher block chaining value "modCBC," formed by replacing the lower order bits of the 64-bit CBC value taken from register address 1100 (Fig. 4d) in configuration logic 29 with the 22-bit Frame Address value specified in register 0001;
* the 64 lines "Encrypted_data" for loading encrypted data, taken from the bitstream;
* the 64 lines "Decrypted_data" for returning the decrypted data produced by decryptor 24 to configuration logic 29;
* a line for the signal "Enc_data_rdy" that tells decryptor 24 that data is on the "Encrypted_data" lines and that decryptor 24 can start decrypting;
* a line for the signal "Dec_data_rdy" that tells configuration logic 29 that decryption of a 64-bit word is complete and the result is available on the "Decrypted_data" lines; and
* a "Bad_key_set" line used by decryptor 24 to cause configuration logic 29 to abort the configuration and set a status register accordingly when the keys have not been used as specified, for example, by the bits in key memory that designate whether the keys are to be first, middle, or last of a set. In the embodiment shown in Fig. 4d, the status register is at address 0111, and the Bad_key_set error is indicated by storing a logic 1 in one of its bits.
Bus 28 is comprised of the following:
* 3 lines for the key address, which is initially the key address provided from bus 27, but which is updated each time a new key is used;
* 56 lines for the decryption key; and
* 2 lines for indicating whether the decryption key is the first, middle, last, or only key to be used.
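The modCBC substitution described above is a simple bit-field splice. A minimal sketch, assuming a 64-bit CBC starter value and a 22-bit frame address as stated in the text:

```python
FRAME_ADDR_BITS = 22
LOW_MASK = (1 << FRAME_ADDR_BITS) - 1
MASK64 = (1 << 64) - 1

def mod_cbc(init_cbc: int, frame_addr: int) -> int:
    """Replace the low 22 bits of the 64-bit CBC starter value with the
    frame address, producing the modCBC value fed to the decryptor."""
    assert 0 <= frame_addr <= LOW_MASK and 0 <= init_cbc <= MASK64
    return (init_cbc & ~LOW_MASK & MASK64) | frame_addr

# Changing the frame address changes modCBC, so relocated data will not
# decrypt correctly.
print(hex(mod_cbc(0x0123_4567_89AB_CDEF, 0x00_0000)))
print(hex(mod_cbc(0x0123_4567_89AB_CDEF, 0x00_0001)))
```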
Preventing Design Relocation
One potential attack on a design in an encrypted bitstream is to change the frame address register (starting address) in the encrypted bitstream so that when the bitstream is decrypted it is loaded into a portion of the FPGA that is visible when the FPGA is being used. In some designs the content of the block RAM is visible. In all designs the configuration of the input/output ports is visible, and therefore those configuration bits can be determined. Thus, if successive portions of the design were moved to visible portions of the FPGA, then even though the FPGA did not function properly, an attacker could, by repeated relocation, learn the contents of the unencrypted bitstream. To prevent design relocation, in one embodiment, an initial value used by the cipher block chaining method used with the DES standard is modified. Figs. 7a and 7b show the encryption and decryption portions of a triple DES algorithm, respectively, as modified according to the invention. The standard cipher block chaining method starts the encryption process by XORing a starting number (which can be designer-supplied or randomly generated) with the first word of data to be encrypted. According to the invention, part of the random number is replaced by address information, in the present example the 22-bit address of the first frame into which data will be loaded in configuration memory 12. The starter CBC value, a 64-bit number, has its least significant bits, labeled x, replaced by the frame address, labeled y, to produce a modified 64-bit value that depends upon the address into which data will be loaded. This modified CBC value is XORed with the first word of configuration information, Word1. Then the encryption algorithm is used to produce the first encrypted word, Encrypted Word1, which is placed into the bitstream. Fig. 7a shows a triple encryption algorithm with outer cipher block chaining, comprising an encryption step enc1 using the first key, followed by a decryption step dec2 using the second key, followed by an encryption step enc3 using the third key. This first encrypted word Encrypted Word1 is XORed with the second unencrypted word Word2, and the encryption process is repeated to produce Encrypted Word2. The XOR chaining continues until all configuration data have been encrypted. As shown in Fig. 7b, the PLD must perform the reverse process to derive the decrypted words. For the above encryption sequence, the decryption sequence would be decryption step dec1 using key 3, then encryption step enc2 using key 2, then decryption step dec3 using key 1. Importantly, generating Decrypted Word1 correctly requires that the same frame address be used for both encryption and decryption. The PLD, not the bitstream, generates the modified CBC value from the frame address stored in the frame address register, which is also used to specify the frame of configuration memory 12 into which configuration data are to be loaded. So if an attacker changes the frame address into which the data are to be loaded, the modified CBC value changes accordingly, and the configuration data are not correctly decrypted. The XOR step produces the original data that was in the designer's bitstream before it was encrypted: Original Word1 = Decrypted Word1, for example. This decrypted configuration data is sent on bus 27 (Fig. 3) to configuration logic 29.
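The chaining and the reversed key order can be modeled compactly. The sketch below is illustrative only: a toy invertible 64-bit block function stands in for the DES rounds (real hardware would implement DES as described in Schneier), and the modCBC value serves as the initialization vector.

```python
MASK64 = (1 << 64) - 1

def rotl64(x, n): return ((x << n) | (x >> (64 - n))) & MASK64
def rotr64(x, n): return ((x >> n) | (x << (64 - n))) & MASK64

# Toy stand-ins for the DES encrypt/decrypt primitives (NOT DES).
def E(key, block): return rotl64(block ^ key, 17)
def D(key, block): return rotr64(block, 17) ^ key

def encrypt_cbc_ede(words, k1, k2, k3, modcbc):
    """Outer-CBC triple encryption: enc1, dec2, enc3, as in Fig. 7a."""
    out, chain = [], modcbc
    for w in words:
        c = E(k3, D(k2, E(k1, w ^ chain)))
        out.append(c)
        chain = c                      # chain on the encrypted word
    return out

def decrypt_cbc_ded(cipher, k1, k2, k3, modcbc):
    """Reverse process of Fig. 7b: dec with key 3, enc with key 2, dec
    with key 1, then XOR with the previous encrypted word (or modCBC)."""
    out, chain = [], modcbc
    for c in cipher:
        out.append(D(k1, E(k2, D(k3, c))) ^ chain)
        chain = c
    return out

words = [0x1111, 0x2222, 0x3333]
enc = encrypt_cbc_ede(words, 5, 6, 7, modcbc=0xABCD)
assert decrypt_cbc_ded(enc, 5, 6, 7, modcbc=0xABCD) == words
# A different modCBC (a changed frame address) garbles the result:
assert decrypt_cbc_ded(enc, 5, 6, 7, modcbc=0xABCE) != words
```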
Configuration Logic 29
Configuration logic 29 includes the structures to support optional encryption as well as the structures to prevent design relocation and a single key attack. As shown in Fig. 6, configuration logic 29 includes a holding register 292, control logic 291, configuration registers (FDRI, FAR, CRC, and init CBC are shown), decryptor 24 interface multiplexers 294 and 295, 64-bit assembly register 297, and registers 298 and 299 (for interfacing with configuration access port 21). A 64-bit shift register 299 receives data from configuration access port 21, which can be a single pin for 1-bit wide data or 8 pins for 8-bit wide data. This data is loaded into 64-bit shift register 299 until register 299 is full. Then these 64 bits are preferably shifted in parallel into 64-bit transfer register 298. From there, multiplexer 296b alternately selects right and left 32-bit words, and multiplexer 296a moves the data 32 bits at a time either into holding register 292 or alternately into the High and Low portions of assembly register 297, as controlled by control line M. When loading of the bitstream begins, line M and a clock signal (not shown) cause multiplexers 296a and 296b to move data from 64-bit transfer register 298 to holding register 292. From there these words are applied to control logic 291. If the word is a header, control logic 291 interprets the word. If the op code indicates the data to follow are to be written unencrypted, control logic 291 places an address on bus G to select a register, places a signal on line L to cause multiplexer 294 to connect bus B to bus D, and applies the following word on bus B. On the next clock signal (clock signals are not shown), the data on bus D are loaded into the addressed register. All registers shown in Fig. 4d can be loaded this way. The init CBC register for loading the initial cipher block chaining value is a 64-bit register and receives two consecutive 32-bit words, as shown in Fig. 5b and discussed above. A modified CBC value formed from (1) the original CBC value stored in the init CBC register and (2) the initial frame address stored in the FAR register is available to decryptor 24. In one embodiment, the initial frame address in the FAR register uses no more than 32 bits, while the init CBC value uses 64 bits. In the embodiment of Fig. 6, the 64-bit bus providing the modified CBC value includes 22 bits from the frame address register FAR and 42 bits from the init CBC register. Important to the security provided by the present invention, note that this value depends upon where configuration data will be loaded. If an attacker were to try to load encrypted data into a different place by changing the contents of the FAR register, the modCBC value fed to decryptor 24 would also change. When the op code command to decrypt a number of words of configuration data is received by control logic 291, the decryption process begins. Control line M causes multiplexer 296a to apply data from transfer register 298 to bus A leading to assembly register 297. Control bus H alternately connects bus A to the High[31:0] and Low[31:0] portions of encrypted data register 297 to form a 64-bit word to be decrypted. Control logic 291 then asserts the Enc_data_rdy signal, which causes decryptor 24 to decrypt the data in register 297. To perform the decryption, decryptor 24 applies a key address KeyAddr on bus 28 to key memory 23 (Fig. 3). This causes key memory 23 to return the 56-bit key at that address on the 56-bit Key lines. It also causes key memory 23 to return two additional bits, "Order," also stored in the key data at that address. For the first decryption key, these two bits must indicate that this is a first key or an only key. If not, decryptor 24 asserts the Bad_key_set signal, which causes configuration logic 29 to abort the configuration operation. If these two bits indicate the key is a first or only key, decryptor 24 performs the decryption, using for example the well known DES algorithm (described by Schneier, ibid). If the key is not an only key, decryptor 24 then gets the key at the next address in key memory 23, and checks to see if the two Order bits indicate it is a middle or last key.
If not, the Bad_key_set signal is asserted and the configuration is aborted. If so, decryption is performed. If it is a middle key, another round of decryption is done. If it is the last key, decryptor 24 forms the XOR function of the decrypted word and the value modCBC. Decryptor 24 then places the resultant value on the 64-bit Decrypted_data bus and asserts the Dec_data_rdy signal. This causes control logic 291 to place signals on control line K to cause multiplexer 295 to break the 64-bit word into two sequential 32-bit words. Control logic 291 places a signal on line L to cause multiplexer 294 to forward the 32-bit words of decrypted data to bus D. Control logic 291 also places address signals on bus G to address frame data input register FDRI. The next clock signal moves the decrypted data to bus E, where it is loaded into the frame register and, when the frame register is full, eventually shifted into configuration memory 12 at the address indicated in the FAR register. The modCBC value is used only once in the decryption operation. Subsequent 64-bit words of encrypted data are decrypted and then chained, using the previous word of encrypted data for the XOR operation. (The value stored in the FAR register is also used only once, to select a frame address. Subsequently, the frame address is simply incremented every time a frame is filled.)
Flow of Operations
Fig. 8 indicates the flow of operations performed by configuration logic 29 and decryptor 24. Configuration logic 29 begins at step 70 by loading the bitstream headers and placing the corresponding data into the configuration logic registers shown in Fig. 4b, including determining bitstream length. At step 71, as a further part of the start-up sequence, configuration logic 29 reads the first configuration memory address. Recall that the bitstream format includes an op code that indicates whether encryption is being used. Step 72 branches on the op code value. If encryption is not used, the process is shown on the left portion of Fig. 8. If encryption is used, the process is shown on the right portion of Fig. 8. For no encryption, at step 73, configuration logic 29 sets a counter equal to the bitstream word count (see Fig. 4c). At step 74, 32 bits (1 word) of configuration data are sent to the addressed frame of configuration memory 12. If step 75 indicates the counter is not finished, then at step 76 the counter is decremented and the next word of configuration data is sent to configuration memory 12. When the counter has finished, configuration logic 29 performs cleanup activities, including reading the final cyclic redundancy value to compare with a value at the end of the bitstream to determine whether there were any errors in loading the bitstream. If step 72 indicates the bitstream is encrypted, the counter is loaded with the word count, and at step 81 the process loads the initial key address from key address register 293 (Fig. 6) into decryptor 24. At step 82, two words (64 bits) of encrypted configuration data are loaded into decryptor 24. At step 83 the addressed key is loaded into decryptor 24. In one embodiment, a 64-bit number is loaded into decryptor 24. This 64-bit number includes a 56-bit key, two bits that indicate whether it is the first, middle, last, or only key, and some other bits that may be unused, used for parity, or used for another purpose. In another embodiment, the 64-bit key data includes a single bit that indicates whether it is or is not the last key.
In yet another embodiment, the 64-bit key data includes an address for the next key, so the keys do not need to be used in sequential order. In another embodiment, extra bits are not present and the key data uses less than 64 bits. In yet another embodiment, the bitstream rather than the key indicates how many keys are to be used, but this is believed to be less secure, because an attacker can see how many keys are used and perform a single key attack, breaking one key at a time, whereas using the keys themselves to indicate how many keys are to be used does not give this information to an attacker. At step 84, decryptor 24 decrypts the 64-bit data with the 56-bit key using, for example, the DES algorithm. The DES algorithm is described in the above mentioned book by Bruce Schneier at pages 265 to 278. Other encryption algorithms may also be used, for example the Advanced Encryption Standard (AES). Other algorithms may require more key bits; for example, AES requires a key of 128 to 256 bits. Step 85 determines whether more keys are to be used. The two bits that indicate whether the key is a first, middle, last, or only key are examined to determine whether this is the last key, and if not, the key address is incremented and decryptor 24 addresses the next key in memory 23. After the last key has been used, at step 87, the modified CBC value, shown in Fig. 6 as a 64-bit value formed by combining registers FAR and init CBC, is XORed with the decrypted value obtained in step 84. In one embodiment, 22 bits of the 64-bit random number loaded into the CBC register are replaced with the frame address of the beginning of the bitstream. The goal of the encryption process is to have every bit of the 64-bit encrypted value be a function of all previous bits plus the key. The goal of combining the CBC value with the first address is to cause the decrypted values to change if the bitstream is loaded into a different address from the intended starting address. Step 87 achieves both goals. The new CBC value is then stored. Storage may be in the FAR and init CBC registers shown in Fig. 6, or in another register located in decryptor 24. At step 88, this decrypted configuration data is sent on bus 27 (Fig. 3) to configuration logic 29. Configuration logic 29 calculates an updated cyclic redundancy check value to be compared with the cyclic redundancy value stored in the CRC register at the end of the loading process. If configuration logic 29 has been set to use encryption, a multiplexer in configuration logic 29 forwards this decrypted configuration data to the addressed column of configuration memory 12. At step 89 the counter is checked and, if not finished, at step 96 the counter is decremented and the process returns to step 82, where the next 64 bits (2 words) are loaded from the bitstream. Finally, when step 89 indicates the counter is finished, at step 90 a CRC (cyclic redundancy check) value in the bitstream is compared with a CRC value calculated as the bitstream is loaded. If the values agree, configuration is complete and the FPGA goes into operation. If the values do not agree, a loading error has occurred and the entire configuration process is aborted.
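Taken together, the right-hand path of Fig. 8 amounts to a counted loop over 64-bit blocks. A minimal sketch, assuming `decrypt_block` applies the full key set as described above (a hypothetical callable, not a documented interface):

```python
def load_encrypted(enc_blocks, decrypt_block, modcbc):
    """Model of Fig. 8, steps 81-89: decrypt each 64-bit block, XOR the
    first with modCBC and later ones with the previous encrypted block,
    and hand the plaintext to configuration memory."""
    frames, chain = [], modcbc
    counter = len(enc_blocks)                    # counter <- word count
    for c in enc_blocks:
        frames.append(decrypt_block(c) ^ chain)  # steps 82-87
        chain = c                                # new chaining value
        counter -= 1                             # step 96
    assert counter == 0                          # step 89: finished
    return frames
```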
Evaluating Key Order - Preventing Single Key Attack
Fig. 9 shows a state machine implemented by decryptor 24 to evaluate key order. The state machine remains in state S1 until the Enc_data_rdy signal is activated. This signal indicates decryption can begin, and the state machine moves to decision state Q1, where decryptor 24 applies the address specified by Init_key_addr on bus 27 to bus 28, reads back a key and a key order, and from the two bits of key order data determines whether the key is a first or only key. If not, decryptor 24 sends the Bad_key_set signal to control logic 291 and causes configuration logic 29 to abort the configuration. If the key is a first or only key, decryptor 24 goes to state S3, which decrypts the data. Then the state machine goes to decision state Q2, which determines whether the key is a last or only key. If so, decryption is complete, and at state S4 decryptor 24 returns the decrypted data to configuration logic 29. If not, in state S5, decryptor 24 increments the key address and gets the new key. The state machine asks question Q3 to determine whether the next key is a middle or last key. If not, state S2 causes the configuration to abort. If the key is a middle or last key, the state machine returns to state S3 to decrypt the data again. In another embodiment, in state S4 decryptor 24 also performs the step of XORing the decrypted data with a CBC value. The benefit of storing the key order within the keys is that an attacker can not implement a single key attack, because the attacker can not prevent decryptor 24 from using all the keys specified by key memory 23 (as intended by the designer) when performing decryption. It is not necessary to ask the second and third questions Q2 and Q3 to protect against an attacker using a single key attack, since the key order is stored within the key data inside the PLD. However, it is beneficial to the designer or board tester who loads the keys to ask all three questions, to make sure that each key has been labeled correctly when it is loaded. In one embodiment, decryptor 24 uses the triple DES standard with a decryption-encryption-decryption sequence, alternating the algorithm (only slightly) each time another key is used. Such a combination is in accordance with the ANSI X9.52-1998 Triple DES standard. In another embodiment, decryption is used each time.
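In software, this state machine reduces to a short loop. The sketch below is illustrative; the two-bit order encodings are assumptions, and `decrypt_once` is a hypothetical stand-in for one pass through the decryption hardware.

```python
FIRST, MIDDLE, LAST, ONLY = range(4)   # order-bit encodings (assumed)

def run_key_set(key_mem, init_addr, block, decrypt_once):
    """Model of the Fig. 9 state machine: enforce first/middle/last/only
    key order, aborting (Bad_key_set) on any violation."""
    addr = init_addr
    key, order = key_mem[addr]
    if order not in (FIRST, ONLY):               # Q1
        raise RuntimeError("Bad_key_set: abort") # S2
    block = decrypt_once(key, block)             # S3
    while order not in (LAST, ONLY):             # Q2
        addr += 1                                # S5: next key
        key, order = key_mem[addr]
        if order not in (MIDDLE, LAST):          # Q3
            raise RuntimeError("Bad_key_set: abort")
        block = decrypt_once(key, block)         # back to S3
    return block                                 # S4: done

# Three keys labeled first/middle/last are all consumed; an attacker
# cannot stop the machine after the first one.
keys = {0: (5, FIRST), 1: (6, MIDDLE), 2: (7, LAST)}
print(hex(run_key_set(keys, 0, 0x1234, lambda k, b: b ^ k)))
```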
Key Memory 23
The circuit shown in Fig. 10a includes three components: battery supply switch 22, control logic 23a, and key registers 23b. Control logic circuit 23a and key registers 23b comprise key memory 23 of Fig. 3. In the embodiment of Fig. 10a, key registers 23b comprise six 64-bit words. Of course, other key memory sizes may alternatively be used. In other embodiments, there may be far more than six keys stored in key memory 23, and more than 3 bits are needed to give the address of the key to be used. The power supply for key registers 23b comes from battery supply switch 22 on line VSWITCH. When key memory supply voltage VCCI is insufficient or not present, battery supply switch 22 applies the battery backup voltage VBATT to the VSWITCH line so that VSWITCH carries a positive voltage. In this embodiment each key register has 64 memory cells. Each cell receives a write enable signal WE that, when high, causes data to be written to the cell and, when low, causes data in the cell to be held. Cells in one register have a common write enable signal WE. When the PLD supply voltage (different from VCCI) is absent, such that the WE signals are not actively driven, weak pull-down transistors such as T1 pull down the WE signal so that none of the key memory registers can be addressed and none of the memory cells are disturbed. In one embodiment, the JTAG port of a PLD is used to load decryption keys into the PLD. The memory cell supply voltage is at the device voltage level of VCCI during normal operation, and in one embodiment this level is between 3.0 and 3.6 volts. Signals applied to the JTAG port may be at several different voltages. Also, there may be several different internal voltages. Thus voltage translation is needed. This voltage translation is performed in the memory cells. Detail of a memory cell is shown in Fig. 10b. The latch comprising inverters I1 and I2 is powered by VSWITCH and is thus powered whether or not a device supply voltage VCCI is present. The WE signal and the inverted data signal data_b both operate at the 1.5 volt level. These signals drive NMOS transistors T4, T5, and T6, and, through inverter I3 (also using the 1.5 volt supply voltage), transistor T7. Fig. 10b shows that when WE is low, transistors T4 and T5 are off, and the content of the latch comprising inverters I1 and I2 is retained. When WE is high, one side of the latch comprising inverters I1 and I2 is pulled low, thus loading the new data into the latch. Control logic circuit 23a receives signals from JTAG bus 25 (also shown in Fig. 3). JTAG bus 25 includes control signals for writing, reading, and setting the secure mode, as well as data and address buses. This interface conforms to the IEEE 1532 JTAG standard. Before key memory 23 can be accessed through JTAG bus 25, the security status (bus 26) is placed in non-secure mode, which can be done using the ISC_PROGRAM_SECURITY instruction (see Fig. 10a) and applying logic 1 to bit 0 of the key data bus. Key memory 23 is written to and read (for verification) from JTAG bus 25 using the ISC_PROGRAM and ISC_READ instructions of the IEEE 1532 standard. Control logic 23a includes a decoder for decoding the 3-bit address signal ADDR from JTAG bus 25 to produce a low-going pulse on the addressed one of write strobe lines ws_b[5:0] if the ISC_PROGRAM instruction appears on JTAG bus 25, or a high signal on the addressed one of read select lines rsel[5:0] if the ISC_READ instruction appears on JTAG bus 25. One of the six 64-bit words can be read by applying a high signal to one of the six read select lines rsel[5:0], which causes read multiplexer 23d to place the selected word on the 64 output lines q[63:0]. Only one of the write select lines or read select lines is selected at one time. When no read select signal is asserted, a high park_low signal causes 64 transistors 23e to pull down the 64 lines q[63:0] and prevent these lines from floating. If key memory 23 is operating in non-secure mode, the 64-bit words can be read from key registers 23b to JTAG bus 25, where the values can be examined external to the FPGA. The FPGA can be tested in this non-secure mode by using 56 bits of a selected 64-bit word in registers 23b as the 56-bit key for DES decryption. In one embodiment, when key memory 23 is in non-secure mode, readback of a user's design is possible even though the design has been encrypted before loading. This allows the designer to test and debug even an encrypted design. Communication of the key security status is through bus 26 (see also Fig. 3). After values have been written into key registers 23b and verified with a read operation from bus 25, control logic 23a is placed into secure mode by using the ISC_PROGRAM_SECURITY instruction and applying logic 0 to bit 0 of the 64-bit key data bus, which is part of the IEEE 1532 standard. In the secure mode, no access to the keys is granted.
As shown in Fig. 11, to assure that an attacker can not return to the non-secure mode by using the ISC_PROGRAM_SECURITY instruction and then reading out the keys, if the security is eliminated (if the ISC_PROGRAM_SECURITY signal moves to the non-secure logic level), a state machine in control logic 23a erases all keys by writing zeros to all six words, one word at a time. This is done by: at step 110, putting zeros on the wdata[63:0] bus; at step 111, asserting the ws_b[0] signal (with a logic 0 value); at steps 112-117, successively strobing the remaining write strobe signals through ws_b[5], one at a time, before changing the security status at step 118 and entering the non-secure mode; and finally, at step 119, releasing the wdata[63:0] logic 0 values. Thus, any attempt to place battery backed up key memory 23 into a non-secure mode causes all values in key registers 23b to be erased. To communicate whether key memory 23 is in secure mode, control logic 23a sends a secure mode signal on bus 26 (which may be a single line) to configuration logic 29 to indicate that key memory 23 is operating in secure mode. If this signal switches to non-secure mode, configuration logic 29 clears the design from configuration memory 12. Note that an unencrypted bitstream may be loaded by configuration logic 29 into configuration memory 12 even though keys are stored in key registers 23b and key memory 23 is in a secure mode.
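The erase-before-unsecure ordering is the essential point of the Fig. 11 sequence. A minimal sketch, assuming a six-word key register file and a `set_security` callback standing in for the security status logic (both hypothetical names):

```python
def erase_keys_then_unsecure(key_regs, set_security):
    """Model of the Fig. 11 sequence: zero every key word first, and only
    then change the security status, so the keys are never readable."""
    wdata = 0                            # step 110: zeros on wdata[63:0]
    for i in range(len(key_regs)):       # steps 111-117: strobe each word
        key_regs[i] = wdata
    set_security("non-secure")           # step 118: change status last
                                         # step 119: release the data bus

regs = [0xDEADBEEF] * 6
erase_keys_then_unsecure(regs, lambda mode: print("security:", mode))
assert regs == [0] * 6
```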
Loading the Keys, Multiple Encryption Keys
Decryption keys must be loaded into the PLD before the PLD is put into a secure mode, where a user can not learn details of the design. In the embodiment shown in Fig. 3, the key or keys are loaded through a JTAG port 20. As a feature of the invention, the encryption keys are loaded through this JTAG port 20. It is expected that JTAG programmers will load the encryption keys during board testing. When the RAM for storing keys is in a non-secure mode, the user has full access to it and can read out both the keys and the design, even if the design has been encrypted. This is useful for the designer while testing the keys and the use of the keys. Then, once the designer is satisfied with the operation, he or she can send another instruction through the JTAG port and place the key memory into a secure mode. Once the key memory has been placed into secure mode, the keys can not be read out. Further, moving the key memory from secure to non-secure mode erases the keys by activating a circuit that starts up the memory initialization process. (Fig. 15, discussed below, shows a state machine for performing this function.) According to one aspect of the invention, more than one key may be used to encrypt the design. For example, if three keys are to be used, the bitstream is first encrypted using the first key, then the resulting encrypted bitstream is again encrypted using the second key, and finally the resulting doubly encrypted bitstream is again encrypted using the third key. This triply encrypted bitstream is stored, for example, in a PROM or flash memory on the printed circuit board that holds the PLD. For decryption, these keys are used in succession (in reverse order) to repeatedly decrypt the encrypted bitstream. Further to this, if more keys are stored in the PLD than are used for decrypting a particular design, the encrypted bitstream may include, in an unencrypted portion, an indication of how many keys are to be used and the address of the first key. Such an embodiment may make it easier for an attacker to decrypt the bitstream, because the attacker need only deal with one key at a time. Alternatively, the keys themselves may indicate whether they are the first, middle, last, or only keys. Thus the same PLD can at different times be programmed to perform different functions (configured with different designs), and information about the values of the different keys can be made available to only one or some of the designers. Thus a first designer may not learn about a second design even though both designs are implemented in the same PLD (at different times). Regarding Fig. 3, configuration logic 29 includes additional logic beyond configuration logic 14 of Fig. 1. As in the structure of Fig. 1, the bitstream on configuration access port 21 is treated as words, in one embodiment 32-bit words. Several of the words, usually at or near the beginning of the bitstream, contain header information, for example the length of the bitstream and the starting address for the configuration data. New to the bitstream of the present invention are an indication as to whether the bitstream is encrypted, and the address of a key for decrypting configuration data in the bitstream.
Battery Backed up Memory
Values stored in key memory 23 are preferably retained by a battery when power to the FPGA is removed. Further, memories other than the encryption key memory can also be backed up using a battery supply switch such as switch 22. In particular, a PLD can be manufactured in which the VSWITCH voltage supply is routed to all flip flops in the PLD, if the purpose is to preserve data generated by the PLD when the PLD is powered down. And if the purpose is to also preserve the configuration of the PLD when the PLD is powered down, configuration memory 12 (Fig. 3) may alternatively be powered from VSWITCH, though such an embodiment requires considerably more battery power than does powering just the flip flops in the PLD, and powering flip flops in turn requires more battery power than does powering a very small memory for storing a few encryption keys. Fig. 12 shows a structure for battery supply switch 22. In this embodiment, VBATT level shift circuit 31 allows the PLD to use different voltages for the battery and the main power supply; the purpose of the circuit is to deal with varying voltage levels. In one embodiment, battery supply switch 22 can handle VCCI voltages up to 3.6 volts, and switches to battery power when VCCI falls below about 1 volt. Battery voltage can be between 1.0 volts and 3.6 volts. Battery supply switch 22 includes four output driving P-channel transistors P0 through P3. Transistors P0 and P1 turn on and off together, as do transistors P2 and P3. The circuit includes two transistors for each leg instead of one in order to avoid any possibility that VCCI and VBATT will be connected together. Transistor P0 includes a parasitic diode (the p-n junction between the drain and substrate) that can conduct current upward in the figure even when the transistor is off. To prevent such current flow, transistor P1 is added and has its substrate connected to its drain so that parasitic diode conduction can only be downward. A similar arrangement is made with transistors P2 and P3. Thus there is no possibility that current will conduct from VBATT to VCCI or from VCCI to VBATT. Inverters 33 and 34 are powered from the VSWITCH voltage, so they are always operational even when VCCI is off.
Transistor P4 is a resistor, always on, and provides protection against electrostatic discharge. Most of the time, the structures controlled through transistor P4 do not draw current, so there is usually no voltage drop across transistor P4. Fig. 13 shows one embodiment of VBATT level shift circuit 31. The output voltage at terminal OUT is controlled by signals IN and INB. These signals are generated by inverters 33 and 34, which derive their supply voltage from the VSWITCH node. Therefore, if VSWITCH is supplied by VBATT, one of signals IN and INB will be at voltage VBATT and the other will be at ground. However, if VSWITCH is supplied by VCCI, one of IN and INB will be at the VCCI voltage level. If IN is at VCCI and INB is at ground, transistor 45 will be on and transistor 46 will be off. The gate of P-channel transistor 43 will be low, and transistor 43 will be on, pulling the input of inverter 47 to VBATT. The output of transistor 48 will also be at VBATT. Returning to Fig. 12, a voltage level VBATT at the gate of transistor P0 will positively turn off transistor P0. Fig. 14 shows VCCI detect circuit 32. VCCI detect circuit 32 determines when the voltage on line VSWITCH will be switched to the battery and back to VCCI. This embodiment of circuit 32 is essentially a string of five inverter stages I1 through I5. Control of the switching voltage occurs primarily at inverter stage I1. Transistors 52 and 53 form a CMOS inverter. Power to this CMOS inverter must flow through P-channel transistor 51, which does not turn on until VCCI reaches the threshold voltage of transistor 51, typically 0.7-0.8 volts. If VCCI is switching slowly, taking several milliseconds to reach full voltage, transistor 51 delays the activation of circuit I1. When transistor 51 turns on, the source (upper terminal) of transistor 52 goes to VCCI. N-channel transistor 53 typically has a threshold voltage of about 0.7-0.8 volts as well, but is sized as a weak transistor relative to transistor 52. In one embodiment, transistor 53 has a width/length ratio of 1/18, whereas transistor 52 has a width/length ratio of 3/2. So transistor 53 pulls the input of inverter I2 low only until transistor 52 turns on. In one embodiment, circuit I1 pulls the input of inverter stage I2 high when VCCI is at about 1.0 volt. Thus the output of inverter 54 goes low. Inverter stage I3 is a Schmitt trigger. The zero volt input to inverter stage I3 turns off transistors 56 and 57 and turns on transistor 55, pulling node N3 to VCCI and turning on transistor 58, which pulls up node N4, thus raising the voltage at which transistor 56 will turn on and preventing small variations in VCCI from switching the voltage at node N3. Inverters 59 and 60 are optional and produce a sharper edge on the output signals use_batt and use_batt_b that cause battery supply switch 22 of Fig. 12 to switch from VBATT to VCCI. Transistor 61, controlled by the VBATT signal, is a weak pull-down transistor and assures that the use_batt_b line is pulled low when VCCI is not present and inverter 60 is therefore not providing an output signal.
Key Not Available to Purchaser of a Product Containing the Configured PLD
In order to prevent an attacker from learning the design that has been used to configure the PLD, several additional steps may be taken.
According to another aspect, a key is loaded into the PLD before sale of a system incorporating the PLD, such that after sale of a system including the PLD, the design can be loaded into the PLD and used, but an attacker can not learn the value stored in the key (or keys). Thus the unencrypted design can not be read or copied. To achieve this security, several steps are taken.
Secure Mode Preservation (Tamper-proofing)
In one embodiment, there are two security flags in configuration logic 29 of the PLD. One indicates whether the decryption keys are secured, and the other indicates whether the design is a decrypted design that must be protected. If JTAG logic 13 (Fig. 3) selects secure mode with the ISC_PROGRAM_SECURITY instruction, a secure-key flag in control logic 23a (Fig. 10a) is set. If the bitstream loaded into the PLD includes the indication that design data in the bitstream is encrypted, a secure-design flag in configuration logic 29 (not shown) is set. If either flag is later unset, the entire configuration memory is cleared, thereby removing the decrypted design. If the secure-key flag is reset (by an ISC_PROGRAM_SECURITY instruction), then the keys are also erased. Fig. 15 shows a state machine for performing the design clearing function. When the secure-design flag is set, the state machine enters state S1. This state monitors the secure-design flag for a change from secure to non-secure mode. As long as the secure-design mode continues, the state machine stays in state S1. Once a change occurs, the state machine enters state S2 and the data shift registers for shifting data into configuration memory 12 are reset, thereby placing zeroes on all data lines for the configuration memory bits. Next, the state machine moves to state S3, where the word line of the addressed frame is asserted. This results in the zeros on the data shift register lines being written into the memory bits at the addressed frame. If question Q1 indicates there are more frames to be addressed, the state machine moves to state S4, where the frame address is advanced, and the state machine returns to state S3. When question Q1 indicates there are no more frames to be addressed, the process is done and the configuration memory is cleared. It is also necessary to protect the keys from being accessed by an attacker. Loading of the keys is performed before a system containing the design is made available to an end customer. When designers are in the process of developing the design, they may wish to operate the PLD in a non-secure mode for debugging. In order to allow for this debugging operation and also to preserve security of the keys, the key loading process begins in a non-secure mode by clearing all key registers. The secure-key flag must be kept in the non-secure mode while keys are loaded and while the keys are read back for verification. The secure-key flag may also be kept in the non-secure mode while a configuration bitstream is loaded and decrypted. But once the secure-key flag is set, returning the secure-key flag to the non-secure mode clears all keys and also initiates operation of the state machine of Fig. 15. So not only are the keys cleared, but the configuration is also cleared.
Readback Attack and Readback Disabled
Some FPGAs allow a bitstream to be read back out of the FPGA so that a user may debug a design or may obtain state machine information from flip flops in the FPGA.
Unless the design were re-encrypted for the read-back operation, the act of reading back the bitstream would expose the unencrypted bitstream to view. Further security of the design is provided by disabling readback when an encrypted design is loaded into the FPGA. In one embodiment, readback is disabled only if the decryption keys are also secured. Fig. 16 shows the block diagram of a structure for loading and reading back configuration memory. In one embodiment, configuration logic 29 prevents readback when two conditions are present: (1) the security status line on data bus 26 (see Figs. 3 and 10) indicates that the keys are in a secure mode, and (2) configuration logic 29 has responded to op codes in a configuration bitstream that indicate the bitstream is encrypted. So if either the keys are not secured or the bitstream is not encrypted, readback can be enabled. In other embodiments, different conditions control whether readback can be enabled. When configuration logic 29 receives in the bitstream a header indicating that readback is to be performed, it sends on line 107 the frame address stored in its frame address register, which is decoded by address decoder 110 to select the addressed line of bus 109. Next, the word line enable signal on line 108 is asserted, which asserts the selected word line of bus 109 to allow the memory cells addressed by the selected word line to place their values on the n data lines 102 (n is the frame length and is stored in configuration logic 29). Configuration logic 29 then asserts the Load signal on line 104 to load the frame of data (in parallel) into data shift register 101. Next, configuration logic 29 asserts the shift signal on line 105 to cause data shift register 101 to shift out the frame of data in 32-bit words on bus 103 to the frame data output register (see Fig. 4d) and from there to an outgoing bitstream on configuration access port 21 (Fig. 3). If decryption is indicated in the bitstream, configuration logic 29 sets internal flags to indicate this. If these flags are set and key memory 23 is in secure mode, as indicated by the security status signal on bus 26, then configuration logic 29 responds to a readback command in the bitstream by keeping the word line enable signal on line 108 inactive and by keeping the load and shift signals on lines 104 and 105 inactive, to prevent readback. However, if key memory 23 is not in secure mode, even though the design may be encrypted, readback is allowed so that testing and debugging are possible.
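The gating condition reduces to a single boolean expression, sketched here for clarity:

```python
def readback_allowed(keys_secure: bool, design_encrypted: bool) -> bool:
    """Readback is blocked only when the keys are secured AND the loaded
    bitstream was encrypted; either condition alone still permits
    readback for testing and debugging."""
    return not (keys_secure and design_encrypted)

assert readback_allowed(keys_secure=False, design_encrypted=True)
assert readback_allowed(keys_secure=True, design_encrypted=False)
assert not readback_allowed(keys_secure=True, design_encrypted=True)
```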
Partial Reconfiguration Attack and Prevention
Some FPGAs allow partial reconfiguration of the FPGA, or allow different parts of a design to be loaded into different parts of the FPGA using separate starting addresses and separate write instructions. An attacker might attempt to learn the design by partially reconfiguring the design to read contents of a block RAM or flip flops directly to output ports, or by adding a section to an existing design to read out information that can be used to learn the design. For example, the attacker might partially reconfigure the PLD with an unencrypted design whose only purpose is to extract information about the encrypted design. Such a Trojan Horse design could be loaded into the PLD with another bitstream or attached to an existing encrypted bitstream. If the attacker were interested in learning a state machine design loaded into block RAM of an FPGA, for example, the Trojan Horse design could include logic to cycle through the addresses of the block RAM and send the block RAM data contents to package pins. In order to prevent an attacker from making such changes, if the original design is encrypted, configuration logic 29 disallows partial reconfiguration once configuration with decryption is started. Configuration logic 29 disallows a further write instruction once a header with the decryption op code has been processed. Also, configuration logic 29 disallows configuration with decryption once configuration without encryption has been done. Configuration logic 29 accomplishes these restrictions by ignoring headers that write to configuration memory after a decrypt instruction has been received, and by ignoring headers that have a decrypt command if an unencrypted portion of a design has been loaded. Thus, if any op code indicates that writing with decryption is being used, the PLD will accept only a single write instruction.
Additional Embodiments
The above description of the drawings gives detail on a few embodiments. However, many additional embodiments are also possible. For example, instead of the cipher block chaining algorithm discussed above, one can use an encryption method called cipher feedback mode, in which data can be encrypted in units smaller than the block size, for example one 8-bit byte at a time. This cipher feedback mode is described by Schneier, ibid, at pages 200-203. In yet another embodiment, if encryption is used, all bitstreams must be loaded starting at address 0. One implementation of this embodiment replaces any address loaded into the starting frame address register FAR (Fig. 6) with address 0 when an op code specifying encryption is received. In still another embodiment, the starting address and the design data are both encrypted. In this embodiment, it is possible to load several segments of encrypted design data starting at different frame addresses, just as is possible with unencrypted design data. In another embodiment, the key data stored in a key memory such as key memory 23 specifies the number of keys that will follow. In a variation on this embodiment, the key data also specify the number of keys that precede the key. If an attacker gives a key address other than the first key address intended by the designer, the configuration may be aborted. Additionally, decryption will proceed until the number of keys specified within the keys has been used. In another embodiment, instead of allowing keys to be read back when the key memory is in a non-secure mode, the keys include parity bits or CRC check bits, and only these bits can be read back for verification that the key or keys were loaded correctly. This embodiment allows keys known to one designer to be kept secret from another designer, and is useful when the PLD is to be used at different times for loading different designs. Regarding the CRC checksum calculation discussed above, embodiments can be provided in which the CRC checksum is calculated either before or after a design is encrypted. Of course, if the checksum added to the bitstream is calculated before the design data are encrypted, then a corresponding checksum must be calculated within the PLD on the design data after they have been decrypted.
Likewise, if the checksum added to the bitstream is calculated after the design data have been encrypted, then the PLD must calculate the corresponding checksum on the received bitstream before the design data have been decrypted. A further note regarding the process of loading the decryption keys: when the process illustrated in Fig. 8 is used, it is not necessary to use a device programmer for loading decryption keys. The keys may simply be loaded as part of the board test procedure. It is also possible to use the structures and methods described above for programming more than one PLD. It is well known to use a single bitstream for programming more than one PLD or FPGA, either by arranging several devices in a daisy chain and passing the bitstream through the devices in series, or by addressing the devices in series. It is possible to arrange several PLDs in such an arrangement when one or more of the devices is to receive encrypted design data. As yet another embodiment, although one embodiment was described in which only a single address could be specified for a bitstream having encrypted design data, in another embodiment several addresses, preferably encrypted, can be specified for loading separate portions of a design. Further, these separate portions may use the same encryption key or keys, or the separate portions may use different encryption keys or different sets of keys. Variations that have become obvious from the above description are intended to be included in the scope of the invention. |
In one embodiment, the present invention includes a method for receiving incoming data in a processor and performing a checksum operation on the incoming data in the processor pursuant to a user-level instruction for the checksum operation. For example, a cyclic redundancy checksum may be computed in the processor itself responsive to the user-level instruction. Other embodiments are described and claimed. |
1. A method for performing a checksum operation, comprising: receiving input data in a processor; and in response to a user-level instruction for the checksum operation, performing the checksum operation on the input data in the processor, wherein the processor includes logic having different hardware engines to perform the checksum operation on input data of different sizes, and wherein the user-level instruction is one of a plurality of user-level instructions of an instruction set architecture, each of the plurality of user-level instructions supporting a checksum operation for input data of a specific size.

2. The method of claim 1, further comprising performing the checksum operation in a pipeline of the processor, wherein the processor comprises a general-purpose processor, and wherein the checksum operation comprises a cyclic redundancy check operation.

3. The method of claim 1, further comprising performing the checksum operation in a hardware engine of the processor, wherein the processor comprises a general-purpose processor.

4. The method of claim 3, further comprising performing a polynomial division operation in the hardware engine in response to the user-level instruction.

5. The method of claim 3, wherein the hardware engine includes an XOR tree coupled to a source register and a destination register.

6. The method of claim 5, further comprising: inputting the input data from the source register and a current value stored in at least a part of the destination register into the XOR tree; performing the checksum operation in the XOR tree using the input data and the current value; and storing the output of the XOR tree in the destination register.

7. The method of claim 6, wherein the output of the XOR tree corresponds to a running remainder of the checksum operation.

8. The method of claim 7, further comprising using the running remainder as a checksum when a buffer that provides the input data to the source register is empty.

9. The method of claim 1, further comprising: loading the input data into a source register of the processor; reflecting the input data; and performing at least one XOR operation with the reflected input data and reflected data from a destination register, and storing a result of the at least one XOR operation in the destination register in reflected order.

10. The method of claim 1, further comprising performing the checksum operation in a logic block of the processor, using the input data and a remainder value, without using lookup table information.

11. An apparatus for performing a cyclic redundancy check operation, comprising: a first register to store source data; a second register to store result data; and an execution unit, coupled to the first register and the second register, to perform a cyclic redundancy check operation with the source data and the result data, wherein at least a part of the output of the execution unit, corresponding to a running remainder of the cyclic redundancy check operation, is provided to the second register, wherein the execution unit comprises an integer unit of a processor pipeline, the integer unit including a plurality of individual logic blocks, each logic block to perform the cyclic redundancy check operation on data of a different size, and wherein the execution unit performs the cyclic redundancy check operation in response to a user-level instruction that indicates the size of the data on which the cyclic redundancy check is to be performed.

12. The apparatus of claim 11, wherein the execution unit includes an
exclusive OR tree logic of a general-purpose processor pipeline.13.The apparatus of claim 12, wherein the XOR tree logic performs polynomial division according to a fixed polynomial.14.The apparatus of claim 11, wherein the user-level instruction is one of a plurality of user-level instructions, the plurality of user-level instructions each supporting a cyclic redundancy check operation for data of a specific size.15.A method for performing cyclic redundancy check operation includes:According to the source operand from the first register and the destination operand of the second register, the cyclic redundancy check value is accumulated in the dedicated execution unit of the processor pipeline;Storing the accumulated cyclic redundancy check value in the second register; andDetermine whether there is additional data to be checked for cyclic redundancy,Wherein, the accumulation is in response to a user-level instruction used for the cyclic redundancy check in the instruction set architecture of the processor, based on the size of the source operand in multiple parts of the dedicated execution unit In one, where the user-level instruction indicates the size of the source operand.16.The method of claim 15, further comprising incrementally accumulating the cyclic redundancy check value and storing the incrementally accumulated cyclic redundancy check value in the second register until there is no additional data The cyclic redundancy check is to be performed.17.The method of claim 15, wherein the user-level instruction is one of a plurality of user-level instructions each supporting a cyclic redundancy check operation for a source operand of a specific size.18.A system for performing cyclic redundancy check operations, including:A processor, which includes first and second execution units to perform operations in response to instructions of the processor's instruction set architecture, wherein the first execution unit includes a hardware engine for performing cyclic redundancy check operations, The processor also includes a first register that provides a source operand for the hardware engine and a second register that provides a destination operand for the hardware engine, where the hardware engine includes multiple logical blocks, each logical block Perform cyclic redundancy check operations on data of different sizes. 
The hardware engine responds to user-level instructions for cyclic redundancy check operations of a given data size in the instruction set architecture, and provides data to the multiple Among the logical blocks corresponding to the given data size, perform the cyclic redundancy check operation; andA dynamic random access memory coupled to the processor.19.The system of claim 18, wherein the first execution unit includes an integer unit and the second execution unit includes a floating point unit.20.The system of claim 18, wherein the processor includes a buffer that provides data for the first register.21.The system of claim 20, wherein the hardware engine performs a cyclic redundancy check operation on the data in response to one or more instructions for the cyclic redundancy check operation in the instruction set architecture until The buffer is empty.22.The system of claim 21, wherein the hardware engine provides the second register with a running remainder of the cyclic redundancy check operation.23.The system of claim 18, wherein the user-level instruction is one of a plurality of user-level instructions, each of the plurality of user-level instructions supporting a cyclic redundancy check operation for data of a specific size. |
Cyclic redundancy checksum operation in response to user-level instructions

TECHNICAL FIELD

Embodiments of the present invention relate to data processing, and more specifically to determining checksums such as a cyclic redundancy check (CRC).

BACKGROUND

In a data processing system, it is desirable that data transmitted between a first location and a second location be received accurately, so that the data can also be processed accurately at the second location. Further, to allow detection of errors in data transmission, a checksum is often appended to the data packet to be sent. For example, the transmission source can generate a CRC checksum and append it to the data to be transmitted. The checksum, which can be calculated according to one of a number of different algorithms, can then be compared at the receiving end with a corresponding checksum generated from the received data. If the two checksums are identical, the transmitted data is correct; if the generated checksum differs from the transmitted checksum, an error is indicated. Checksums of this kind are used in networking technology to detect transmission errors.

CRC functionality is implemented in different ways in different applications. For example, the CRC calculation can be performed in hardware or in software. To implement CRC calculation in hardware, a dedicated hardware engine is usually provided in the system. Data on which such a CRC calculation is to be performed is sent to the hardware engine, which computes the CRC; the CRC is then appended to the data, for example so that the data can be sent out of the system. Using such an offload engine has various disadvantages, including the overhead of sending data to the engine. In addition, stateless hardware offload is difficult to achieve; that is, additional state-based overhead data often must be transmitted, which increases complexity and slows the progress of useful work.

Because many systems lack such an offload engine, the CRC calculation is usually done in software, typically using a lookup table scheme. However, software calculation of CRC values is notoriously slow and computation-intensive, and the storage footprint of the lookup table can be large, further affecting performance. These slow calculations degrade network performance and consume processing resources; for example, performing the CRC calculation can take 5 to 15 cycles per byte of data. As a result, software CRC performance is too low for general use in high-speed networks.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a flowchart of a method according to an embodiment of the present invention.

FIG. 2 is a block diagram of a processor according to an embodiment of the present invention.

FIG. 3 is a block diagram of a portion of a processor that performs a checksum operation according to an embodiment of the present invention.

FIG. 4 is a block diagram of another portion of the processor according to an embodiment of the present invention.

FIG. 5 is a block diagram of a system according to an embodiment of the present invention.

DETAILED DESCRIPTION

In various embodiments, checksum operations may be effected using instruction set architecture (ISA) extensions to compute checksum values.
More specifically, a user-level instruction can be provided in the ISA to enable a programmer to directly perform a desired checksum operation, such as a CRC operation, in a general-purpose processor (e.g., a central processing unit (CPU)). The CRC operation may be a 32-bit CRC operation (i.e., a CRC32 operation that generates a 32-bit running remainder, discussed further below), and in different embodiments the CRC operation may correspond to the CRC used in, for example, the Institute of Electrical and Electronics Engineers (IEEE) 802.3 Ethernet protocol (published 2002) or in other protocols.

In different implementations, various opcode instructions can be provided to perform CRC calculations on data chunks of different sizes. Although the scope of the present invention is not limited in this regard, in some embodiments different opcodes may be used to support CRC calculation on 8-, 16-, 32-, and 64-bit data chunks. In this way, CRC calculations can be performed quickly in hardware, without lookup tables or the like. Furthermore, the calculations can be performed using general-purpose, architecturally visible processor registers via integer operations selected by the different opcodes. As a result, a CRC can be computed in the processor without the overhead and complexity of offload hardware, such as network offload hardware, and a greater number of data transfers can occur (e.g., in terms of input/output (I/O) transactions per second). Note that although CRC calculation is mainly described herein, embodiments of the present invention can also be used to perform other checksum calculations.

Referring now to FIG. 1, a flowchart of a method according to an embodiment of the present invention is shown. Method 100 can be used to obtain a checksum using a user-level instruction implemented on processor hardware (e.g., an execution unit of a CPU). As shown in FIG. 1, method 100 may begin by performing a series of XOR operations on data in source and destination registers (block 110). Note that the XOR operations correspond to polynomial arithmetic operations, and more specifically to polynomial division operations. The data in the source register may correspond, for example, to data that has been received by the processor or that is present in the processor pipeline to be sent from the processor. As an example, the source register may be provided with data chunks from a buffer, each chunk corresponding to a desired size (e.g., 16 bits, 32 bits, or the like), and the source register may be a general-purpose register of the processor. Alternatively, in some embodiments the source data may be obtained from memory. The destination register may correspond to the storage location for the running remainder obtained from the XOR operations; it too may be a general-purpose register of the processor.

In various embodiments, the XOR operations can be performed in dedicated hardware within the processor pipeline. For example, a processor execution unit such as an integer execution unit can be extended to implement the series of XOR operations. For example, the circuitry may correspond to an XOR tree that handles polynomial division by the desired polynomial. In various embodiments, the polynomial used for the XOR operations may be hard-wired into the logic gates of the XOR tree. In addition, the XOR tree can be configured to implement desired pre-processing and post-processing around the XOR operations, such as bit reflection and the like. Furthermore, the XOR tree logic may include multiple parts, each configured to handle operations on a different data size.
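Although the embodiments here implement this arithmetic in hardware, the per-byte behavior is straightforward to model in software. The following C sketch (our illustration, not text from the specification; all names are invented) mirrors the flow of FIG. 1: block 110's XOR-based polynomial division is applied to each source byte, while the running-remainder update and the buffer loop correspond to blocks 120-140 of FIG. 1, described below. The constant 0x82F63B78 is the bit-reflected form of the polynomial 11EDC6F41H given later in this description.

    #include <stddef.h>
    #include <stdint.h>

    /* Bit-reflected form of the polynomial 11EDC6F41H (implicit leading bit dropped). */
    #define POLY_REFLECTED 0x82F63B78u

    /* Software model of blocks 110-140 of FIG. 1: fold each source byte into
       the running remainder held in the "destination register". */
    static uint32_t crc32_running_remainder(uint32_t remainder,
                                            const uint8_t *buf, size_t len)
    {
        while (len--) {                        /* diamond 130: more source data? */
            remainder ^= *buf++;               /* block 110: XOR source into remainder */
            for (int bit = 0; bit < 8; bit++)  /* polynomial (modulo-2) division */
                remainder = (remainder >> 1) ^ ((remainder & 1u) ? POLY_REFLECTED : 0u);
        }                                      /* block 120: remainder updated */
        return remainder;                      /* block 140: provide the result */
    }

In use, the remainder would first be set to the predetermined initialization value discussed below; a hardware engine replaces the inner eight-step loop with a single pass through the XOR tree.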
Still referring to FIG. 1, the result, corresponding to the running remainder from the XOR operations, can then be stored in the destination register (block 120). Note that the destination register can be set to a predetermined value during system initialization, for example all ones, all zeros, or another such value. During checksum operation, the running remainder is then continuously updated with the result of the current checksum operation. More specifically, the remainder of the polynomial division implemented by the current checksum operation can be stored in the destination register.

Next, it can be determined whether additional source data is present (diamond 130). For example, in some embodiments a buffer may include data that has been received by the system and is to be validated. The data can be fed into the source register in chunks for the checksum operation. Accordingly, it can be determined at diamond 130 whether additional source data is present in the buffer. If so, the next data chunk may be provided to the source register, and control returns to block 110, as discussed above.

If instead it is determined at diamond 130 that no additional source data is present, control passes to block 140, where the checksum operation result, which is the current value (e.g., the running remainder) stored in the destination register, can be provided (block 140). As discussed above, the checksum value can be used in many different ways. For example, when data is received, the calculated checksum can be compared with the received checksum to confirm that the data was received accurately. When data is transmitted, the checksum can be appended to the data so that the data can be verified at the receiving end. Checksums also have other uses, such as in hash functions or in generating numbers according to a pseudo-random number scheme.

Many different processor architectures can be used to implement checksum operations according to an embodiment of the present invention. Referring now to FIG. 2, a block diagram of a processor according to an embodiment of the present invention is shown. As shown in FIG. 2, the processor 200 includes a data path 205. The data path 205 may be controlled by front-end control stages that may include a register alias table (RAT) 270, which may receive decoded instructions from the front end of the processor (not shown in FIG. 2). RAT 270 can be used to receive micro-operations (μops) from the front end and rename the μops onto the resources of the data path. In the data path 205, the renamed μops may then be provided to a reorder buffer (ROB) 250. ROB 250 can act as a register file, storing μops and their corresponding source operands until a μop is ready to be passed to a reservation station (RS) 230. Similarly, ROB 250 can also store the corresponding results of executed μops; these results can be kept in ROB 250 until the μop is retired (at which time the ROB entry is released).

The reservation station 230 may be used to store μops until their corresponding source operands are present and/or until a μop is ready for execution in one of the multiple execution units of the data path 205.
The reservation station 230 may include multiple dispatch ports to couple instructions and data to selected execution units of the data path 205, and in some embodiments multiple dispatch ports may be used in a given cycle.

As shown in FIG. 2, the execution units in the data path 205 include an address generation unit (AGU) 220, an integer (INT) execution unit 222, a store data (STD) unit 224, a floating point (FP) execution unit 226, and a single instruction multiple data (SIMD) execution unit 228. As shown in FIG. 2, the integer execution unit 222 further includes logic 221. The logic 221 may include one or more hardware engines to perform checksum operations according to an embodiment of the present invention. More specifically, the logic 221 may include multiple XOR logic trees to implement the polynomial arithmetic operations and related data manipulations. In various embodiments, the logic 221 may include different hardware engines to perform CRC operations on data chunks of different sizes. As an example, multiple user-level instructions of the ISA may each define a CRC operation for a specific data size, and in some embodiments the logic 221 may include a corresponding number of separate hardware engines, also referred to herein as XOR trees, to implement these different CRC operations.

Although not shown in FIG. 2, additional or different execution units may be present in different embodiments. After a μop is executed in an execution unit, its result data can be passed back to RS 230 and ROB 250 for storage until retirement. Thus, in one embodiment, the source and destination registers used for the CRC calculation may be located in RS 230 or ROB 250. Although not shown in FIG. 2, it will be appreciated that additional buffers, such as a memory order buffer (MOB), and other resources may be present in the processor 200.

It will also be appreciated that FIG. 2 is shown for purposes of discussion, and that in various embodiments a given processor may contain more stages or differently named stages. For example, a write-back stage may be coupled to the execution units to receive result data and later pass it to the memory system. Alternatively, one or more other buffers, such as store buffers, load buffers, and the like, may be coupled to RS 230. As an example, one or more retirement buffers may be coupled to RS 230 to store μops and related result data until the associated instructions are retired.

Of course, other implementations are also possible. Referring now to FIG. 3, a block diagram of a portion of a processor that performs a checksum operation according to an embodiment of the present invention is shown. As shown in FIG. 3, the illustrated portion of processor 300 includes an XOR tree 310, a first register 320, and a second register 330, all of which may be part of the processor pipeline. The XOR tree 310 can be configured differently in various embodiments. For example, the XOR tree 310 may be implemented with multiple three-input XOR gates in a first stage whose outputs are coupled to similar XOR gates of a second stage, and so on; in such an embodiment, each stage of the XOR tree may be one-third the size of the previous stage. Of course, other configurations are possible.

As further shown in FIG. 3, the processor 300 includes a buffer 340, which may also be in the processor pipeline (e.g., as a cache, queue, or the like). Alternatively, the buffer 340 may be a cache memory associated with the processor 300. In the embodiment of FIG. 3, the first register 320 may correspond to the source register and the second register 330 may correspond to the destination register. In various embodiments, these registers may be general-purpose registers of the processor 300. Of course, the processor 300 may include many other registers, logic, functional units, and the like; the portion shown in FIG. 3 is simplified for ease of explanation.

As shown in FIG. 3, to perform a checksum according to an embodiment of the present invention, at least a first portion of the first register 320 is provided to the XOR tree 310 together with a portion of the second register 330. In the embodiment shown in FIG. 3, which depicts 8-bit CRC accumulation, a single byte of data (B0) is provided from the first register 320 to the XOR tree 310, and a 4-byte portion of the second register 330, corresponding to the running remainder of the CRC32 operation, is provided to the XOR tree 310. Using this data, the XOR tree 310 can perform the data manipulations via XOR operations to produce a result that includes the remainder; as shown in FIG. 3, the remainder portion may be stored back into the second register 330 as the running remainder. In this way, the CRC operation can be performed efficiently, in minimal cycle time and using minimal processor resources. In the embodiment of FIG. 3, for the 8-bit accumulation operation, further portions of the first register 320 may be provided incrementally to the XOR tree 310 together with the current contents of the second register 330 (i.e., the 32-bit running remainder). Thus, to obtain a CRC checksum of 64 bits of data in the first register 320, eight iterations may be performed in the XOR tree 310, each using one byte of data from the first register 320 and the current running remainder in the second register 330. If additional data to be checksummed is present in the buffer 340, that data may be loaded into the first register 320 and then processed in the XOR tree 310.

Note that different hardware may be present to handle CRC calculations of different bit widths. Accordingly, referring back to FIG. 2, the logic 221 may include different XOR tree structures to handle such CRC calculations. Referring now to FIG. 4, a block diagram of another portion of a processor according to an embodiment of the present invention is shown. As shown in FIG. 4, the processor 300 includes a different XOR tree 410 (i.e., different from the XOR tree 310 of FIG. 3) coupled to receive data from the first register 320 and the second register 330. As further shown in FIG. 4, a buffer 340 is present to provide data for the CRC calculation. Note that in the embodiment of FIG. 4 the XOR tree 410 is configured to handle 64-bit CRC accumulation. Accordingly, the entire contents of the first register 320 (i.e., bytes B0-B7) can be coupled to the XOR tree 410 at one time for XOR processing together with the data in the second register 330. The result data is stored back into the second register 330, with the appropriate portion of the result corresponding to the running remainder. Although specific implementations are shown in FIG. 3 and FIG. 4, it should be appreciated that the scope of the present invention is not limited in this regard, and in other embodiments different hardware configurations may be used to perform the CRC operation.

Referring now to Table 1 below, an exemplary instruction listing of an instruction set architecture (ISA) to support CRC operations according to various embodiments of the present invention is shown. As shown in Table 1, each instruction can be identified by an opcode and used to perform a CRC32 operation using a source register and a destination register. As shown, there may be different flavors, each instruction performing the CRC operation on destination and source operands of a given size. Thus, referring to the first row of Table 1, that instruction performs a CRC32 operation on an 8-bit source operand and a 32-bit destination operand. Similarly, the second row of Table 1 performs a CRC32 operation on a 16-bit source operand and a 32-bit destination operand, and in like manner the third row of Table 1 shows an instruction for performing a CRC32 operation on a 32-bit source operand and a 32-bit destination operand.

Because the first three instructions operate on data chunks of at most 32 bits, these instructions are valid both in the 64-bit operating mode and in the legacy (i.e., 32-bit) operating mode. In contrast, the fourth and fifth rows of Table 1 represent CRC operations performed on 8-bit and 64-bit source operands, respectively, with a 64-bit destination operand; accordingly, these last two instructions execute only in the 64-bit operating mode.

Table 1

In various embodiments, a programmer may use these user-level instructions, for example as intrinsics (inline functions), to implement CRC operations according to, for example, the flowchart of FIG. 1.

Generally, a user-level CRC instruction can be implemented in the following manner. Starting with the initial value in the first operand (i.e., the destination operand), the CRC32 value of the second operand (i.e., the source operand) is accumulated and the result is stored back into the destination operand. In different implementations, the source operand can be a register or a memory location, and the destination operand can be a 32-bit or 64-bit register. If the destination operand is a 64-bit register, the 32-bit result is stored in the least significant doubleword of the register and 00000000H in the most significant doubleword.

Note that the initial value provided in the destination operand can be a doubleword integer stored in a 32-bit register or in the least significant doubleword of a 64-bit register. To accumulate the CRC32 value incrementally, software retains the result of the previous CRC operation in the destination operand and then performs the CRC operation again with new input data in the source operand. Each instruction thus takes a running CRC value in its first operand and updates the CRC value based on the second operand; by executing the operation in a loop, a CRC of any desired amount of data can be generated, continuing until all of the desired data has undergone the CRC operation.

In some implementations, the data contained in the source operand is processed in reflected bit order. Considering all bits of the source operand, this means treating the most significant bit of the source operand as the least significant bit of the quotient, and so on. Similarly, the result of the CRC operation can be stored into the destination register in reflected bit order: considering all bits of the CRC, this means storing the most significant bit of the resulting CRC (i.e., bit 31) into the least significant bit (bit 0) of the destination operand, and so on.
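For a concrete illustration of this accumulate-and-update usage, the C fragment below drives CRC32 instructions through compiler intrinsics in exactly the looping manner just described: the running CRC rides in the destination operand and is updated once per source chunk, with the 64-bit flavor covering the bulk of a buffer and the 8-bit flavor the tail. The intrinsics shown (_mm_crc32_u64 and _mm_crc32_u8, from <nmmintrin.h>) are the existing x86 SSE4.2 ones, cited only as an available instance of instructions of this form; this is an illustrative sketch, not a statement that these particular instructions embody the claims here.

    #include <nmmintrin.h>   /* SSE4.2 CRC32 intrinsics */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Accumulate a CRC over a buffer; 'crc' carries the running value
       (the destination operand) between calls. Compile with -msse4.2. */
    static uint32_t crc32_accumulate(uint32_t crc, const uint8_t *p, size_t len)
    {
        while (len >= 8) {                     /* 64-bit source operand flavor */
            uint64_t chunk;
            memcpy(&chunk, p, sizeof chunk);   /* avoid unaligned-access assumptions */
            crc = (uint32_t)_mm_crc32_u64(crc, chunk);
            p += 8;
            len -= 8;
        }
        while (len--)                          /* 8-bit flavor for the remaining bytes */
            crc = _mm_crc32_u8(crc, *p++);
        return crc;                            /* running remainder so far */
    }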
Although these user-level instructions may be implemented in different ways, Tables 2 to 6 below show exemplary pseudocode representations of a hardware implementation of each user-level instruction of Table 1.

Table 2: CRC32 instruction for 64-bit source operand and 64-bit destination operand
TEMP1[63-0] ← BIT_REFLECT64(SRC[63-0])
TEMP2[31-0] ← BIT_REFLECT32(DEST[31-0])
TEMP3[95-0] ← TEMP1[63-0] << 32
TEMP4[95-0] ← TEMP2[31-0] << 64
TEMP5[95-0] ← TEMP3[95-0] XOR TEMP4[95-0]
TEMP6[31-0] ← TEMP5[95-0] MOD2 11EDC6F41H
DEST[31-0] ← BIT_REFLECT(TEMP6[31-0])
DEST[63-32] ← 00000000H

Table 3: CRC32 instruction for 32-bit source operand and 32-bit destination operand
TEMP1[31-0] ← BIT_REFLECT32(SRC[31-0])
TEMP2[31-0] ← BIT_REFLECT32(DEST[31-0])
TEMP3[63-0] ← TEMP1[31-0] << 32
TEMP4[63-0] ← TEMP2[31-0] << 32
TEMP5[63-0] ← TEMP3[63-0] XOR TEMP4[63-0]
TEMP6[31-0] ← TEMP5[63-0] MOD2 11EDC6F41H
DEST[31-0] ← BIT_REFLECT(TEMP6[31-0])

Table 4: CRC32 instruction for 16-bit source operand and 32-bit destination operand
TEMP1[15-0] ← BIT_REFLECT16(SRC[15-0])
TEMP2[31-0] ← BIT_REFLECT32(DEST[31-0])
TEMP3[47-0] ← TEMP1[15-0] << 32
TEMP4[47-0] ← TEMP2[31-0] << 16
TEMP5[47-0] ← TEMP3[47-0] XOR TEMP4[47-0]
TEMP6[31-0] ← TEMP5[47-0] MOD2 11EDC6F41H
DEST[31-0] ← BIT_REFLECT(TEMP6[31-0])

Table 5: CRC32 instruction for 8-bit source operand and 64-bit destination operand
TEMP1[7-0] ← BIT_REFLECT8(SRC[7-0])
TEMP2[31-0] ← BIT_REFLECT32(DEST[31-0])
TEMP3[39-0] ← TEMP1[7-0] << 32
TEMP4[39-0] ← TEMP2[31-0] << 8
TEMP5[39-0] ← TEMP3[39-0] XOR TEMP4[39-0]
TEMP6[31-0] ← TEMP5[39-0] MOD2 11EDC6F41H
DEST[31-0] ← BIT_REFLECT(TEMP6[31-0])
DEST[63-32] ← 00000000H

Table 6: CRC32 instruction for 8-bit source operand and 32-bit destination operand
TEMP1[7-0] ← BIT_REFLECT8(SRC[7-0])
TEMP2[31-0] ← BIT_REFLECT32(DEST[31-0])
TEMP3[39-0] ← TEMP1[7-0] << 32
TEMP4[39-0] ← TEMP2[31-0] << 8
TEMP5[39-0] ← TEMP3[39-0] XOR TEMP4[39-0]
TEMP6[31-0] ← TEMP5[39-0] MOD2 11EDC6F41H
DEST[31-0] ← BIT_REFLECT(TEMP6[31-0])

Note that the general structure of these pseudocode fragments is the same. First, the data in the source register is bit-reflected (i.e., its bits are placed into a temporary register in reverse bit order), and the destination register is bit-reflected in the same way. The bit-reflected source and destination operands are then shifted, more specifically left-shifted, and the two resulting values are XORed. This operation corresponds to polynomial division by a selected polynomial value. The value may take different forms in different embodiments; in this specific implementation of the CRC32 operation the polynomial corresponds to 11EDC6F41H, although the scope of the present invention is not limited in this regard. The remainder of the polynomial division (i.e., the remainder from modulo-2 division by the polynomial) is stored, in bit-reflected order, into the low-order bits of the destination operand (e.g., bits 0-31 of a 32-bit or 64-bit register). In the case of a 64-bit register, zeros are loaded into the remaining most significant bits (MSBs).
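As a cross-check, the pseudocode of Table 6 (8-bit source operand, 32-bit destination operand) can be transliterated into C almost line for line. In the sketch below (again our illustration; helper names such as bit_reflect are invented), the modulo-2 division is written out as a shift-and-XOR loop over the 33-bit polynomial 11EDC6F41H; a hardware implementation would instead evaluate it in a single pass through an XOR tree.

    #include <stdint.h>

    /* BIT_REFLECT helper of Tables 2-6: reverse the low 'width' bits of v. */
    static uint64_t bit_reflect(uint64_t v, int width)
    {
        uint64_t r = 0;
        for (int i = 0; i < width; i++)
            if (v & (1ull << i))
                r |= 1ull << (width - 1 - i);
        return r;
    }

    /* Table 6: one CRC32 step, 8-bit source operand, 32-bit destination operand. */
    static uint32_t crc32_step_u8(uint32_t dest, uint8_t src)
    {
        uint64_t temp1 = bit_reflect(src, 8);    /* TEMP1 <- BIT_REFLECT8(SRC)   */
        uint64_t temp2 = bit_reflect(dest, 32);  /* TEMP2 <- BIT_REFLECT32(DEST) */
        uint64_t temp3 = temp1 << 32;            /* TEMP3, a 40-bit quantity     */
        uint64_t temp4 = temp2 << 8;             /* TEMP4                        */
        uint64_t temp5 = temp3 ^ temp4;          /* TEMP5 <- TEMP3 XOR TEMP4     */

        const uint64_t poly = 0x11EDC6F41ull;    /* 33-bit polynomial 11EDC6F41H */
        for (int bit = 39; bit >= 32; bit--)     /* TEMP6 <- TEMP5 MOD2 poly     */
            if (temp5 & (1ull << bit))
                temp5 ^= poly << (bit - 32);

        return (uint32_t)bit_reflect((uint32_t)temp5, 32); /* DEST <- BIT_REFLECT(TEMP6) */
    }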
Although described with reference to the specific implementations of Tables 2-6, it should be appreciated that user-level CRC instructions can be provided in other ways.

By performing CRC operations according to user-level instructions in the processor pipeline itself, there is no need to send data to an offload engine. Similarly, the operation can be performed without maintaining state, reducing overhead. In this way, implemented as a three-cycle path, CRC operations can be performed at a cost of less than approximately 0.4 cycles per byte. (As a rough illustration, assuming a nominal 3 GHz clock, 0.4 cycles per byte corresponds to about 7.5*10^9 bytes per second, or roughly 60 gigabits per second, whereas the 5 to 15 cycles per byte of the software approach described in the background corresponds to only about 1.6 to 4.8 gigabits per second at the same clock.) Thus user-level instructions can be used along with dedicated hardware in the processor pipeline to improve performance, and the three-cycle latency can be achieved with minimal resource and power consumption. Embodiments of the present invention can be used to handle various storage protocols, for example the Internet Small Computer System Interface (iSCSI), at rates greater than 10 gigabits per second. Embodiments of the present invention also allow the use of data already present in, or closely coupled to, the processor, reducing the required cache memory traffic: data in the processor's buffers can be fed directly into the XOR tree, enabling fast, on-the-fly CRC calculation.

Multiple embodiments can be implemented in many different system types. Referring now to FIG. 5, a block diagram of a multiprocessor system according to an embodiment of the present invention is shown. As shown in FIG. 5, the multiprocessor system is a point-to-point interconnect system and includes a first processor 470 and a second processor 480 coupled through a point-to-point interconnect 450. As shown in FIG. 5, each of processors 470 and 480 may be a multi-core processor, including first and second processor cores (i.e., processor cores 474a and 474b and processor cores 484a and 484b). Although not shown for ease of illustration, the first processor 470 and the second processor 480 (more specifically, the cores within the processors) may include XOR tree logic in their execution units to execute user-level CRC instructions in accordance with an embodiment of the present invention. The first processor 470 also includes a memory controller hub (MCH) 472 and point-to-point (P-P) interfaces 476 and 478. Similarly, the second processor 480 includes an MCH 482 and P-P interfaces 486 and 488. As shown in FIG. 5, the MCHs 472 and 482 couple the processors to respective memories, namely memory 432 and memory 434, which may be portions of main memory locally attached to the respective processors.

The first processor 470 and the second processor 480 may be coupled to a chipset 490 through P-P interconnects 452 and 454, respectively. As shown in FIG. 5, the chipset 490 includes P-P interfaces 494 and 498. In addition, the chipset 490 includes an interface 492 to couple the chipset 490 with a high-performance graphics engine 438. In one embodiment, an Advanced Graphics Port (AGP) bus 439 may be used to couple the graphics engine 438 to the chipset 490. The AGP bus 439 may conform to the Accelerated Graphics Port Interface Specification, Revision 2.0, published May 4, 1998 by Intel Corporation, Santa Clara, California. Alternatively, a point-to-point interconnect 439 may couple these components. Then, the chipset 490 may be coupled to a first bus 416 through an interface 496.
In one embodiment, the first bus 416 may be a Peripheral Component Interconnect (PCI) bus, as defined by the PCI Local Bus Specification, Production Version, Revision 2.1, dated June 1995, or a bus such as a PCI Express bus or another third-generation input/output (I/O) interconnect bus, although the scope of the present invention is not so limited.

As shown in FIG. 5, various I/O devices 414 may be coupled to the first bus 416, along with a bus bridge 418 that couples the first bus 416 to a second bus 420. In one embodiment, the second bus 420 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 420 including, in one embodiment, a keyboard/mouse 422, communication devices 426, and a data storage unit 428 which may include code 430. Further, an audio I/O 424 may be coupled to the second bus 420. Note that other architectures are possible; for example, instead of the point-to-point architecture of FIG. 5, a system may implement a multi-drop bus or another such architecture.

Embodiments may be implemented in code and may be stored on a storage medium having stored thereon instructions which can be used to program a system to perform the instructions. The storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.

While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of the present invention. |
A method and system for eliminating post etch residues are disclosed. In one method embodiment, the present invention recites disposing a surface, having post etch residues adhered thereto, proximate to an electron beam source which generates electrons. The present method embodiment then recites bombarding the post etch residues with the electrons such that the post etch residues are removed from the surface to which the post etch residues were adhered. |
What is claimed is: 1. A method for removing post etch residues using an electron beam-based inspection tool comprising a focussed electron beam source, said method comprising the steps of: a) disposing a surface, having post etch residues adhered thereto, proximate to said focussed electron beam source which generates electrons; and b) bombarding said post etch residues with said electrons such that said post etch residues are removed from said surface to which said post etch residues were adhered. 2. The method for removing post etch residues as recited in claim 1 wherein step a) comprises disposing said surface proximate to said electron beam source by controllably moving a structure, on which said surface is located, with respect to said electron beam source. 3. The method for removing post etch residues as recited in claim 1 wherein step b) comprises bombarding said post etch residues with said electrons generated by said focussed electron beam source at an accelerating voltage of approximately 1000-3000 volts. 4. The method for removing post etch residues as recited in claim 1 wherein step b) comprises bombarding said post etch residues with said electrons generated by said electron beam-based inspection tool operating with a probe current of approximately 1.0*10^-9 to 1.0*10^-12 Amperes. 5. The method for removing post etch residues as recited in claim 1 wherein step b) comprises bombarding said post etch residues with said electrons generated by said electron beam-based inspection tool operating with a total power of approximately 1.0*10^-7 Watts. 6. The method for removing post etch residues as recited in claim 1 wherein steps a) and b) are performed without the use of toxic waste generating chemicals. 7. The method for removing post etch residues as recited in claim 1 wherein steps a) and b) are performed without significantly heating said post etch residues and the surface to which said post etch residues were adhered. 8. The method for removing post etch residues as recited in claim 1 further comprising the step of: c) transporting said post etch residues, which have been removed from said surface, away from said surface to which said post etch residues were previously adhered. 9. The method for removing post etch residues as recited in claim 8 wherein step c) comprises transporting said post etch residues away from said surface, to which said post etch residues were adhered, by creating a vacuum proximate said surface. 10. A method for removing post etch residues using an electron beam-based inspection tool comprising a focussed electron beam source, said method comprising the steps of: a) disposing a surface, having post etch residues adhered thereto, proximate to said focussed electron beam source which generates electrons, said surface disposed proximate said focussed electron beam source by controllably moving a structure, on which said surface is located, with respect to said electron beam source; b) bombarding said post etch residues with said electrons such that said post etch residues are removed from said surface to which said post etch residues were adhered, said steps a) and b) performed without the use of toxic waste generating chemicals, without significantly heating said post etch residues and the surface to which said post etch residues were adhered; and c) transporting said post etch residues away from said surface to which said post etch residues were adhered by generating a vacuum proximate said surface. 11.
The method for removing post etch residues as recited in claim 10 wherein step b) comprises bombarding said post etch residues with said electrons generated by said electron beam-based inspection tool adapted to operate with an accelerating voltage of approximately 1000-3000 volts. 12. The method for removing post etch residues as recited in claim 10 wherein step b) comprises bombarding said post etch residues with said electrons generated by said electron beam-based inspection tool adapted to operate with a probe current of approximately 1.0*10^-9 to 1.0*10^-12 Amperes. 13. The method for removing post etch residues as recited in claim 10 wherein step b) comprises bombarding said post etch residues with said electrons generated by said electron beam-based inspection tool adapted to operate with a total power of approximately 1.0*10^-7 Watts. |
TECHNICAL FIELD

The present claimed invention relates to the field of semiconductor processing. More specifically, the present claimed invention relates to the removal of post etch residues during semiconductor processing.

BACKGROUND ART

The geometries of semiconductor devices are aggressively being scaled smaller and smaller to meet cost reduction and real estate requirements. Consequently, more polymers are needed during etch processes to protect exposed sidewalls and underlying layers while maintaining tight profile control and better etch selectivity. As a result of the use of the additional polymers, complete and thorough polymer removal is becoming increasingly challenging.

Semiconductor fabrication processes are also now incorporating various new materials, including new multiple dielectrics, metals, and resists, to achieve better device performance. The recently incorporated new materials have in turn necessitated the development of new etch processes. Some of the new etch sources for etching the new materials have complex etch chemistries which, in turn, create new etch residues. Traditionally, wet or dry clean processes, or combinations thereof, are used to ensure etch residue removal and to achieve the required low contact resistance in, for example, vias. Experiments have shown, however, that conventional wet or dry clean processes cannot effectively remove the etch residues, especially in high aspect ratio etch environments such as, for example, via etching. Attempts have also been made to introduce fluorine-based chemicals within a low temperature cleaning process, but such fluorine-based chemicals deleteriously damage oxide-based dielectric materials. Also, conventional wet clean process steps typically require the use of a solvent that can be toxic, costly to use and dispose of, and difficult to handle. Moreover, in many cases, the chemical reactions occurring during the wet clean processes do not produce enough activation energy to remove all of the etch residues. As a result, deleterious post etch residues remain, for example, in vias of the semiconductor devices being formed. Post etch residues which remain in contact holes, vias, or various other structures of the semiconductor device may ultimately result in device failure.

In yet another conventional approach, a dry clean process is employed in an attempt to remove post etch residues. Conventional dry cleaning approaches are based on a plasma process. The traditional plasma excitation source used in a dry clean process can damage the semiconductor device due to the relatively large number of high-energy ions present in the plasma region. These high-energy ions can sputter chamber walls, create dielectric damage, and form unwanted driven-in mobile ions within the semiconductor wafer. Hence, traditional plasma-based dry clean processes have corresponding disadvantages which render them poorly suited to the removal of post etch residues.

In still another conventional approach, microwave downstream plasma processing has been employed to remove post etch residues. The microwave downstream plasma process offers low damage performance. Unfortunately, however, the microwave downstream plasma process requires that the semiconductor wafers be heated to quite high temperatures. Specifically, in order to achieve a reasonable throughput, conventional microwave downstream plasma processes require heating the semiconductor wafer to temperatures in the range of 200 degrees Celsius or higher.
The use of such high temperatures can cause post etch polymer residues to harden on the wafer, thereby rendering their removal even more difficult.

As yet another concern, in order to achieve widespread acceptance, and to ensure affordability, any method of removing post etch residues which overcomes the above-listed drawbacks should be compatible with existing semiconductor fabrication processes.

Thus, the need has arisen for a method and system to remove post etch residues. Another need exists for a method and system which meet the above needs and which do not suffer from the disadvantages associated with conventional post etch residue removal approaches. Yet another need exists for a method and system for removing post etch residues which meet the above needs and which are compatible with existing semiconductor fabrication processes such that significant revamping of semiconductor capital equipment is not required.

DISCLOSURE OF THE INVENTION

The present invention provides a method and system to remove post etch residues. The present invention further provides a method and system which achieve the above accomplishments and which do not suffer from the disadvantages associated with conventional post etch residue removal approaches. The present invention also provides a method and system for removing post etch residues which achieve the above accomplishments and which are compatible with existing semiconductor fabrication processes such that significant revamping of semiconductor capital equipment is not required.

Specifically, in one method embodiment the present invention recites disposing a surface, having post etch residues adhered thereto, proximate to an electron beam source which generates electrons. The present method embodiment then recites bombarding the post etch residues with the electrons such that the post etch residues are removed from the surface to which the post etch residues were adhered.

In another embodiment, the present invention includes the steps of the above-described embodiment and further recites transporting the loosened post etch residues away from the surface, to which the post etch residues were adhered, by creating a vacuum proximate the surface.

These and other technical advantages of the present invention will no doubt become obvious to those of ordinary skill in the art after having read the following detailed description of the preferred embodiments which are illustrated in the various drawing figures.

BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and form a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention:

PRIOR ART FIG. 1 is a side sectional view of a semiconductor structure having unwanted post etch residues disposed within a via.

FIG. 2 is a side sectional view of the structure of PRIOR ART FIG. 1 after having a layer of photoresist removed therefrom and depicting bombarding of unwanted post etch residues with electrons in accordance with one embodiment of the present claimed invention.

FIG. 3 is a side sectional view of the structure of FIG. 2 after the post etch residues have been removed by electron bombardment in accordance with one embodiment of the present claimed invention.

FIG. 4 is a side sectional view of the structure of FIG. 3 after the deposition of a layer of metal.

FIG. 5 is a schematic depiction of a system used to remove post etch residues in accordance with one embodiment of the present claimed invention.

FIG. 6 is a flow chart of steps performed to remove post etch residues in accordance with one embodiment of the present claimed invention.

FIG. 7 is a flow chart of steps performed to remove post etch residues in accordance with another embodiment of the present claimed invention.

The drawings referred to in this description should be understood as not being drawn to scale except if specifically noted.

BEST MODE FOR CARRYING OUT THE INVENTION

Reference will now be made in detail to the preferred embodiments of the invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with the preferred embodiments, it will be understood that they are not intended to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be obvious to one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the present invention.

With reference now to Prior Art FIG. 1, a side sectional view of a semiconductor structure 100 is shown. Specifically, semiconductor structure 100 includes a semiconductor substrate 102, contact regions 104a and 104b, a dielectric layer 106, and an overlying photoresist layer 108. Openings or vias 110a and 110b are shown formed extending through photoresist layer 108 and dielectric layer 106 such that contact regions 104a and 104b are exposed. It should be noted that structure 100 is cited as an exemplary structure only, and that the present method and system are well suited to use with various other types and configurations of semiconductor structures. That is, as will be described in detail below, the post etch residue removal method and system of the present embodiments are not limited solely to use with the structure shown in Prior Art FIG. 1.

With reference still to Prior Art FIG. 1, post etch residues, typically shown as 112, are shown disposed within vias 110a and 110b. As mentioned above, as the geometries of semiconductor devices are aggressively being scaled smaller and smaller to meet cost reduction and real estate requirements, more polymers are needed during etch processes to protect exposed sidewalls and underlying layers. As a result of the use of the additional polymers, complete and thorough polymer removal is becoming increasingly difficult. Hence, post etch residues of polymer material may remain in contact holes, vias, or various other structures. The presence of such polymer post etch residues may ultimately result in device failure. As will be described below, the present embodiments provide a method and system for removing post etch residues such as residues 112 of Prior Art FIG. 1. For purposes of clarity, the following discussion will utilize the side sectional views of FIGS. 1-4 in conjunction with FIG. 5 and flow charts 600 and 700 of FIGS. 6 and 7, respectively, to clearly describe the various embodiments of the present invention.

With reference now to step 602 of flow chart 600 of FIG. 6, and also to FIG. 2, one method embodiment of the present invention recites disposing a surface, having post etch residues adhered thereto, proximate to an electron beam source which generates electrons. Arrows 200 of FIG. 2 are intended to represent the directional path of electrons which are generated by the electron beam source. In one embodiment, the electron beam source is disposed within an electron beam-based inspection tool. As shown in FIG. 2, in the present embodiment, photoresist layer 108 of Prior Art FIG. 1 has been removed.

Referring now to FIG. 5, a schematic diagram of one embodiment of a post etch residue removal system 500 is shown. In one embodiment, post etch removal system 500 includes an electron beam-based inspection tool 502 comprised of: an electron generating portion 503 (having an electron emission tip 504), electron focusing lenses 506, a suppressor portion 508, and a power and control source 510. System 500 of the present embodiment further includes a controllable surface moving device unit 512 coupled to electron beam-based inspection tool 502. Controllable surface moving device unit 512 is adapted to move a semiconductor structure, typically shown as 514, such that electrons emitted from electron emission tip 504 will impinge the desired location on semiconductor structure 514. In another embodiment, a vacuum source 516 is coupled to electron beam-based inspection tool 502.

Referring again to FIG. 2, and to step 602 of flow chart 600 of FIG. 6, in one method embodiment the present invention recites disposing a surface, having post etch residues adhered thereto, proximate to an electron beam source which generates electrons. As mentioned above, post etch residue removal system 500 of FIG. 5 includes a controllable surface moving device unit 512 which is adapted to move a semiconductor structure such that electrons emitted from electron emission tip 504 will impinge the desired location on the semiconductor structure. More particularly, in one embodiment of the present invention, semiconductor structure 100 of FIG. 2 is located with respect to electron beam-based inspection tool 502 such that generated electrons are directed towards openings 110a and 110b. In one embodiment, the position of semiconductor structure 100 is repeatedly adjusted such that numerous desired regions are disposed proximate to the path of the generated electrons.

With reference now to step 604 of flow chart 600 and still to FIG. 2, the present method embodiment recites bombarding the post etch residues with the electrons such that the post etch residues are removed from the surface to which the post etch residues were adhered. More specifically, in the present embodiment, post etch residues 112 are bombarded with electrons generated by electron beam-based inspection tool 502 of FIG. 5 such that post etch residues 112 are removed from within vias 110a and 110b. The bombarding electrons break the loose bonds between the polymer and the materials to which the polymer is adhered, and efficiently evaporate the polymer post etch residues. In the present embodiment, electron beam-based inspection tool 502 provides a low energy electron beam source.
That is, electron beam-based inspection tool 502 may operate with an accelerating voltage of approximately 1000-3000 volts, a probe current of approximately 1.0*10^-9 to 1.0*10^-12 Amperes, and a total power of approximately 1.0*10^-7 Watts. (These figures are mutually consistent, since beam power is the product of accelerating voltage and probe current; for example, 2000 volts at a probe current of 5.0*10^-11 Amperes gives 1.0*10^-7 Watts.)

The low energy electron beam source of the present embodiments has distinct advantages associated therewith. Namely, unlike some conventional post etch residue removal methods (e.g., traditional plasma dry clean processes) which employ high energy ion sources, the low energy electrons of the present embodiment remove the post etch residues without sputtering chamber walls, without damaging dielectric materials, and without inducing mobile ion drive-in. As yet another advantage, unlike conventional processes such as, for example, conventional wet clean processes, the present embodiments remove post etch residues without the use of toxic waste generating chemicals. Furthermore, unlike conventional microwave downstream plasma processing, the present embodiments remove the post etch residues without significantly heating the post etch residues and the surface to which the post etch residues were adhered. Hence, in the present invention, chemicals are saved and the environment is protected, as no chemical agent is needed. Moreover, the present invention is a substantially damage-free process because the total power is approximately 1.0*10^-7 Watts, as compared to the lowest damage form of conventional downstream dry etching, which has a total power of approximately 100 Watts.

With reference now to FIG. 3, a side sectional view of the structure 100 of FIG. 2 is shown after post etch residues 112 have been removed therefrom in accordance with one embodiment of the present claimed invention. As shown in FIG. 3, openings 110a and 110b are free of post etch residues. As a result, subsequent processing steps can be performed without concern for post etch residue-induced defects. Subsequent processing steps may include, for example, deposition of an overlying metal layer 114 as shown in FIG. 4. Although such a subsequent processing step is shown in FIG. 4, such a step is exemplary only, and the present invention is well suited to use with various other subsequent processing steps as well. Additionally, the present invention is well suited to performing steps 602 and 604 of FIG. 6 at process stages other than as depicted in FIGS. 2 and 3.

With reference now to FIG. 7, a flow chart 700 of steps performed in accordance with another embodiment of the present invention is shown. The embodiment of FIG. 7 includes the same steps (602-604) as were recited and described above in detail in conjunction with the description of the embodiments of FIG. 6. For purposes of clarity and brevity, a discussion of steps 602-604 is not repeated here. The embodiment of FIG. 7, and particularly step 702, recites transporting the post etch residues, which have been removed from the surface, away from the surface to which the post etch residues, 112 of FIG. 2, were previously adhered. In one embodiment of the present invention, the loosened post etch residues are transported away from the surface, to which the post etch residues were adhered, by creating a vacuum proximate to the surface. More particularly, in one embodiment, post etch removal system 500 of FIG. 5 includes a vacuum pump 516 for creating a vacuum proximate to semiconductor structure 514.
The vacuum causes the loosened post etch residues to move away from the semiconductor structure 514, and enables subsequent exhausting of the loosened particles. In one embodiment, a vacuum of approximately 5.0*10^-7 Pascals is created by post etch removal system 500 proximate to semiconductor structure 514. As yet another benefit, the vacuum created by post etch removal system 500 can also remove residues which were left by traditional dry and wet clean processes performed prior to the removal of the post etch residues.

Beneficially, the method and system of the present embodiments are realized using existing semiconductor fabrication devices and processes, such that significant revamping of semiconductor capital equipment is not required. As a result, the present embodiments do not require significant costs to implement.

Thus, the present invention provides a method and system to remove post etch residues. The present invention further provides a method and system which achieve the above accomplishments and which do not suffer from the disadvantages associated with conventional post etch residue removal approaches. The present invention also provides a method and system for removing post etch residues which achieve the above accomplishments and which are compatible with existing semiconductor fabrication processes such that significant revamping of semiconductor capital equipment is not required.

The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents. |
Mutual triggering of electrostatic discharge (ESD) fingers is improved by creating a base contact in each individual finger and connecting all of these base contacts in parallel. The local base contact in each ESD finger is located at a position where the base voltage significantly increases when the ESD current increases. Thus, when an ESD finger is triggered, its local base voltage will tend to significantly increase. Since all of the ESD finger bases are connected in parallel, this local voltage increase will forward bias the base-emitter junctions of the other ESD fingers, thus triggering them all. Sharing the triggering current from the fastest ESD finger with the slower ones ensures that all ESD fingers are triggered during an ESD event. |
CLAIMS What is claimed is: 1. An apparatus for electrostatic discharge (ESD) protection of an integrated circuit pad, comprising: a plurality of ESD fingers (300), wherein each of the plurality of ESD fingers is coupled to a signal pad connection (323), a distributed base connection (316), a polysilicon layer (322) coupled to a gate connection, and a ground connection (318). 2. The apparatus according to claim 1, wherein each of the plurality of ESD fingers (300) comprises: an NMOS device (312) comprising a high voltage (HV) drain formed by an N-well (330) formed in a P-substrate (308) and coupled to the signal pad (323) through an N+diffusion contact (332) in the N-well (330), a gate formed by the polysilicon layer (322) over the P-substrate (308) and insulated therefrom by a thin oxide layer therebetween, and a source formed by a first N+diffusion contact (302) in the P-substrate (308) and coupled to the distributed base connection (316); a first NPN bipolar device (306) comprising a collector formed by the N-well (330), a base formed by the P-substrate (308), and an emitter formed by a second N+diffusion contact (310) in the P-substrate (308) and coupled to the ground connection (318); a second NPN bipolar device (324) comprising a collector formed by the N-well (330), a base formed by the P-substrate (308), and an emitter formed by the first N+diffusion contact (302) in the P-substrate (308) and coupled to the distributed base connection (316); a first P+diffusion contact (314) in the P-substrate (308) and coupled to the distributed base connection (316), wherein the first P+diffusion contact (314) is butted proximate to the first N+diffusion contact (302); and a second P+diffusion contact (320) in the P-substrate (308) and coupled to the ground connection (318). 3. The apparatus according to claim 2, wherein the second NPN bipolar device (324) is a secondary contribution NPN bipolar device to the first NPN bipolar device (306). 4. The apparatus according to claim 2, further comprising: a first resistor (328) formed in the P-substrate (308) that couples the bases of the first NPN bipolar device (306) and the second NPN bipolar device (324) to the first P+diffusion contact (314); and a second resistor (326) formed in the P-substrate (308) that couples the bases of the first NPN bipolar device (306) and the second NPN bipolar device (324) to the second P+diffusion contact (320). 5. The apparatus according to claim 4, wherein the first base resistor (328) is an unwanted parasitic resistor while the second base-emitter resistor (326) is desired. 6. The apparatus according to claim 4, wherein the first base resistor (328) connects the bases of the first NPN bipolar device (306) and the second NPN bipolar device (324) to the distributed base connection (316). 7. The apparatus according to claim 4, wherein the second resistor (326) is higher in resistance than the first resistor (328) for maximizing mutual triggering of the plurality of ESD fingers (300). 8. The apparatus according to claim 2, wherein the gate formed by the polysilicon layer (322) is coupled to the distributed base connection (316). 9. The apparatus according to claim 2, wherein the gate formed by the polysilicon layer (322) is coupled to the distributed base connection (316) through a resistor. 10. The apparatus according to claim 2, wherein the gate formed by the polysilicon layer (322) is coupled to the ground connection (318) through a resistor. 11.
The apparatus according to claim 2, wherein the gate formed by the polysilicon layer (322) is coupled to the ground connection (318). 12. The apparatus according to claim 2, wherein the gate formed by the polysilicon layer (322) is coupled to an ESD clamp triggering circuit (110). 13. The apparatus according to claim 1, wherein each of the plurality of ESD fingers (400a) comprises: an NMOS device (312) comprising a high voltage (HV) drain formed by an N-well (330a) butted to a P-well body (308a) and coupled to the signal pad (323) through an N+diffusion contact (332) in the N-well (330a), a gate formed by the polysilicon layer (322) over the P-well body (308a) and insulated therefrom by a thin oxide layer therebetween, and a source formed by a first N+diffusion contact (302) in the P-well body (308a) and coupled to the distributed base connection (316); a first NPN bipolar device (306) comprising a collector formed by the N-well (330a), a base formed by the P-well body (308a), and an emitter formed by a second N+diffusion contact (310) in the P-well body (308a) and coupled to the ground connection (318); a second NPN bipolar device (324) comprising a collector formed by the N-well (330a), a base formed by the P-well body (308a), and an emitter formed by the first N+diffusion contact (302) in the P-well body (308a) and coupled to the distributed base connection (316); a first P+diffusion contact (314) in the P-well body (308a) and coupled to the distributed base connection (316), wherein the first P+diffusion contact (314) is butted proximate to the first N+diffusion contact (302); a second P+diffusion contact (320) in the P-well body (308a) and coupled to the ground connection (318); and an isolation substrate (334) having the P-well body (308a) and the N-well (330a) disposed thereon. 14. The apparatus according to claim 13, wherein the second NPN bipolar device (324) is a secondary contribution NPN bipolar device to the first NPN bipolar device (306). 15. The apparatus according to claim 13, wherein the signal pad connection (323) is connected to a positive supply while the ground connection (318) is connected to a signal pad to be protected. 16. The apparatus according to claim 13, further comprising: a first resistor (328) formed in the P-well body (308a) that couples the bases of the first NPN bipolar device (306) and the second NPN bipolar device (324) to the first P+diffusion contact (314); and a second resistor (326) formed in the P-well body (308a) that couples the bases of the first NPN bipolar device (306) and the second NPN bipolar device (324) to the second P+diffusion contact (320). 17. The apparatus according to claim 16, wherein the first base resistor (328) is an unwanted parasitic resistor while the second base-emitter resistor (326) is a desired parasitic resistor. 18. The apparatus according to claim 16, wherein the second resistor (326) is higher in resistance than the first resistor (328) for maximizing mutual triggering of the plurality of ESD fingers (400a). 19. The apparatus according to claim 13, wherein the gate formed by the polysilicon layer (322) is coupled to the distributed base connection (316). 20. The apparatus according to claim 13, wherein the gate formed by the polysilicon layer (322) is coupled to the ground connection (318) through a resistor. 21. The apparatus according to claim 13, wherein the gate formed by the polysilicon layer (322) is coupled to an ESD clamp triggering circuit (110). 22.
The apparatus according to claim 1, wherein each of the plurality of ESD fingers (400b) comprises: an NMOS device (312) comprising a high voltage (HV) drain formed by a deep N-well (330b) surrounding a P-well body (308b) and coupled to the signal pad connection (323) through an N+diffusion contact (332) in the deep N-well (330b), a gate formed by the polysilicon layer (322) over the P-well body (308b) and insulated therefrom by a thin oxide layer therebetween, and a source formed by a first N+diffusion contact (302) in the P-well body (308b) and coupled to the distributed base connection (316); a first NPN bipolar device (306) comprising a collector formed by the deep N-well (330b), a base formed by the P-well body (308b), and an emitter formed by a second N+diffusion contact (310) in the P-well body (308b) and coupled to the ground connection (318); a second NPN bipolar device (324) comprising a collector formed by the deep N-well (330b), a base formed by the P-well body (308b), and an emitter formed by the first N+diffusion contact (302) in the P-well body (308b) and coupled to the distributed base connection (316); a first P+diffusion contact (314) in the P-well body (308b) and coupled to the distributed base connection (316), wherein the first P+diffusion contact (314) is butted proximate to the first N+diffusion contact (302); a second P+diffusion contact (320) in the P-well body (308b) and coupled to the ground connection (318); and a P-substrate (308) having the deep N-well (330b) formed therein. 23. The apparatus according to claim 22, wherein the second NPN bipolar device (324) is a secondary contribution NPN bipolar device to the first NPN bipolar device (306). 24. The apparatus according to claim 5, wherein the first base resistor (328) is an unwanted parasitic resistor while the second base-emitter resistor (326) is a desired parasitic resistor. 25. The apparatus according to claim 5, wherein the second resistor (326) is higher in resistance than the first resistor (328) for maximizing mutual triggering of the plurality of ESD fingers (400b). 26. The apparatus according to claim 22, wherein the gate formed by the polysilicon layer (322) is coupled to the distributed base connection (316). 27. The apparatus according to claim 22, wherein the gate formed by the polysilicon layer (322) is coupled to the distributed base connection (316) through a resistor. 28. The apparatus according to claim 22, wherein the gate formed by the polysilicon layer (322) is coupled to the ground connection (318) through a resistor. 29. The apparatus according to claim 22, wherein the gate formed by the polysilicon layer (322) is coupled to the ground connection (318). 30. The apparatus according to claim 22, wherein the gate formed by the polysilicon layer (322) is coupled to an ESD clamp triggering circuit (110). 31. The apparatus according to claim 22, wherein the signal pad connection (323) is connected to a positive supply while the ground connection (318) is connected to a signal pad to be protected. 32.
An apparatus for electrostatic discharge (ESD) protection of an integrated circuit pad, comprising: a plurality of ESD fingers (400c), wherein each of the plurality of ESD fingers (400c) is coupled to a signal pad connection (423), a distributed base connection (416), a polysilicon layer (422) coupled to a gate connection, and a ground connection (418); wherein each of the plurality of ESD fingers (400c) comprises: a PMOS device (412) comprising a drain formed by a P-well (430c) formed in a deep N-well (408c) and coupled to a ground pad (418) through a P+diffusion contact (432) in the P-well (430c), a gate formed by the polysilicon layer (422) over the deep N-well (408c) and insulated therefrom by a thin oxide layer therebetween, and a source formed by a first P+diffusion contact (402) in the deep N-well (408c) and coupled to the distributed base connection (416); a first PNP bipolar device (406) comprising a collector formed by the P-well (430c), a base formed by the deep N-well (408c), and an emitter formed by a second P+diffusion contact (410) in the deep N-well (408c) and coupled to the signal pad connection (423); a second PNP bipolar device (424) comprising a collector formed by the P-well (430c), a base formed by the deep N-well (408c), and an emitter formed by the first P+diffusion contact (402) in the deep N-well (408c) and coupled to the distributed base connection (416); a first N+diffusion contact (414) in the deep N-well (408c) and coupled to the distributed base connection (416), wherein the first N+diffusion contact (414) is butted proximate to the first P+diffusion contact (402); and a second N+diffusion contact (420) in the deep N-well (408c) and coupled to the signal pad connection (423). 33. The apparatus according to claim 32, wherein the second PNP bipolar device (424) is a secondary contribution PNP bipolar device to the first PNP bipolar device (406). 34. The apparatus according to claim 32, further comprising a first resistor (428) that couples the bases of the first PNP bipolar device (406) and the second PNP bipolar device (424) to the first N+diffusion contact (414). 35. The apparatus according to claim 33, further comprising a second resistor (426) that couples the bases of the first PNP bipolar device (406) and the second PNP bipolar device (424) to a second N+diffusion contact (420), wherein the second resistor (426) is higher in resistance than the first resistor (428) for maximizing mutual triggering of the plurality of ESD fingers (400c). 36. The apparatus according to claim 35, wherein the first base resistor (428) is an unwanted parasitic resistor while the second base-emitter resistor (426) is a desired parasitic resistor. 37. The apparatus according to claim 32, wherein the gate formed by the polysilicon layer (422) is coupled to the distributed base connection (416). 38. The apparatus according to claim 32, wherein the gate formed by the polysilicon layer (422) is coupled to the distributed base connection (416) through a resistor. 39. The apparatus according to claim 32, wherein the gate formed by the polysilicon layer (422) is coupled to the ground connection (418) through a resistor. 40. The apparatus according to claim 32, wherein the gate formed by the polysilicon layer (422) is coupled to the ground connection (418). 41. The apparatus according to claim 32, wherein the gate formed by the polysilicon layer (422) is coupled to an ESD clamp triggering circuit (110). 42.
The apparatus according to claim 32, wherein the signal pad connection (423) is connected to a positive supply while the ground connection (418) is connected to a signal pad to be protected. 43. An apparatus for electrostatic discharge (ESD) protection of an integrated circuit pad, comprising: a plurality of ESD fingers (400d), wherein each of the plurality of ESD fingers (400d) is coupled to a signal pad connection (423), a distributed base connection (416), a polysilicon layer (422) coupled to a gate connection, and a ground connection (418); wherein each of the plurality of ESD fingers (400d) comprises: a PMOS device (412) comprising a drain formed by a first P+diffusion contact (432) formed in a P-well (430d) formed in a deep N-well (408d) and coupled to a ground pad (418), a gate formed by the polysilicon layer (422) over the deep N-well (408d) and insulated therefrom by a thin oxide layer therebetween, and a source formed by a second P+diffusion contact (442) in the deep N-well (408d) and coupled to the signal pad connection (423); an NPN bipolar device (406) comprising a collector formed by the deep N-well (408d) and coupled to the signal pad connection (423) through a second N+diffusion contact (444), a base formed by the P-well (430d) that is formed in the deep N-well (408d), and an emitter formed by a first N+diffusion (410) built inside the P-well (430d) and coupled to the ground connection (418); a third P+diffusion contact (414) in the P-well (430d) and coupled to the distributed base connection (416); a first P+diffusion contact (432) formed in the P-well (430d) and acting as the base contact (426) to the ground connection (418); and a P-substrate having the deep N-well (408d) formed therein. 44. The apparatus according to claim 43, further comprising a first resistor (428) formed between the base of the NPN bipolar device (406) and the first P+diffusion contact (414). 45. The apparatus according to claim 43, further comprising a second resistor (426) formed between the base of the NPN bipolar device (406) and the first P+diffusion contact (432) formed in the P-well (430d) and coupled to the ground pad connection (418), wherein the second resistor (426) is higher in resistance than the first resistor (428) for maximizing mutual triggering of the plurality of ESD fingers (400d). 46. The apparatus according to claim 44, wherein the first base resistor (428) is an unwanted parasitic resistor while the second base-emitter resistor (426) is a desired parasitic resistor. 47. The apparatus according to claim 43, wherein the gate formed by the polysilicon layer (422) is coupled to the pad connection (423). 48. The apparatus according to claim 43, wherein the gate formed by the polysilicon layer (422) is coupled to the pad connection (423) through a resistor. 49. The apparatus according to claim 43, wherein the gate formed by the polysilicon layer (422) is coupled to the pad connection (423). 50. The apparatus according to claim 43, wherein the gate formed by the polysilicon layer (422) is coupled to an ESD clamp triggering circuit (110). 51. The apparatus according to claim 43, wherein the signal pad connection (423) is connected to a positive supply while the ground connection (418) is connected to a signal pad to be protected. 52.
An apparatus for electrostatic discharge (ESD) protection of an integrated circuit pad, comprising: a plurality of ESD fingers (900), wherein each of the plurality of ESD fingers (900) is coupled to a signal pad connection (923), a distributed base connection (916), and a ground connection (918); wherein each of the plurality of ESD fingers (900) comprises: an NPN bipolar device (906) comprising a collector formed by a first N+diffusion contact (932) formed in an N-well (930) formed in a P-substrate (908) and coupled to the signal pad connection (923), a base formed in the P-substrate (908), and an emitter formed by a second N+diffusion contact (910) formed in the P-substrate (908) and coupled to the ground connection (918). 53. The apparatus according to claim 52, further comprising a first resistor (928) formed between the base of the NPN bipolar device (906) and a first P+diffusion contact (914). 54. The apparatus according to claim 53, further comprising a second resistor (926) formed between the base of the NPN bipolar device (906) and a second P+diffusion contact (920) coupled to the ground connection (918), wherein the second resistor (926) is higher in resistance than the first resistor (928) for maximizing mutual triggering of the plurality of ESD fingers (900). 55. The apparatus according to claim 53, wherein the first base resistor (928) is an unwanted parasitic resistor while the second base-emitter resistor (926) is a desired parasitic resistor. 56. An apparatus for electrostatic discharge (ESD) protection of an integrated circuit pad, comprising: a plurality of ESD fingers (1000), wherein each of the plurality of ESD fingers (1000) is coupled to a signal pad connection (1023), a distributed base connection (1016), and a ground connection (1018); wherein each of the plurality of ESD fingers (1000) comprises: a PNP bipolar device (1006) comprising a collector formed by the P-substrate (1008) and coupled to the ground connection (1018) through a first P+diffusion contact (1032) formed in the P-substrate (1008), a base formed by an N-well (1030) and coupled to the distributed base connection (1016) through a first N+diffusion contact (1014) formed in the N-well (1030) and coupled to the pad connection (1023) through a second N+diffusion contact (1020) formed in the N-well (1030), and an emitter formed by a second P+diffusion contact (1010) formed in the N-well base (1030) and coupled to the signal pad connection (1023). 57. The apparatus according to claim 56, further comprising a first resistor (1028) formed between the base of the PNP bipolar device (1006) and the first N+diffusion contact (1014). 58. The apparatus according to claim 57, further comprising a second resistor (1026) formed between the base of the PNP bipolar device (1006) and a second N+diffusion contact (1020) formed in the N-well (1030) and coupled to the signal pad connection (1023), wherein the second resistor (1026) is higher in resistance than the first resistor (1028) for maximizing mutual triggering of the plurality of ESD fingers (1000). 59. The apparatus according to claim 57, wherein the first base resistor (1028) is an unwanted parasitic resistor while the second base-emitter resistor (1026) is a desired parasitic resistor. 60.
An apparatus for electrostatic discharge (ESD) protection of an integrated circuit pad, comprising: a plurality of ESD fingers (1100), wherein each of the plurality of ESD fingers (1100) is coupled to a signal pad connection (1123), a distributed base connection (1116), and a ground connection (1118); wherein each of the plurality of ESD fingers (1100) comprises: an NPN bipolar device (1106) comprising a collector formed by a deep N-well (1108) and coupled to the signal pad connection (1123) through a second N+diffusion contact (1144), a base formed by a P-well (1130) formed in the deep N-well (1108) and coupled to the distributed base connection (1116) through a first P+diffusion contact (1114) formed in the P-well (1130) and coupled to the ground connection (1118) through a second P+diffusion contact (1132) formed in the P-well (1130), and an emitter formed by a first N+diffusion (1110) formed inside the P-well (1130) and coupled to the ground connection (1118); and a P-substrate having the deep N-well (1108) formed therein. 61. The apparatus according to claim 60, further comprising a first resistor (1128) formed between the base of the NPN bipolar device (1106) and the first P+diffusion contact (1114). 62. The apparatus according to claim 61, further comprising a second resistor (1126) formed between the base of the NPN bipolar device (1106) and the second P+diffusion contact (1132) formed in the P-well (1130), wherein the second resistor (1126) is higher in resistance than the first resistor (1128) for maximizing mutual triggering of the plurality of ESD fingers (1100). 63. The apparatus according to claim 61, wherein the first base resistor (1128) is an unwanted parasitic resistor while the second base-emitter resistor (1126) is a desired parasitic resistor. 64. The apparatus according to claim 60, wherein the signal pad connection (1123) is connected to a positive supply while the ground connection (1118) is connected to a signal pad to be protected. |
MULTI-CHANNEL HOMOGENOUS PATH FOR ENHANCED MUTUAL TRIGGERING OF ELECTROSTATIC DISCHARGE FINGERS

RELATED PATENT APPLICATION

This application claims priority to commonly owned United States Provisional Patent Application Serial Number 61/510,357, filed July 21, 2011, entitled "Multi-Channel Homogenous Path for Enhanced Mutual Triggering of Electrostatic Discharge Fingers," by Philippe Deval, Fernandez Marija and Besseux Patrick, which is hereby incorporated by reference herein for all purposes.

TECHNICAL FIELD

The present disclosure relates to high voltage (HV) metal oxide semiconductor (MOS) devices, and more particularly, to providing enhanced electrostatic discharge (ESD) protection for the HV MOS devices.

BACKGROUND

Controller-area network (CAN or CAN-bus) is a vehicle bus standard designed to allow microcontrollers and devices to communicate with each other within a vehicle without a host computer. CAN is a message-based protocol designed specifically for automotive applications but is now also used in other areas such as industrial automation and medical equipment. The LIN-Bus (Local Interconnect Network) is a vehicle bus standard or computer networking bus-system used within current automotive network architectures. The LIN specification is enforced by the LIN-consortium. The LIN bus is a small and slow network system that is used as a cheap sub-network of a CAN bus to integrate intelligent sensor devices or actuators in today's vehicles.

The automotive industry is beginning to require higher than the standard 4 kV HBM ESD target. Current information indicates that greater than 6 kV is required (targeting 8 kV on the bus pins and SPLIT pin). Also, the industry may subject the device to system level tests as defined by IEC 801 and IEC 61000-4-2. Therefore it is necessary to meet IEC 1000-4-2:1995 specifications, as well as the following reliability specifications on all pins of an integrated circuit device used in a CAN and/or LIN system: ESD: EIA/JESD22 A114/A113; ESD: IEC 1000-4-2:1995. High energy ESD discharge (8 kV HBM / 6 kV IEC 61000-4-2) induces a high current peak flowing in the ESD protection (up to 20 A at 6 kV IEC 61000-4-2; for reference, the IEC 61000-4-2 contact-discharge waveform specifies a first current peak of roughly 3.75 A per kV, i.e., about 22 A at 6 kV). Adding a 220 pF load capacitor in parallel with the integrated circuit device signal pad for protection (an automotive requirement) significantly amplifies this current peak (the discharge current of this capacitor adds to the ESD current, and there is substantially no series resistance with this load capacitor to limit its discharge current when the ESD circuit snaps back).

SUMMARY

Therefore what is needed is a more robust ESD protection circuit capable of handling enhanced high energy ESD discharge without damage to the protected integrated circuit device.

According to an embodiment, an apparatus for electrostatic discharge (ESD) protection of an integrated circuit pad may comprise: a plurality of ESD fingers (300), wherein each of the plurality of ESD fingers may be coupled to a signal pad connection (323), a distributed base connection (316), a polysilicon layer (322) coupled to a gate connection, and a ground connection (318).
According to a further embodiment, each of the plurality of ESD fingers (300) may comprise: an NMOS device (312) comprising a high voltage (HV) drain formed by an N-well (330) formed in a P-substrate (308) and coupled to the signal pad (323) through an N+diffusion contact (332) in the N-well (330), a gate formed by the polysilicon layer (322) over the P-substrate (308) and insulated therefrom by a thin oxide layer therebetween, and a source formed by a first N+diffusion contact (302) in the P-substrate (308) and coupled to the distributed base connection (316); a first NPN bipolar device (306) comprising a collector formed by the N-well (330), a base formed by the P-substrate (308), and an emitter formed by a second N+diffusion contact (310) in the P-substrate (308) and coupled to the ground connection (318); a second NPN bipolar device (324) comprising a collector formed by the N-well (330), a base formed by the P-substrate (308), and an emitter formed by the first N+diffusion contact (302) in the P-substrate (308) and coupled to the distributed base connection (316); a first P+diffusion contact (314) in the P-substrate (308) and coupled to the distributed base connection (316), wherein the first P+diffusion contact (314) may be butted proximate to the first N+diffusion contact (302); and a second P+diffusion contact (320) in the P-substrate (308) and coupled to the ground connection (318). According to a further embodiment, the second NPN bipolar device (324) may be a secondary contribution NPN bipolar device to the first NPN bipolar device (306). According to a further embodiment, a first resistor (328) may be formed in the P-substrate (308) that couples the bases of the first NPN bipolar device (306) and the second NPN bipolar device (324) to the first P+diffusion contact (314); and a second resistor (326) may be formed in the P-substrate (308) that couples the bases of the first NPN bipolar device (306) and the second NPN bipolar device (324) to the second P+diffusion contact (320). According to a further embodiment, the first base resistor (328) may be an unwanted parasitic resistor while the second base-emitter resistor (326) may be desired. According to a further embodiment, the first base resistor (328) connects the bases of the first NPN bipolar device (306) and the second NPN bipolar device (324) to the distributed base connection (316). According to a further embodiment, the second resistor (326) may be higher in resistance than the first resistor (328) for maximizing mutual triggering of the plurality of ESD fingers (300). According to a further embodiment, the gate formed by the polysilicon layer (322) may be coupled to the distributed base connection (316). According to a further embodiment, the gate formed by the polysilicon layer (322) may be coupled to the distributed base connection (316) through a resistor. According to a further embodiment, the gate formed by the polysilicon layer (322) may be coupled to the ground connection (318) through a resistor. According to a further embodiment, the gate formed by the polysilicon layer (322) may be coupled to the ground connection (318). According to a further embodiment, the gate formed by the polysilicon layer (322) may be coupled to an ESD clamp triggering circuit (110).
According to a further embodiment, each of the plurality of ESD fingers (400a) may comprise: an NMOS device (312) comprising a high voltage (HV) drain formed by an N-well (330a) butted to a P-well body (308a) and coupled to the signal pad (323) through an N+diffusion contact (332) in the N-well (330a), a gate formed by the polysilicon layer (322) over the P-well body (308a) and insulated therefrom by a thin oxide layer therebetween, and a source formed by a first N+diffusion contact (302) in the P-well body (308a) and coupled to the distributed base connection (316); a first NPN bipolar device (306) comprising a collector formed by the N-well (330a), a base formed by the P-well body (308a), and an emitter formed by a second N+diffusion contact (310) in the P-well body (308a) and coupled to the ground connection (318); a second NPN bipolar device (324) comprising a collector formed by the N-well (330a), a base formed by the P-well body (308a), and an emitter formed by the first N+diffusion contact (302) in the P-well body (308a) and coupled to the distributed base connection (316); a first P+diffusion contact (314) in the P-well body (308a) and coupled to the distributed base connection (316), wherein the first P+diffusion contact (314) may be butted proximate to the first N+diffusion contact (302); a second P+diffusion contact (320) in the P-well body (308a) and coupled to the ground connection (318); and an isolation substrate (334) having the P-well body (308a) and the N-well (330a) disposed thereon. According to a further embodiment, the second NPN bipolar device (324) may be a secondary contribution NPN bipolar device to the first NPN bipolar device (306). According to a further embodiment, the signal pad connection (323) may be connected to a positive supply while the ground connection (318) may be connected to a signal pad to be protected. According to a further embodiment, a first resistor (328) may be formed in the P-well body (308a) that couples the bases of the first NPN bipolar device (306) and the second NPN bipolar device (324) to the first P+diffusion contact (314); and a second resistor (326) may be formed in the P-well body (308a) that couples the bases of the first NPN bipolar device (306) and the second NPN bipolar device (324) to the second P+diffusion contact (320). According to a further embodiment, the first base resistor (328) may be an unwanted parasitic resistor while the second base-emitter resistor (326) may be a desired parasitic resistor. According to a further embodiment, the second resistor (326) may be higher in resistance than the first resistor (328) for maximizing mutual triggering of the plurality of ESD fingers (400a). According to a further embodiment, the gate formed by the polysilicon layer (322) may be coupled to the distributed base connection (316). According to a further embodiment, the gate formed by the polysilicon layer (322) may be coupled to the ground connection (318) through a resistor. According to a further embodiment, the gate formed by the polysilicon layer (322) may be coupled to an ESD clamp triggering circuit (110).
According to a further embodiment, each of the plurality of ESD fingers (400b) may comprise: an NMOS device (312) comprising a high voltage (HV) drain formed by a deep N-well (330b) surrounding a P-well body (308b) and coupled to the signal pad connection (323) through an N+diffusion contact (332) in the deep N-well (330b), a gate formed by the polysilicon layer (322) over the P-well body (308b) and insulated therefrom by a thin oxide layer therebetween, and a source formed by a first N+diffusion contact (302) in the P-well body (308b) and coupled to the distributed base connection (316); a first NPN bipolar device (306) comprising a collector formed by the deep N-well (330b), a base formed by the P-well body (308b), and an emitter formed by a second N+diffusion contact (310) in the P-well body (308b) and coupled to the ground connection (318); a second NPN bipolar device (324) comprising a collector formed by the deep N-well (330b), a base formed by the P-well body (308b), and an emitter formed by the first N+diffusion contact (302) in the P-well body (308b) and coupled to the distributed base connection (316); a first P+diffusion contact (314) in the P-well body (308b) and coupled to the distributed base connection (316), wherein the first P+diffusion contact (314) may be butted proximate to the first N+diffusion contact (302); a second P+diffusion contact (320) in the P-well body (308b) and coupled to the ground connection (318); and a P-substrate (308) having the deep N-well (330b) formed therein. According to a further embodiment, the second NPN bipolar device (324) may be a secondary contribution NPN bipolar device to the first NPN bipolar device (306). According to a further embodiment, the first base resistor (328) may be an unwanted parasitic resistor while the second base-emitter resistor (326) may be a desired parasitic resistor. According to a further embodiment, the second resistor (326) may be higher in resistance than the first resistor (328) for maximizing mutual triggering of the plurality of ESD fingers (400b). According to a further embodiment, the gate formed by the polysilicon layer (322) may be coupled to the distributed base connection (316). According to a further embodiment, the gate formed by the polysilicon layer (322) may be coupled to the distributed base connection (316) through a resistor. According to a further embodiment, the gate formed by the polysilicon layer (322) may be coupled to the ground connection (318) through a resistor. According to a further embodiment, the gate formed by the polysilicon layer (322) may be coupled to the ground connection (318). According to a further embodiment, the gate formed by the polysilicon layer (322) may be coupled to an ESD clamp triggering circuit (110). According to a further embodiment, the signal pad connection (323) may be connected to a positive supply while the ground connection (318) may be connected to a signal pad to be protected.
According to another embodiment, an apparatus for electrostatic discharge (ESD) protection of an integrated circuit pad may comprise: a plurality of ESD fingers (400c), wherein each of the plurality of ESD fingers (400c) may be coupled to a signal pad connection (423), a distributed base connection (416), a polysilicon layer (422) coupled to a gate connection, and a ground connection (418); wherein each of the plurality of ESD fingers (400c) may comprise: a PMOS device (412) comprising a drain formed by a P-well (430c) formed in a deep N-well (408c) and coupled to a ground pad (418) through a P+diffusion contact (432) in the P-well (430c), a gate formed by the polysilicon layer (422) over the deep N-well (408c) and insulated therefrom by a thin oxide layer therebetween, and a source formed by a first P+diffusion contact (402) in the deep N-well (408c) and coupled to the distributed base connection (416); a first PNP bipolar device (406) comprising a collector formed by the P-well (430c), a base formed by the deep N-well (408c), and an emitter formed by a second P+diffusion contact (410) in the deep N-well (408c) and coupled to the signal pad connection (423); a second PNP bipolar device (424) comprising a collector formed by the P-well (430c), a base formed by the deep N-well (408c), and an emitter formed by the first P+diffusion contact (402) in the deep N-well (408c) and coupled to the distributed base connection (416); a first N+diffusion contact (414) in the deep N-well (408c) and coupled to the distributed base connection (416), wherein the first N+diffusion contact (414) may be butted proximate to the first P+diffusion contact (402); and a second N+diffusion contact (420) in the deep N-well (408c) and coupled to the signal pad connection (423). According to a further embodiment, the second PNP bipolar device (424) may be a secondary contribution PNP bipolar device to the first PNP bipolar device (406). According to a further embodiment, a first resistor (428) may couple the bases of the first PNP bipolar device (406) and the second PNP bipolar device (424) to the first N+diffusion contact (414). According to a further embodiment, a second resistor (426) may couple the bases of the first PNP bipolar device (406) and the second PNP bipolar device (424) to a second N+diffusion contact (420), wherein the second resistor (426) may be higher in resistance than the first resistor (428) for maximizing mutual triggering of the plurality of ESD fingers (400c). According to a further embodiment, the first base resistor (428) may be an unwanted parasitic resistor while the second base-emitter resistor (426) may be a desired parasitic resistor. According to a further embodiment, the gate formed by the polysilicon layer (422) may be coupled to the distributed base connection (416). According to a further embodiment, the gate formed by the polysilicon layer (422) may be coupled to the distributed base connection (416) through a resistor. According to a further embodiment, the gate formed by the polysilicon layer (422) may be coupled to the ground connection (418) through a resistor. According to a further embodiment, the gate formed by the polysilicon layer (422) may be coupled to the ground connection (418).
According to a further embodiment, the gate formed by the polysilicon layer (422) may be coupled to an ESD clamp triggering circuit (110). According to a further embodiment, the signal pad connection (423) may be connected to a positive supply while the ground connection (418) may be connected to a signal pad to be protected. According to yet another embodiment, an apparatus for electrostatic discharge (ESD) protection of an integrated circuit pad may comprise: a plurality of ESD fingers (400d), wherein each of the plurality of ESD fingers (400d) may be coupled to a signal pad connection (423), a distributed base connection (416), a polysilicon layer (422) coupled to a gate connection, and a ground connection (418); wherein each of the plurality of ESD fingers (400d) may comprise: a PMOS device (412) comprising a drain formed by a first P+diffusion contact (432) formed in a P-well (430d) formed in a deep N-well (408d) and coupled to a ground pad (418), a gate formed by the polysilicon layer (422) over the deep N-well (408d) and insulated therefrom by a thin oxide layer therebetween, and a source formed by a second P+diffusion contact (442) in the deep N-well (408d) and coupled to the signal pad connection (423); an NPN bipolar device (406) comprising a collector formed by the deep N-well (408d) and coupled to the signal pad connection (423) through a second N+diffusion contact (444), a base formed by the P-well (430d) that may be formed in the deep N-well (408d), and an emitter formed by a first N+diffusion (410) built inside the P-well (430d) and coupled to the ground connection (418); a third P+diffusion contact (414) in the P-well (430d) and coupled to the distributed base connection (416); a first P+diffusion contact (432) formed in the P-well (430d) and acting as the base contact (426) to the ground connection (418); and a P-substrate having the deep N-well (408d) formed therein. According to a further embodiment, a first resistor (428) may be formed between the base of the NPN bipolar device (406) and the first P+diffusion contact (414). According to a further embodiment, a second resistor (426) may be formed between the base of the NPN bipolar device (406) and the first P+diffusion contact (432) formed in the P-well (430d) and coupled to the ground pad connection (418), wherein the second resistor (426) may be higher in resistance than the first resistor (428) for maximizing mutual triggering of the plurality of ESD fingers (400d). According to a further embodiment, the first base resistor (428) may be an unwanted parasitic resistor while the second base-emitter resistor (426) may be a desired parasitic resistor. According to a further embodiment, the gate formed by the polysilicon layer (422) may be coupled to the pad connection (423). According to a further embodiment, the gate formed by the polysilicon layer (422) may be coupled to the pad connection (423) through a resistor. According to a further embodiment, the gate formed by the polysilicon layer (422) may be coupled to the pad connection (423). According to a further embodiment, the gate formed by the polysilicon layer (422) may be coupled to an ESD clamp triggering circuit (110). According to a further embodiment, the signal pad connection (423) may be connected to a positive supply while the ground connection (418) may be connected to a signal pad to be protected.
According to still another embodiment, an apparatus for electrostatic discharge (ESD) protection of an integrated circuit pad may comprise: a plurality of ESD fingers (900), wherein each of the plurality of ESD fingers (900) may be coupled to a signal pad connection (923), a distributed base connection (916), and a ground connection (918); wherein each of the plurality of ESD fingers (900) may comprise: an NPN bipolar device (906) comprising a collector formed by a first N+diffusion contact (932) formed in an N-well (930) formed in a P-substrate (908) and coupled to the signal pad connection (923), a base formed in the P-substrate (908), and an emitter formed by a second N+diffusion contact (910) formed in the P-substrate (908) and coupled to the ground connection (918). According to a further embodiment, a first resistor (928) may be formed between the base of the NPN bipolar device (906) and a first P+diffusion contact (914). According to a further embodiment, a second resistor (926) may be formed between the base of the NPN bipolar device (906) and a second P+diffusion contact (920) coupled to the ground connection (918), wherein the second resistor (926) may be higher in resistance than the first resistor (928) for maximizing mutual triggering of the plurality of ESD fingers (900). According to a further embodiment, the first base resistor (928) may be an unwanted parasitic resistor while the second base-emitter resistor (926) may be a desired parasitic resistor. According to another embodiment, an apparatus for electrostatic discharge (ESD) protection of an integrated circuit pad may comprise: a plurality of ESD fingers (1000), wherein each of the plurality of ESD fingers (1000) may be coupled to a signal pad connection (1023), a distributed base connection (1016), and a ground connection (1018); wherein each of the plurality of ESD fingers (1000) may comprise: a PNP bipolar device (1006) comprising a collector formed by the P-substrate (1008) and coupled to the ground connection (1018) through a first P+diffusion contact (1032) formed in the P-substrate (1008), a base formed by an N-well (1030) and coupled to the distributed base connection (1016) through a first N+diffusion contact (1014) formed in the N-well (1030) and coupled to the pad connection (1023) through a second N+diffusion contact (1020) formed in the N-well (1030), and an emitter formed by a second P+diffusion contact (1010) formed in the N-well base (1030) and coupled to the signal pad connection (1023). According to a further embodiment, a first resistor (1028) may be formed between the base of the PNP bipolar device (1006) and the first N+diffusion contact (1014). According to a further embodiment, a second resistor (1026) may be formed between the base of the PNP bipolar device (1006) and a second N+diffusion contact (1020) formed in the N-well (1030) and coupled to the signal pad connection (1023), wherein the second resistor (1026) may be higher in resistance than the first resistor (1028) for maximizing mutual triggering of the plurality of ESD fingers (1000). According to a further embodiment, the first base resistor (1028) may be an unwanted parasitic resistor while the second base-emitter resistor (1026) may be a desired parasitic resistor.
According to another embodiment, an apparatus for electrostatic discharge (ESD) protection of an integrated circuit pad may comprise: a plurality of ESD fingers (1100), wherein each of the plurality of ESD fingers (1100) may be coupled to a signal pad connection (1123), a distributed base connection (1116), and a ground connection (1118); wherein each of the plurality of ESD fingers (1100) may comprise: an NPN bipolar device (1106) comprising a collector formed by a deep N-well (1108) and coupled to the signal pad connection (1123) through a second N+diffusion contact (1144), a base formed by a P-well (1130) formed in the deep N-well (1108) and coupled to the distributed base connection (1116) through a first P+diffusion contact (1114) formed in the P-well (1130) and coupled to the ground connection (1118) through a second P+diffusion contact (1132) formed in the P-well (1130), and an emitter formed by a first N+diffusion (1110) formed inside the P-well (1130) and coupled to the ground connection (1118); and a P-substrate having the deep N-well (1108) formed therein. According to a further embodiment, a first resistor (1128) may be formed between the base of the NPN bipolar device (1106) and the first P+diffusion contact (1114). According to a further embodiment, a second resistor (1126) may be formed between the base of the NPN bipolar device (1106) and the second P+diffusion contact (1132) formed in the P-well (1130), wherein the second resistor (1126) may be higher in resistance than the first resistor (1128) for maximizing mutual triggering of the plurality of ESD fingers (1100). According to a further embodiment, the first base resistor (1128) may be an unwanted parasitic resistor while the second base-emitter resistor (1126) may be a desired parasitic resistor. According to a further embodiment, the signal pad connection (1123) may be connected to a positive supply while the ground connection (1118) may be connected to a signal pad to be protected.
BRIEF DESCRIPTION OF THE DRAWINGS

A more complete understanding of the present disclosure may be acquired by referring to the following description taken in conjunction with the accompanying drawings wherein:

Figure 1 illustrates a schematic block diagram of an electrostatic discharge (ESD) protection circuit having a plurality of ESD protection fingers fabricated in an integrated circuit die, according to the teachings of this disclosure;

Figure 2 illustrates a schematic cross-section elevational diagram of a prior art grounded gate N-type metal-oxide-semiconductor (NMOS) ESD protection circuit fabricated in an integrated circuit die;

Figure 3 illustrates a schematic cross-section elevational diagram of a grounded gate NMOS ESD protection circuit fabricated in an integrated circuit die, according to a specific example embodiment of this disclosure;

Figure 4A illustrates a schematic cross-section elevational diagram of a grounded gate NMOS ESD protection circuit on an isolated substrate and fabricated in an integrated circuit die, according to another specific example embodiment of this disclosure;

Figure 4B illustrates a schematic cross-section elevational diagram of a grounded gate lateral N-type diffusion metal oxide semiconductor (NDMOS) ESD protection circuit in an integrated circuit die, according to another specific example embodiment of this disclosure;

Figure 4C illustrates a schematic cross-section elevational diagram of a "grounded gate" PDMOS ESD protection circuit in an integrated circuit die, according to another specific example embodiment of this disclosure;

Figure 4D illustrates a schematic cross-section elevational diagram of a "grounded gate" PDMOS ESD protection circuit where the distributed base connection is moved to the drain side in an integrated circuit die, according to another specific example embodiment of this disclosure;

Figure 5 illustrates a schematic circuit diagram of the prior art grounded gate NMOS ESD protection circuit shown in Figure 2;

Figure 6 illustrates a schematic circuit diagram of the grounded gate NMOS ESD protection circuits shown in Figures 3 and 4;

Figure 7 illustrates a schematic isometric diagram of the grounded gate NMOS ESD protection circuit shown in Figure 3;

Figure 8 illustrates a schematic cross-section elevational diagram of a grounded gate NMOS ESD protection circuit showing a plurality of ESD fingers fabricated in an integrated circuit die, according to specific example embodiments of this disclosure;

Figure 9 illustrates a schematic cross-section elevational diagram of an NPN only ESD protection circuit showing a plurality of ESD fingers fabricated in an integrated circuit die, according to specific example embodiments of this disclosure;

Figure 10 illustrates a schematic cross-section elevational diagram of a PNP ESD protection circuit showing a plurality of ESD fingers on an isolated substrate and fabricated in an integrated circuit die, according to specific example embodiments of this disclosure; and

Figure 11 illustrates a schematic cross-section elevational diagram of an isolated NPN ESD protection circuit fabricated in an integrated circuit die, according to another specific example embodiment of this disclosure.

While the present disclosure is susceptible to various modifications and alternative forms, specific example embodiments thereof have been shown in the drawings and are herein described in detail.
It should be understood, however, that the description herein of specific example embodiments is not intended to limit the disclosure to the particular forms disclosed herein; on the contrary, this disclosure is to cover all modifications and equivalents as defined by the appended claims.

DETAILED DESCRIPTION

High ESD energy bypassing in ESD protection devices requires wide devices that can only be achieved through multiple elementary devices connected in parallel. Such elementary devices will hereinafter be referred to as "fingers." Maximum efficiency is achieved when all of these fingers in parallel are triggering together. Under certain discharge conditions only a few fingers, or even a single finger, are triggered. Thus ESD protection efficiency is dramatically reduced.

ESD protection mainly relies on the inherent companion bipolar device to the MOS device. Usually grounded gate N-type metal-oxide-semiconductor (NMOS) devices are used as ESD devices. The inherent bipolar companion device to the grounded gate NMOS device is an NPN device. A grounded gate (GG) NMOS device is an NMOS device having its gate connected to its source terminal directly or through a grounding gate resistance, the source node being connected to the ground. The drain and source nodes of the NMOS transistor, which are N-type doped islands diffused into a P-substrate (or Pbody), constitute the collector and emitter terminals of the NPN bipolar companion device, while the P-substrate (or Pbody) constitutes the base of this NPN bipolar companion device. The greater the base voltage, the greater the collector current.

The GGNMOS device operates as follows when a positive ESD event occurs. Applying the positive discharge to the drain of the GGNMOS device induces a fast increase of the drain voltage of this device. Very quickly the drain voltage reaches the break-down voltage of the drain-to-Pbody junction. This induces a break-down current into the Pbody that flows to the ground through the Pbody contact (P+ diff tie). The current flow induces a voltage drop into the Pbody due to the inherent resistance of the Pbody. This voltage drop induces a base-emitter current as soon as it reaches a junction voltage (~0.7 V) in the source area, which is as well the companion NPN emitter region. This base-emitter current is amplified by the beta factor of the companion NPN device, thereby inducing an increase of the current flowing into the Pbody as well as of the voltage drop. As a matter of fact the base current increases, thereby inducing a positive feedback effect commonly known as an "avalanche effect." From this point the current increases very quickly and the drain voltage collapses down to a voltage hereinafter referred to as a "holding voltage." The drain voltage from which the avalanche effect starts is hereinafter referred to as a "snapback voltage" or "triggering voltage."

All fingers must trigger simultaneously for maximizing the ESD robustness. However, the ESD current concentrates into the fastest fingers since the base voltage of the fastest fingers increases faster than the base voltage of the slowest ones due to the larger current in the fastest fingers. Non-uniform finger triggering results in degraded HV ESD protection: high voltage ESD protection circuits usually have a hold voltage dramatically lower than the snapback voltage. Thus once one finger triggers, it tends to sink the entire current since the voltage on the pin drops to a level from which the other fingers cannot snap back.
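The triggering sequence just described can be condensed into a single condition (an illustrative formulation only; the symbols I_bd, R_body, and V_BE(on) are shorthand introduced here and are not reference designators from the figures): the finger snaps back once the break-down current flowing through the inherent Pbody resistance lifts the local base potential to a junction voltage,

\[ V_{\text{base}} = I_{bd} \cdot R_{\text{body}} \;\ge\; V_{BE(\text{on})} \approx 0.7\ \text{V} \]

after which the base-emitter current is multiplied by the beta of the companion NPN device, the resulting collector current adds to the current flowing in the Pbody, and the positive feedback loop collapses the drain voltage from the snapback voltage down to the holding voltage.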
Techniques, like drain-to-gate capacitive coupling, exist for improving simultaneous triggering of the fingers. Increasing the ballast resistors also helps. However, under certain discharge conditions, e.g., IEC 61000-4-2 with a 220 pF load capacitor on the signal pad, these techniques are no longer sufficient. The main reason for which some fingers are not triggering is that the minimum energy (base current) required to trigger these fingers wasn't injected and accumulated in their bases when the faster finger starts snapping back and dropping the integrated circuit signal pad (pin) voltage. This base current is injected through the leakage current of the drain junction when its voltage is close to its break-down. Thus dropping the signal pad (pin) voltage stops the leakage and the base current injection.

According to the teachings of this disclosure, mutual triggering of the fingers is improved by homogenizing the base voltage of each finger. This implies that all of the bases are to be connected together, which is not the case with the prior art. This is achieved by modifying the ground connection as shown in Figure 3. The N+ source 302 and local P+ body connection 314 are no longer connected to the ground line as done in the prior art through N+ source/emitter 202 and local P+ body/base connection 214 (Figure 2). The N+ source 302 and local P+ body connection 314 become the local base contact, which is connected to a "distributed base" 316. Ideally all of the local bases should be connected together with "strong metal" (low resistance) connections. However there is inherent resistance in the distributed base connection 316 that is represented by a series resistance 328 (also, e.g., the ground return resistance 228 shown in Figure 2). Care must be taken to minimize the series resistance 328. This is why the N+ source 302 and local P+ body/base diffusion are preferably butted together (placed next to each other). An N+ diffusion 310 is added in order to create the emitter of the main ESD NPN bipolar device 306. The emitter contact is created by adding the N+ diffusion 310. The inherent companion NPN device 324 thereby becomes mainly parasitic.

A weak ground return path needs to be created in order to prevent small leakage current in the P-substrate/Pbody from triggering the ESD protection. This weak ground return path is achieved through the P+ substrate contact 320 and the inherent base grounding resistance. In order to maximize mutual finger triggering through the base current, this base grounding resistance needs to be large compared to the series resistance 328. This will minimize the resistive divider effect between the series resistance 328 of the fastest finger and the parallel association of the grounding resistances 326 of all other fingers (a numerical sketch of this divider is given at the end of this description). This weak ground-return path / large grounding resistance 326 is achieved by placing only a few minimally sized grounding P+ substrate contact islands 320 (Figure 7). By contrast the N+ emitter diffusion 310 shall be wide (Figure 7). One way to create the weak ground return path and wide emitter diffusion in a minimal area is by creating a few minimally sized P+ diffusion islands inside the wide N+ emitter diffusion, as shown in Figure 7. The local base contact in each finger is located at a position where the base voltage significantly increases when the ESD current increases. Thus when a finger is triggered its local base voltage will tend to significantly increase.
Since all of the finger bases are connected in parallel, this local voltage increase will forward bias the base-emitter junctions of the other fingers, thus triggering them all. Sharing the triggering current from the fastest ESD finger with the slower ones ensures that all fingers are triggered during an ESD event. In some processes, like an SOI process, the N-Well HV drain is no longer built inside the P-substrate, but is butted to the P-Well/Pbody 308 of the GGNMOS (which is also the base of the NPN 306, as explained hereinabove) as shown in Figure 4A. This has no significant impact on the overall behavior of the invention, according to the teachings of this disclosure. GGNMOS ESD protection is based on an NMOS device (LV/HV) having its gate grounded (tied to its source body potential). The NMOS device has an intrinsic NPN companion device. The NMOS body is the base of the NPN companion device, while the drain and source constitute the collector and emitter terminals, respectively. This NMOS device is normally off, but when its drain voltage increases and reaches the drain-to-body breakdown, carriers are injected into the base of the companion NPN device and thus forward bias its base-emitter junction. This creates a collector current that injects more current into the base; once the resulting base voltage is equal to or greater than the junction voltage, the avalanche effect is enabled and the device snaps back. From this moment the current increases very quickly. Referring now to the drawings, the details of a specific example embodiment are schematically illustrated. Like elements in the drawings will be represented by like numbers, and similar elements will be represented by like numbers with a different lower case letter suffix. Referring to Figure 1, depicted is a schematic block diagram of electrostatic discharge (ESD) protection having a plurality of ESD protection fingers fabricated in an integrated circuit die, according to the teachings of this disclosure. The base current of each finger 300 is shared between all the fingers 300a to 300n, thereby improving mutual triggering thereof. When one finger 300 is triggered, its base current starts increasing dramatically due to the avalanche effect in this finger. The excess current is distributed to the other fingers 300, helping them to reach their snapback point. This can be easily implemented with standard HV NMOS devices. The source/base nodes (sb) of the fast and slow fingers 300 are preferably connected together through a strong metal (i.e., very low resistance) distributed base connection 316. Without the strong metal distributed base connection 316, the fast and slow fingers 300 may become decoupled upon an ESD event. Simulated voltage-time graphs of current sharing were run. In the prior art ESD fingers (Figure 2), the fast finger showed 1st/2nd Ipk 3.1/2.9 A, Ppk 125/75 W, for a total energy of 760 nanojoules; and the slow finger(s) showed 1st/2nd Ipk 2/0.6 A, Ppk 70/10 W, for a total energy of 380 nanojoules. In the ESD fingers 300 (Figure 3), according to the teachings of this disclosure, current sharing in the enhanced mutual triggering of the ESD fingers 300 indicated for the fast finger 1st/2nd Ipk 2.1/1.1 A, Ppk 90/20 W, for a total energy of 510 nanojoules; and for the slow finger(s) 1st/2nd Ipk 2/1 A, Ppk 70/15 W, for a total energy of 420 nanojoules. This is a significant difference from the prior art current sharing fingers 200 (Figure 2). These simulations showed very significant improvement in the homogeneity of finger currents.
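For a quick sanity check on the quoted simulation energies, the fast-to-slow energy ratio can serve as a simple homogeneity figure of merit (a ratio of 1.0 would be perfect sharing); the short computation below uses only the numbers quoted above.

```python
# Per-finger energies quoted above, in nanojoules.
prior_art = {"fast": 760.0, "slow": 380.0}   # Figure 2 fingers
disclosed = {"fast": 510.0, "slow": 420.0}   # Figure 3 fingers

for name, energy in (("prior art", prior_art), ("disclosed", disclosed)):
    ratio = energy["fast"] / energy["slow"]
    print(f"{name}: fast/slow energy ratio = {ratio:.2f}")
# Prints 2.00 for the prior art versus about 1.21 for the disclosed
# circuit, i.e., markedly more homogeneous finger currents.
```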
However, simulations do not take into account self-heating, which increases finger current mismatch. Current mismatch depends mainly on the transit time of the bipolar devices (e.g., 0.35 ns fast fingers, 0.5 ns slow fingers). The 220 picofarad load capacitance 102 required by the automotive industry, combined with the package plus printed circuit board (PCB) line inductance 104, creates local energy storage plus ringing that further increases the stress on the ESD protection devices (as connected to integrated circuit signal pads 106). Thus the best possible homogeneity of ESD finger currents is desired. The aforementioned ESD circuit improvements, which were motivated by automotive applications, also apply to finger current homogeneity improvement for any other type of system as well. According to the current simulations, the ESD capability may be increased by about 50 percent. Referring to Figures 2 and 3, depicted are schematic cross-section elevational diagrams, for comparison purposes, of a prior art grounded gate (GG) NMOS ESD protection circuit (Figure 2) and, according to the teachings of this disclosure, a new, novel and non-obvious grounded gate (GG) NMOS ESD protection circuit (Figure 3). As shown in Figure 2, a high voltage (HV) NMOS device 212 has its source formed from an N+diffusion area local butted source/emitter contact 202. This butted source/emitter contact 202 is connected to a ground connection 218. The drain of the NMOS device 212 is formed by the N-well 230 and is connected to a signal pad 223. A polysilicon layer 222 over a thin oxide forms the gate of the HV NMOS device 212. The gate of the HV NMOS device 212 may be connected to the ground connection 218 through a resistor (not shown) or to a triggering circuit 110, e.g., see Figure 1. A P-body diode 204 is formed between the N-well 230 and the P-substrate 208, which also forms the base of the bipolar transistor 224. Breakdown current will flow through the P-body diode 204. As shown in Figure 3, a high voltage (HV) NMOS device 312 drain is connected to a signal pad 323 through an N+diffusion contact 332. A source thereof is formed from an N+diffusion area local butted source/emitter contact 302. This butted source/emitter contact 302 is not connected to the ground connection 318 as shown in Figure 2 (prior art). Instead the butted source/emitter contact 302 and a local P+diffusion butting contact 314 are connected to a distributed base connection 316. An N+diffusion contact 310 is placed next to the local P+diffusion butting contact 314 and becomes the emitter of the NPN bipolar device 306. The collector of the NPN bipolar device 306 is formed with the N-well 330, which also forms the drain of the HV NMOS device 312. This HV-drain/collector is connected to the signal pad 323 through the N+diffusion contact 332. A polysilicon layer 322 over a thin oxide forms the gate of the HV NMOS device 312. A second substrate P+diffusion contact 320 is connected to the ground connection 318 and is added next to the N+diffusion contact 310. The N+diffusion contact 310 forms the emitter of the NPN bipolar device 306. The substrate resistance between the second substrate P+diffusion contact 320 and the N+diffusion contact (emitter) 310 implements a base-to-emitter resistor 326. The base-to-emitter resistor 326 is needed to prevent early/unwanted triggering on pad glitches. As mentioned hereinabove, the base-to-emitter resistor 326 shall be weak (higher resistance) to maximize mutual triggering of the fingers.
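The reason the base grounding resistance must be large relative to the series resistance 328, as stated above, can be seen from the resistive divider it forms with the triggered finger's local base voltage. The sketch below is a rough behavioral model; all resistor values and the finger count are assumptions for illustration, not values from this disclosure.

```python
# Resistive divider seen by the local base voltage of the fastest finger:
# its series resistance (328) in series with the parallel combination of
# the base grounding resistances (326) of the other fingers. All values
# below are illustrative assumptions.

def shared_base_voltage(v_local: float, r_series: float,
                        r_ground: float, n_other_fingers: int) -> float:
    """Base voltage left over for the slower fingers after the divider."""
    r_parallel = r_ground / n_other_fingers  # n equal resistors in parallel
    return v_local * r_parallel / (r_series + r_parallel)

# A weak ground return (large r_ground) preserves most of the 1 V local
# base rise for the other fingers; a strong one divides it away.
for r_ground in (50.0, 500.0, 5000.0):       # ohms
    v = shared_base_voltage(v_local=1.0, r_series=5.0,
                            r_ground=r_ground, n_other_fingers=9)
    print(f"r_ground = {r_ground:6.0f} ohm -> shared base voltage {v:.3f} V")
```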
The gate of the HV NMOS device 312 may be connected to the distributed base connection 316 through a resistor or to an ESD clamp triggering circuit 110, e.g., see Figure 1. HV-NMOS current induced through appropriate gate coupling significantly helps in delivering a base current into each finger 300, according to the teachings of this disclosure. A secondary contribution NPN bipolar device 324 is shown in Figure 3 with dashed lines. The N+diffusion area local butted source/base contact 302 is the emitter of the secondary contribution NPN device 324. When an ESD event occurs, it first triggers the HV NMOS device 312, which pulls up the local bases and thereby turns on the NPN bipolar devices 306 and 324 that will handle most of the ESD event current. Since all the local bases are connected in parallel through the distributed strong base connection 316, the first HV NMOS device 312 that triggers will generate the base current for all of the NPN bipolar devices 306 and 324. Therefore, the respective NPN bipolar devices 306 and 324 of the other ESD fingers will be turned on as well within a small time delay. This may not be as efficient as all ESD fingers simultaneously and naturally triggering, but is much better than triggering only a single one or just a few ESD fingers. All the gates of the HV NMOS devices 312 may be connected together and grounded through a resistor ensuring a time constant (Rground * Cgate) of preferably about 30 microseconds. Additionally, drain-to-gate coupling may be required. Adaptive gate coupling may be used as well. Adding an N+diffusion contact 310 for creating the emitter of the main ESD NPN device 306 and ground return contacts 320 increases the area of the unit ESD cell. Increasing the area required for the unit ESD cell is counterintuitive to an integrated circuit designer, who will avoid increased ESD cell area since area is critical in integrated circuit designs. However, the benefit of homogeneous finger triggering at substantially the same time is significantly higher than being able to place more fingers 300 in a given integrated circuit die area. Practically, the semiconductor device structure shown in Figure 3 may be very sensitive. Thus a base-to-emitter resistor 326 is preferred in order to prevent the ESD device(s) from triggering on a glitch. Since all of the base-to-emitter resistors 326 and 328 are connected in parallel through the distributed base connection 316, the effective base-emitter resistance, Rbe, is low, thereby requiring significant current flowing in the faster finger 300 for triggering the whole ESD structure. Ballasting may be used in both collector/drain and emitter sides for minimizing effects of local heating and/or local thermal runaway. Referring to Figure 4A, depicted is a schematic cross-section elevational diagram of a grounded gate NMOS ESD protection circuit on an isolated substrate and fabricated in an integrated circuit die, according to another specific example embodiment of this disclosure. The circuit shown in Figure 4A functions in substantially the same way as the grounded gate NMOS ESD protection circuit shown in Figure 3 with the addition of an isolation substrate 334, e.g., as in triple well or SOI processes where the N-well HV drain is no longer built inside the P-substrate but is butted to the P-Well/Pbody 308 of the GGNMOS. Figure 4B shows a cross-section elevational diagram of a grounded gate lateral N-type diffusion metal oxide semiconductor (NDMOS) ESD protection circuit.
These various implementations have no significant impact on the overall behavior of the invention, according to the teachings of this disclosure. According to the teachings of this disclosure, all embodiments described and claimed herein may be applied to HV PMOS or HV PDMOS technologies as well, as in the embodiment shown in Figure 4C. All devices previously described herein become complementary: the high voltage (HV) DMOS device 412 drain is connected to the ground pad 418 through a P+diffusion contact 432. A source thereof is formed from a P+diffusion area local source contact 402 butted with the local N+diffusion base contact 414 and is connected to the distributed base connection 416. A P+diffusion contact 410 is placed next to the local N+diffusion butting contact 414 and becomes the emitter of the PNP bipolar device 406. The collector of the PNP bipolar device 406 is formed with the P-Well 430, which also forms the drain of the HV MOS device 412. This HV-drain/collector is connected to the ground pad 418 through the P+diffusion contact 432. A polysilicon layer 422 over a thin oxide forms the gate of the HV MOS device 412. A second substrate N+diffusion contact 420 is connected to the pad connection 423 and is added next to the P+diffusion contact 410. The P+diffusion contact 410 forms the emitter of the PNP bipolar device 406. The substrate resistance between the second substrate N+diffusion contact 420 and the P+diffusion contact (emitter) 410 implements a base-to-emitter resistor 426. The base-to-emitter resistor 426 is needed to prevent early/unwanted triggering on pad glitches. As mentioned hereinabove for the HV NMOS implementation, the base-to-emitter resistor 426 shall be weak (higher resistance) to maximize mutual triggering of the fingers. The gate of the HV PMOS device 412 may be connected to the distributed base connection 416 through a resistor (not shown) or to an ESD clamp triggering circuit 110, e.g., see Figure 1. HV-PMOS current induced through appropriate gate coupling significantly helps in delivering a base current into each finger 400c, according to the teachings of this disclosure. Usually the PNP companion device associated with PMOS transistors is less efficient during an ESD event than the NPN companion device of an NMOS transistor. Moving the ESD protection, according to the teachings of this disclosure, to the drain side of the HV-PMOS transistor is shown in Figure 4D. Figure 4D depicts a schematic cross-section elevational diagram of a central drain "grounded gate" dual HV-PMOS ESD protection circuit fabricated in an integrated circuit die, according to another specific example embodiment of this disclosure. The HV drain P-Well 430 of the dual HV-PMOS transistors 412a and 412b is extended out of the central drain contact 432 in order to be able to add new diffusions therein. Two P+diffusions 414a and 414b are added sufficiently far from the P+diffusion central drain contact 432 in order to be able to implement the N+diffusion 410a between P+diffusion 414a and the central drain contact 432, and the N+diffusion 410b between P+diffusion 414b and the central drain contact 432. N+diffusions 410a and 410b implement the emitters of the created NPN transistors 406a and 406b, while the central drain contact 432 of the dual HV-PMOS transistors 412a and 412b also becomes a return contact for the base-to-emitter resistors 426a and 426b of these added NPN transistors 406a and 406b.
The N+diffusion body contacts 444a and 444b for the HV-PMOS transistors 412a and 412b also act as collector contacts for the NPN transistors 406a and 406b, while the added P+diffusions 414a and 414b are used as contacts for the distributed base 416. The triggering current for the ESD protection is created either through leakage current of the drain-to-body diodes 404a and 404b and/or MOS current via appropriate gate coupling for the HV-PMOS transistors 412a and 412b. Referring to Figures 5 and 6, depicted are schematic circuit diagrams of the intrinsic NPN devices in the prior art ESD protection shown in Figure 2, and the intrinsic NPN devices in the new GGNMOS ESD protection shown in Figure 3, respectively. The secondary contribution NPN devices 324 are depicted with dashed lines in Figure 6. The main reason why some fingers do not trigger is that the minimum energy (base current) required to trigger these fingers was not injected and accumulated in their bases before the faster finger starts snapping back and dropping the pad/pin voltage. This base current is injected through the leakage current of the drain junction of the GGNMOS device 312 when the voltage is close to its break-down voltage. Thus dropping the pad/pin voltage stops the leakage and base current injection. The circuit implementations shown in Figures 3 and 6 share the base currents of each finger between all the fingers, thereby improving mutual triggering of the ESD fingers 300. When one finger is triggering, its base current starts increasing dramatically due to the avalanche effect in this finger. The excess current is distributed to the other fingers, thereby helping them to reach their snapback point. Referring to Figure 7, depicted is a schematic isometric diagram of the grounded gate NMOS ESD protection circuit shown in Figure 3. As described hereinabove, the weak ground-return path / large grounding resistance 326 may be achieved by placing only a few minimally sized grounding P+ substrate contact islands 320 into the wide N+ emitter diffusion 310. This facilitates creation of a weak ground return path and a wide emitter diffusion using a minimal area of the integrated circuit die. Referring to Figure 8, depicted is a schematic cross-section elevational diagram of a grounded gate NMOS ESD protection circuit showing a plurality of ESD fingers fabricated in an integrated circuit die, according to specific example embodiments of this disclosure. A common N-well 330 may be part of at least two finger structures as shown for fingers 300a and 300b. This combination may be repeated over an area of the integrated circuit die. An advantage of the present invention is that it maximizes ESD robustness of HV ESD protection through homogeneous current sharing between ESD fingers. Further features and advantages of the present invention include, but are not limited to: 1) dramatic improvement of current matching between ESD fingers, 2) maximized efficiency of HV ESD protection, 3) compliance with bulk and trench isolated (SOI) technologies, 4) applicability to CAN, LIN and many other HV products, and 5) meeting very stringent requirements, e.g., automotive applications. According to the teachings of this disclosure, all embodiments described and claimed herein apply as well to bipolar-only protection (Figure 9). A bipolar-only implementation is achieved when the polysilicon gates 322 and source contacts 302 are removed. Only the distributed base contacts 316 are maintained.
The secondary bipolar transistor 324 disappears, but triggering through the gate is no longer possible. Referring to Figure 9, depicted is a schematic cross-section elevational diagram of an NPN-only ESD protection circuit showing a plurality of ESD fingers fabricated in an integrated circuit die, according to specific example embodiments of this disclosure. The collectors of the NPN bipolar devices 906a and 906b are formed with the N-well 930, which is coupled to the signal pad 923 through the N+diffusion 932. The contacts formed by the N+diffusions 910a and 910b are built in the P-substrate 908 and form the emitters of the NPN bipolar devices 906a and 906b. First P+diffusion substrate contacts 914a and 914b are placed between the N-well 930 (collectors) and the emitters formed by the N+diffusions 910a and 910b. The P+diffusions 914a and 914b are coupled to the distributed base 916 through parasitic resistances 928a and 928b. Second P+diffusion substrate contacts 920a and 920b are added external to the N+diffusions 910a and 910b (emitters) and connect the base-emitter resistances 926a and 926b to the ground connection 918. As mentioned hereinabove, the base-to-emitter resistors 926 shall be weak (higher resistance) to maximize mutual triggering of the fingers 900. Referring to Figure 10, depicted is a schematic cross-section elevational diagram of a PNP ESD protection circuit showing a plurality of ESD fingers on an isolated substrate and fabricated in an integrated circuit die, according to another specific example embodiment of this disclosure. Dual high voltage (HV) PNP devices 1006a and 1006b have their emitters formed from P+diffusions 1010a and 1010b, respectively, built in the N-Well base 1030. The dual emitter P+diffusions 1010a and 1010b are tied to the pad connection 1023. Dual N+diffusions 1014a and 1014b tie the parasitic dual resistors 1028a and 1028b to the distributed base 1016, and the N+diffusion 1020 ties the emitter-base resistances 1026a and 1026b to the pad connection 1023. The P-substrate 1008 constitutes the collectors of the dual HV PNP devices 1006a and 1006b, and is tied to the ground connection 1018 through dual P+diffusion contacts 1032a and 1032b. Referring to Figure 11, depicted is a schematic cross-section elevational diagram of an isolated NPN ESD protection circuit fabricated in an integrated circuit die, according to another specific example embodiment of this disclosure. This structure is based on Figure 4D where the HV-PMOS devices 412a/b have been removed. The collectors of the NPN bipolar devices 1106a and 1106b are formed with the Deep-N-well 1108, which is coupled to the signal pad 1123 through the N+diffusions 1144a and 1144b. The HV P-Well 1130 (which was the drain of the HV-PMOS in Figure 4D) is the base of the dual isolated NPNs 1106a and 1106b. The central P+diffusion contact 1132 is the ground return base contact. Two P+diffusions 1114a and 1114b are added sufficiently far from the central P+diffusion ground return base contact 1132 in order to be able to implement the N+diffusion 1110a between P+diffusion 1114a and the ground return base contact 1132, and the N+diffusion 1110b between P+diffusion 1114b and the central P+diffusion ground return base contact 1132. N+diffusions 1110a and 1110b implement the emitters of the created isolated NPN transistors 1106a and 1106b.
The central P+diffusion 1132 implements the return contact for the base-to-emitter resistors 1126a and 1126b of the isolated NPN transistors 1106a and 1106b, while the added P+diffusions 1114a and 1114b are used as contacts for the distributed base 1116 through the local resistors 1128a and 1128b. As mentioned hereinabove, the base-to-emitter resistors 1126a and 1126b shall be weak (higher resistance) while the local resistors 1128a and 1128b shall be as low as possible in order to maximize mutual triggering of the bipolar devices 1106a and 1106b. The triggering current for the ESD protection is created through leakage current of the collector-to-base diodes 1104a and 1104b. For all of the embodiments described hereinabove, the pad to protect is assumed to be positive versus ground. Otherwise, process-intrinsic diodes are forward biased, clamping the pad voltage a junction voltage (~0.7 V) below the ground voltage. Protecting a pad versus ground is the most common situation. However, some applications may require protecting the pad versus a positive supply like the battery voltage (Vbat). According to the teachings of this disclosure, the techniques described herein apply as well to such a situation when the isolated protection presented in Figures 4A, 4B, 4C, 4D or 11 is used. In order to explain how it works, the pad termination 323, 423 or 1123 is renamed the positive termination while the ground termination 318, 418 or 1118 is renamed the negative termination. Protecting the pad versus the positive voltage is achieved by connecting the positive terminations 323, 423 or 1123 to the positive voltage while the negative terminations 318, 418 or 1118 are connected to the pad to be protected. It is contemplated and within the scope of this disclosure that one having ordinary skill in integrated circuit design and the benefit of this disclosure could effectively apply the new ESD circuits disclosed herein to any basic bulk process (e.g., for LIN applications) or BCD, BiCMOS, triple well, SOI, etc. The main difference is that such processes may have more layers that are not shown in the basic descriptions of the embodiments presented hereinabove. While embodiments of this disclosure have been depicted, described, and are defined by reference to example embodiments of the disclosure, such references do not imply a limitation on the disclosure, and no such limitation is to be inferred. The subject matter disclosed is capable of considerable modification, alteration, and equivalents in form and function, as will occur to those ordinarily skilled in the pertinent art and having the benefit of this disclosure. The depicted and described embodiments of this disclosure are examples only, and are not exhaustive of the scope of the disclosure. |
Methods and apparatus related to adaptive control loop protection for fast and robust recovery from low-power states in high speed serial I/O applications are described. In some embodiments, a first bit pattern is detected, at a first agent, that indicates a speculative entry by a second agent into a low power consumption state, and one or more control loops are frozen. A second bit pattern is detected (after entering the low power consumption state) that indicates exit from the low power consumption state by the second agent, and the one or more control loops are unfrozen (e.g., in a specific order). Other embodiments are also claimed and/or disclosed. |
1. An apparatus comprising: logic, coupled to a first agent, to detect a first bit pattern that indicates a speculative entry by a second agent into a low power consumption state and to cause freezing of one or more control loops; and logic to detect a second bit pattern that indicates exit from the low power consumption state by the second agent and to cause unfreezing of the one or more control loops.
2. The apparatus of claim 1, wherein the one or more control loops are to comprise one or more of: a CDR (Clock Data Recovery) control loop, an AGC (Automatic Gain Control) control loop, a DFE (Decision Feedback Equalization) control loop, and a CTOC (Continuous Time Offset Cancellation) control loop.
3. The apparatus of claim 2, wherein the second logic is to cause unfreezing of the CDR control loop prior to the AGC control loop, DFE control loop, and CTOC control loop; or wherein the second logic is to inject an artificial frequency offset into the CDR control loop to assist fast locking through slow responsive phase regions, prior to enabling the AGC control loop, DFE control loop, and CTOC control loop; or wherein the second logic is to unfreeze the AGC control loop, DFE control loop, and CTOC control loop in response to expiration of a timer that indicates acquisition of the CDR control loop.
4. The apparatus of claim 1, further comprising logic to determine whether the second agent has in fact entered the low power consumption state after freezing of the one or more control loops and in response to expiration of a timer; or wherein the first agent and the second agent are to be coupled via a link and wherein the link comprises a Peripheral Component Interconnect Express (PCIe) link; or wherein the first bit pattern is to comprise an EIOS (Electronic Idle Ordered Set) bit pattern; or wherein the second bit pattern is to comprise an EIEOS (Electronic Idle Exit Ordered Set) bit pattern.
5. The apparatus of claim 1, wherein the first agent is to comprise a PCIe controller; or wherein the second agent is to comprise an input/output device.
6. The apparatus of claim 1, wherein the first agent and the second agent are to be coupled via a link; optionally wherein the link is to comprise a point-to-point coherent interconnect.
7. The apparatus of claim 1, wherein the first agent is to comprise one or more of the logic to detect the first bit pattern and the logic to detect the second bit pattern; or wherein one or more of the first agent, the second agent, and memory are on a same integrated circuit chip.
8. A method comprising: detecting, at a first agent, a first bit pattern that indicates a speculative entry by a second agent into a low power consumption state, and causing freezing of one or more control loops; and detecting a second bit pattern that indicates exit from the low power consumption state by the second agent, and causing unfreezing of the one or more control loops.
9. The method of claim 8, wherein the one or more control loops comprise one or more of: a CDR (Clock Data Recovery) control loop, an AGC (Automatic Gain Control) control loop, a DFE (Decision Feedback Equalization) control loop, and a CTOC (Continuous Time Offset Cancellation) control loop.
10. The method of claim 9, wherein detecting the second bit pattern causes unfreezing of the CDR control loop prior to the AGC control loop, DFE control loop, and CTOC control loop; or wherein detecting the second bit pattern causes injection of an artificial frequency offset into the CDR control loop to assist fast locking through slow responsive phase regions, prior to enabling the AGC control loop, DFE control loop, and CTOC control loop; or wherein detecting the second bit pattern causes unfreezing of the AGC control loop, DFE control loop, and CTOC control loop in response to expiration of a timer that indicates acquisition of the CDR control loop.
11. A system comprising: a processor having a first agent and a second agent; and logic to detect a first bit pattern that indicates a speculative entry by the second agent into a low power consumption state and to cause freezing of one or more control loops; and logic to detect a second bit pattern that indicates exit from the low power consumption state by the second agent and to cause unfreezing of the one or more control loops.
12. The system of claim 11, wherein the one or more control loops are to comprise one or more of: a CDR (Clock Data Recovery) control loop, an AGC (Automatic Gain Control) control loop, a DFE (Decision Feedback Equalization) control loop, and a CTOC (Continuous Time Offset Cancellation) control loop.
13. The system of claim 12, wherein the logic to detect the second bit pattern is to cause unfreezing of the CDR control loop prior to the AGC control loop, DFE control loop, and CTOC control loop; or wherein the logic to detect the second bit pattern is to inject an artificial frequency offset into the CDR control loop to assist fast locking through slow responsive phase regions, prior to enabling the AGC control loop, DFE control loop, and CTOC control loop; or wherein the logic to detect the second bit pattern is to unfreeze the AGC control loop, DFE control loop, and CTOC control loop in response to expiration of a timer that indicates acquisition of the CDR control loop.
14. An apparatus comprising means to perform a method as set forth in any of claims 8 to 13.
15. A computer program product comprising computer program code configured, when executed, to implement a method or realize an apparatus as set forth in any preceding claim. |
FIELD
The present disclosure generally relates to the field of electronics. More particularly, an embodiment of the invention relates to adaptive control loop protection for fast and robust recovery from low-power states in high speed serial link I/O applications.
BACKGROUND
One common Input/Output (I/O or IO) interface used in computer systems is Peripheral Component Interconnect express (PCIe). As PCIe speeds are increased, however, the resulting signal distortion reduces signal communication reliability. For example, PCIe links with high data transfer rates may generally use a self-corrective feedback control loop to control analog receiver circuits. However, noisy data input may occur at entry of and exit from a power state, which would cause the feedback control loop to react and may unsettle it to values that are not optimal for an electrically robust link.
BRIEF DESCRIPTION OF THE DRAWINGS
The detailed description is provided with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items. Fig. 1 illustrates a block diagram of an embodiment of a computing system including PCIe devices and/or other I/O devices, which can be utilized to implement one or more embodiments discussed herein. Fig. 2 illustrates a block diagram of an embodiment of a computing system, which can be utilized to implement one or more embodiments discussed herein. Fig. 3A illustrates a flow diagram of a method, according to an embodiment. Fig. 3B illustrates a block diagram for a low power state exit Finite State Machine controlled CDR loop filter, according to an embodiment. Fig. 4 illustrates a block diagram of an embodiment of a computing system, which can be utilized to implement one or more embodiments discussed herein. Fig. 5 illustrates a block diagram of an embodiment of a computing system, which can be utilized to implement one or more embodiments discussed herein.
DETAILED DESCRIPTION
In the following description, numerous specific details are set forth in order to provide a thorough understanding of various embodiments. However, some embodiments are practiced without the specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the particular embodiments. Various aspects of embodiments of the invention are performed using various means, such as integrated semiconductor circuits ("hardware"), computer-readable instructions organized into one or more programs ("software"), or some combination of hardware and software. For the purposes of this disclosure, reference to "logic" shall mean either hardware, software, or some combination thereof. PCIe Gen3 (where PCIe refers to PCI express, which may be in accordance with PCIe Base Specification Revision 3.0 (e.g., version 1.0, November 10, 2010)) and other serial I/Os with high data transfer rates use self-corrective feedback control loops to control analog receiver circuits. During full link training in the beginning, these loops go through acquisition to settle to optimal start values for the link to function electrically in a robust manner. Once a link is out of training and fully functional, the control loops continuously correct the analog circuits to sample the incoming data within the bit error rate requirement.
However, noisy data input, for example, may potentially happen at the entry and exit of a power state, and would cause the feedback control loops to react, possibly unsettling them to values that are not optimal for an electrically robust link. It is extremely important for these kinds of receivers to have an accurate means of preventing the loops from being exposed to any kind of noisy data, such as may happen during the entry and exit of link power management states. In the legacy Gen1 and Gen2 PCIe designs, complex analog squelch circuits generally provide a reliable way of detecting entry into a lower power management state (squelch) to protect the loops from noisy squelch data. In high speed Gen3 PCIe designs (or even in some PCIe Gen2 designs), reliable analog indication of entry into a power state at 8 GT/s and above is not feasible due to complex inter-symbol interference (ISI) and small signal amplitude. In these high speed designs, the analog indication is replaced with digital decoding and detection of an EIOS (Electronic Idle Ordered Set) bit pattern indicating intent to enter a lower power state. The controller in the receiver PHY layer processes this EIOS pattern and sends an indication to the Analog Front End (or AFE, which includes the analog circuits that receive incoming analog signals and resolve them to receive data in binary format, and convert binary transmit data into analog signals to send over the PCIe link to link partner devices). A PCIe link partner (such as controller 135 of Fig. 1) may send a signal to the root-complex receiver that indicates entry to a lower power state, e.g., L0s (where L0s generally refers to a power savings state), which is a relatively short duration idle mode with the expectation of fast wake-up. The root-complex receiver uses this signal for entering a lower power state like L0s and in turn sends an indication to the AFE to shut off (or make idle) the appropriate analog circuits. However, the current digital detection and L0s entry solution takes a significantly long time. The link partner may enter squelch mode during this time and start sending squelch data, exposing the loops to squelch data, e.g., for 50 to 100 ns depending on Gen3 or Gen2 data rates, while the EIOS data is processed in order to confirm entry into the low power state. Also, in many boundary logic conditions, the loops will be exposed to noisy data during the squelched condition for a longer period, as there may not be a reliable indication from the controller about entry into a lower power state. For example, when the link is in recovery sub-states, EIOS detection to the LTSSM (Link Training and Status State Machine) is masked, and the controller may not send the L0s entry signal to the AFE, which would result in corrupting the loops such that they drift to a non-optimal position that may not be recoverable. Under such conditions the AFE receiver may become exposed to noisy signals and common-mode jumps, and the control loops try to correct for these and settle down to values that may be completely suboptimal for regular data traffic. This may cause link failure(s) after the receiver exits the lower power state. This situation can be partially mitigated by increasing the length and duration of nFTS (the number of Fast Training Sequences required to assist the AFE receiver to achieve bit lock) patterns at the exit from the lower power state, but this may severely reduce the power and performance benefits as the overall exit latency increases.
Such an exit latency increase takes away time that can be spent in the lower power state, which reduces the power management benefits/efficiency. For many applications and workloads, there can be repeated back-to-back entries to and exits from lower power states. In these cases, the problem can manifest in an even more severe form, and even with a longer nFTS the receiver may encounter burst errors. Hence, if the issue is not addressed, products with PCIe Gen3 capable circuit architecture would face: (i) link degradation or link failure after exit from a lower power state; and/or (ii) reduction in power and performance benefit as the exit latency would be longer. Moreover, the symptoms of receiver recovery problems may include: (a) nFTS timeout and the link entering recoveries on L0s exit; (b) scenarios of Sudden Link Downs (SLDs) where the controller does not assert RX_L0s (Receive L0s) on EIOS, which results in the AFE RX loops being exposed to squelch data for a relatively long time, eventually corrupting the adaptive loops beyond self-recovery; (c) slow degradation of link performance on back-to-back L0s events, due to the noisy data at the beginning of a low power state exit, which could cause the receiver adaptive loops to drift while the L0 residency time is not long enough for the receiver to fully recover before entering the next L0s state. To this end, in some embodiments, a controller (e.g., PCIe controller 135 of Fig. 1) processes the EIOS in a special way to generate a relatively early indication that an end point (or agent) is entering a lower power state (e.g., by decoding COM IDLE IDLE IDLE in Gen1/Gen2 and the first 4 EIOS symbols in Gen3). This early EIOS is a potential indication that the root complex could enter the L0s state, but it could drop the EIOS and stay in the L0 state in some boundary cases. The AFE uses this early EIOS indication to cause freezing of the control loops (and also to arm the analog squelch exit detection logic to detect the squelch exit from the low power state). This freeze mechanism will prevent the control loops from reacting to noisy squelch data after a link partner completes transmission of the EIOS. It may take a significant amount of time (e.g., up to 100 ns) for a controller to process the EIOS data in order to confirm entry into the low power state. If the normal L0s entry signal were used, the adaptive loops may be exposed to squelch data for the controller processing latency each time the link enters the L0s state. An analog squelch exit signal is then sampled/detected after a delay period (e.g., a programmable analog squelch circuit warm up time, such as 20 ns, 40 ns or 80 ns) from the freeze indication to detect an un-squelched state. The control loops would then be unfrozen in response to a change to the analog squelch exit signal (e.g., when it is asserted after the programmable warm up timer). Moreover, the early EIOS indication from the controller may not always result in the LTSSM entering the ASPM L0s state. In these speculative cases, the control loops are opened up (i.e., unfrozen) as squelch exit would be indicated eventually (e.g., based on detection of an EIEOS (Electronic Idle Exit Ordered Set) bit pattern). In case the end point (or agent) exits the low power state shortly after entry (for example, the PCIe specification defines a 20 ns minimum L0s residency), the adaptive control loops would also be enabled after some delay (e.g., 40 ns or 80 ns warm up time) in response to an indication of squelch exit.
Hence, the speculative control loop freeze before low-power state entry would not lock the loops prematurely, due to the built-in fail-safe mechanism. Various embodiments are discussed herein with reference to a computing system component, such as the components discussed herein, e.g., with reference to Figs. 1-2 and 4-5. More particularly, Fig. 1 illustrates a block diagram of a computing system 100, according to an embodiment of the invention. The system 100 includes one or more agents 102-1 through 102-M (collectively referred to herein as "agents 102" or more generally "agent 102"). In an embodiment, the agents 102 are components of a computing system, such as the computing systems discussed with reference to Figs. 2 and 4-5. As illustrated in Fig. 1, the agents 102 communicate via a network fabric 104. In an embodiment, the network fabric 104 can include one or more interconnects (or interconnection networks) that communicate via a serial (e.g., point-to-point) link and/or a shared communication network. Each link may include one or more lanes. For example, some embodiments can facilitate component debug or validation on links that allow communication with fully buffered dual in-line memory modules (FBD), e.g., where the FBD link is a serial link for coupling memory modules to a host controller device (such as a processor or memory hub). Debug information is transmitted from the FBD channel host such that the debug information is observed along the channel by channel traffic trace capture tools (such as one or more logic analyzers). In one embodiment, the system 100 can support a layered protocol scheme, which includes a physical layer, a link layer, a routing layer, a transport layer, and/or a protocol layer. The fabric 104 further facilitates transmission of data (e.g., in the form of packets) from one protocol (e.g., caching processor or caching aware memory controller) to another protocol for a point-to-point network. Also, in some embodiments, the network fabric 104 can provide communication that adheres to one or more cache coherent protocols. Furthermore, as shown by the direction of the arrows in Fig. 1, the agents 102 transmit and/or receive data via the network fabric 104. Hence, some agents utilize a unidirectional link while others utilize a bidirectional link for communication. For instance, one or more agents (such as agent 102-M) transmit data (e.g., via a unidirectional link 106), other agent(s) (such as agent 102-2) receive data (e.g., via a unidirectional link 108), while some agent(s) (such as agent 102-1) both transmit and receive data (e.g., via a bidirectional link 110). Also, in accordance with an embodiment, one or more of the agents 102 include one or more Input/Output Hubs (IOHs) 120 to facilitate communication between an agent (e.g., agent 102-1 shown) and one or more Input/Output ("I/O" or "IO") devices 124 (such as PCIe I/O devices). The IOH 120 includes a Root Complex (RC) 122 (that includes one or more root ports) to couple and/or facilitate communication between components of the agent 102-1 (such as a processor, memory subsystem, etc.) and the I/O devices 124 in accordance with the PCIe specification (e.g., in accordance with PCI Express Base Specification 3.0, also referred to as PCIe 3.0 or PCI Gen3 or PCIe Gen3). In some embodiments, one or more components of a multi-agent system (such as processor core, chipset, input/output hub, memory controller, etc.)
include the RC 122 and/or IOHs 120, as will be further discussed with reference to the remaining figures. Additionally, the agent 102 includes a PCIe controller 135 to manage various operations of a PCIe interface including, for example, to improve the quality and/or speed of high-speed (e.g., serial) I/O channels of PCIe components in the agent 102. Further, as illustrated in Fig. 1, the agent 102-1 has access to a memory 140. As will be further discussed with reference to Figs. 2-5, the memory 140 stores various items including, for example, an OS, a device driver, etc. More specifically, Fig. 2 is a block diagram of a computing system 200 in accordance with an embodiment. System 200 includes a plurality of sockets 202-208 (four shown, but some embodiments can have more or fewer sockets). Each socket includes a processor and one or more of IOH 120, RC 122, and PCIe Controller 135. In some embodiments, IOH 120, RC 122, and/or PCIe Controller 135 can be present in one or more components of system 200 (such as those shown in Fig. 2). Further, more or fewer 120, 122, and/or 135 blocks are present in a system depending on the implementation. Additionally, each socket is coupled to the other sockets via a point-to-point (PtP) link, or a differential interconnect, such as a Quick Path Interconnect (QPI), MIPI (Mobile Industry Processor Interface), etc. As discussed with respect to the network fabric 104 of Fig. 1, each socket is coupled to a local portion of system memory, e.g., formed by a plurality of Dual Inline Memory Modules (DIMMs) that include dynamic random access memory (DRAM). In another embodiment, the network fabric may be utilized for any System on Chip (SoC) application, and may utilize custom or standard interfaces, such as ARM compliant interfaces for AMBA (Advanced Microcontroller Bus Architecture), OCP (Open Core Protocol), MIPI (Mobile Industry Processor Interface), PCI (Peripheral Component Interconnect) or PCIe (Peripheral Component Interconnect Express). Some embodiments use a technique that enables use of heterogeneous resources, such as AXI/OCP technologies, in a PC (Personal Computer) based system such as a PCI-based system without making any changes to the IP resources themselves. Embodiments provide two very thin hardware blocks, referred to herein as a Yunit and a shim, that can be used to plug AXI/OCP IP into an auto-generated interconnect fabric to create PCI-compatible systems. In one embodiment a first (e.g., a north) interface of the Yunit connects to an adapter block that interfaces to a PCI-compatible bus such as a direct media interface (DMI) bus, a PCI bus, or a Peripheral Component Interconnect Express (PCIe) bus. A second (e.g., south) interface connects directly to a non-PC interconnect, such as an AXI/OCP interconnect. In various implementations, this bus may be an OCP bus. In some embodiments, the Yunit implements PCI enumeration by translating PCI configuration cycles into transactions that the target IP can understand. This unit also performs address translation from re-locatable PCI addresses into fixed AXI/OCP addresses and vice versa. The Yunit may further implement an ordering mechanism to satisfy a producer-consumer model (e.g., a PCI producer-consumer model). In turn, individual IPs are connected to the interconnect via dedicated PCI shims. Each shim may implement the entire PCI header for the corresponding IP. The Yunit routes all accesses to the PCI header and the device memory space to the shim.
The shim consumes all header read/write transactions and passes on other transactions to the IP. In some embodiments, the shim also implements all power management related features for the IP. Thus, rather than being a monolithic compatibility block, embodiments that implement a Yunit take a distributed approach. Functionality that is common across all IPs, e.g., address translation and ordering, is implemented in the Yunit, while IP-specific functionality such as power management, error handling, and so forth, is implemented in the shims that are tailored to that IP. In this way, a new IP can be added with minimal changes to the Yunit. For example, in one implementation the changes may occur by adding a new entry in an address redirection table. While the shims are IP-specific, in some implementations a large amount of the functionality (e.g., more than 90%) is common across all IPs. This enables a rapid reconfiguration of an existing shim for a new IP. Some embodiments thus also enable use of auto-generated interconnect fabrics without modification. In a point-to-point bus architecture, designing interconnect fabrics can be a challenging task. The Yunit approach described above leverages an industry ecosystem into a PCI system with minimal effort and without requiring any modifications to industry-standard tools. As shown in Fig. 2, each socket is coupled to a Memory Controller (MC)/Home Agent (HA) (such as MC0/HA0 through MC3/HA3). The memory controllers are coupled to a corresponding local memory (labeled as MEM0 through MEM3), which can be a portion of system memory (such as memory 412 of Fig. 4). In some embodiments, the memory controller (MC)/Home Agent (HA) (such as MC0/HA0 through MC3/HA3) can be the same or similar to agent 102-1 of Fig. 1, and the memory, labeled as MEM0 through MEM3, can be the same or similar to the memory devices discussed with reference to any of the figures herein. Generally, processing/caching agents send requests to a home node for access to a memory address with which a corresponding "home agent" is associated. Also, in one embodiment, MEM0 through MEM3 can be configured to mirror data, e.g., as master and slave. Also, one or more components of system 200 can be included on the same integrated circuit die in some embodiments. Furthermore, one implementation (such as shown in Fig. 2) is for a socket glueless configuration with mirroring. For example, data assigned to a memory controller (such as MC0/HA0) is mirrored to another memory controller (such as MC3/HA3) over the PtP links. Fig. 3A illustrates a flow diagram of a method 300 to provide adaptive control loop protection for fast and robust recovery from low-power states in high speed serial I/O link applications, according to some embodiments. In various embodiments, the operations discussed with reference to Fig. 3A are performed by one or more of the components discussed with reference to Figs. 1, 2, 4, and/or 5 (such as the PCIe controller 135 or one or more logic blocks within the controller 135, etc.). Referring to Figs. 1-3A, at an operation 302, a link between two agents (e.g., any of the agents discussed with reference to Fig. 1, such as the agent 102-1 and one of the I/O devices 124 (including an end point of a PCIe link)) is in a normal, operating state (L0). At an operation 304, an early EIOS pattern is detected (e.g., by PCIe controller 135).
Once the early EIOS pattern is detected, one or more control loops (such as one or more of CDR (Clock Data Recovery, which may infer a clock, at least in part, based on analysis of the corresponding data), AGC (Automatic Gain Control, which may utilize a feedback loop to adjust gain to an appropriate level), DFE (Decision Feedback Equalization, which may provide equalization/adaptation to time-varying properties like inter-symbol interference of the communication link), and CTOC (Continuous Time Offset Cancellation, which may provide for linear common mode error detection and offset correction), in accordance with some embodiments) are speculatively frozen at an operation 306. In an embodiment, state and/or other information relating to the control loops is saved at operation 306 (e.g., for faster recovery from the freeze). At an operation 308, method 300 waits for a first timer to expire (e.g., after 20 ns, 40 ns or 80 ns). This timer ensures that the analog squelch-exit-detection circuit in the AFE is sufficiently warmed up for exit detection. Once the first timer expires, an operation 310 checks the squelch-exit-detection circuit to determine whether a lower power consumption state has been asserted/entered. If not, the loops frozen at operation 306 are unfrozen at an operation 312 and the link returns to a normal, operating state at operation 302. Accordingly, at the entry into a low-power state, an embodiment uses an early speculative indication of an imminent lower-power state entry (e.g., based on the EIOS pattern detected by the PCIe controller 135) to store the control loop states and freeze the loops, so that these loops cannot become corrupted during the rest of the entry process and during the low-power state. If a false squelch happens and the receiver is still in operation mode, all the loops will quickly go back to normal operation as illustrated in Fig. 3A. If the lower power consumption state is in fact asserted at operation 310, the link enters the low power consumption state and EIEOS pattern detection is activated at an operation 314. Once an EIEOS pattern is detected at an operation 316, the CDR loop is enabled at an operation 318 (as will be further discussed below). At an operation 320, method 300 waits for a second timer expiration to allow for the CDR acquisition prior to enabling the remaining loops (e.g., the AGC, DFE, and CTOC loops) at operation 322. This approach prevents error propagation in the adaptive loops due to initially corrupted data at the beginning of low-power state exits and hence improves both link stability and the overall bit lock time. After operation 322, method 300 resumes with operation 302. In an embodiment, freezing of the one or more loops at operation 306 is performed in the following order: CDR loop, DFE loop, AGC loop, and CTOC loop. Additionally, in one embodiment, unfreezing of the one or more loops at operations 318 and 322 is performed in the following order: CDR loop, DFE loop, AGC loop, and CTOC loop (this sequencing is summarized in the sketch below).
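The following is a minimal behavioral sketch of the method 300 flow. The AfeStub class and all of its method names are illustrative assumptions standing in for the actual AFE interface; only the ordering of events (speculative freeze with state snapshot, fail-safe unfreeze on a false squelch, and CDR-first staggered unfreeze) follows the Fig. 3A description above.

```python
# Behavioral sketch of method 300 (Fig. 3A); the stub and names are assumed.
class AfeStub:
    """Stand-in for the analog front end; merely records loop state."""
    def __init__(self):
        self.frozen = set()
        self.squelched = False          # would be set on a real L0s entry
    def read_state(self, loop): return f"{loop}-snapshot-codes"
    def freeze(self, loop): self.frozen.add(loop)
    def unfreeze(self, loop, saved_codes): self.frozen.discard(loop)
    def arm_squelch_exit_detect(self): pass
    def squelch_entered(self): return self.squelched

FREEZE_ORDER = ("CDR", "DFE", "AGC", "CTOC")   # order per one embodiment

class LoopProtectionFsm:
    def __init__(self, afe):
        self.afe = afe
        self.saved = {}

    def on_early_eios(self):
        """Operation 306: speculatively freeze all loops, saving their codes."""
        for loop in FREEZE_ORDER:
            self.saved[loop] = self.afe.read_state(loop)
            self.afe.freeze(loop)
        self.afe.arm_squelch_exit_detect()

    def on_warmup_timer_expired(self):
        """Operations 308-312: fail-safe unfreeze if this was a false squelch."""
        if not self.afe.squelch_entered():
            for loop in FREEZE_ORDER:
                self.afe.unfreeze(loop, self.saved[loop])

    def on_squelch_exit(self):
        """Operations 316-322: unfreeze the CDR first, then the other loops."""
        self.afe.unfreeze("CDR", self.saved["CDR"])
        # ... wait here for the CDR acquisition timer (operation 320) ...
        for loop in ("AGC", "DFE", "CTOC"):
            self.afe.unfreeze(loop, self.saved[loop])

afe = AfeStub()
fsm = LoopProtectionFsm(afe)
fsm.on_early_eios()             # operation 304: early EIOS detected
fsm.on_warmup_timer_expired()   # false squelch: loops resume (operation 312)
print("loops still frozen:", afe.frozen or "none")
```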
However, all embodiments are not limited to this order for unfreezing of the one or more loops, except that the CDR loop unfreeze (and the wait for CDR acquisition at operation 320) is to occur prior to unfreezing the remaining loops. In some embodiments, a (e.g., digital) Finite State Machine (FSM) is used to: (a) save the receiver AFE control loop states (e.g., for the CDR, AGC, DFE, and CTOC loops) before entering power saving states, using a speculative entry signal based on the EIOS pattern, and unfreeze the loops back to normal operation if the speculative entry is not confirmed after expiration of a first timer; and/or (b) enable all the loops step by step after exiting power saving states. In an embodiment, the PCIe controller or a system agent will detect the early indication of EIOS that the link is potentially going into a power management state and that incoming data will soon be squelched. This early indication is sent to the AFE to freeze the control loops (e.g., CDR, AGC, DFE and CTOC) so that they do not react to incoming data, and also to arm the squelch exit detection logic at the same time. A warm up timer will start after arming the squelch logic. The timer will expire 20 ns/40 ns/80 ns (which may be programmable) after squelch arming. After the timer expires, the analog squelch exit signal is sampled for a valid analog squelch exit. When a valid squelch exit is detected, the CDR will unfreeze first and start the Acquisition (ACQ) cycle from the last frozen codes. After the CDR ACQ cycle is complete, the AGC, DFE, and CTOC will unfreeze and start tracking from the last snapshot codes taken prior to the freeze. Normal link operation (e.g., full exit from power management mode) will resume from the point of the AGC, DFE and CTOC unfreeze after operation 322. Moreover, in some embodiments, an FSM is used to unfreeze the control loops in a staggered sequence when exiting low-power states (such as L0s). The clock recovery (CDR) loop will unfreeze first; the AGC, DFE, and CTOC loops will not unfreeze until CDR acquisition is completed. For example, phase drift can occur during the residency of low power states such as L0s. At the exit of low power states, the receiver sampling clock is no longer aligned with the incoming data. The AGC, DFE and CTOC loops would drift in erroneous directions if they were enabled at the same time as the CDR loop. Furthermore, the erroneous drift of the adaptive loops could interact with each other and potentially reach a non-recoverable state, causing link failure. This approach prevents error propagation in the adaptive loops due to initially corrupted data at the beginning of low-power state exits and hence improves both link stability and the overall bit lock time. Under certain scenarios, the CDR may operate in a slow responsive phase region when the receiver has just exited the lower power states. This can be caused by large transmit phase drift and/or common mode drift, such that the CDR phase detector is in a dead zone. An FSM is used to inject an artificial frequency offset to pull out of the dead zone and assist fast sampling phase recovery. In one embodiment, Fig. 3B illustrates a block diagram for a low power state exit FSM controlled CDR loop filter 350. During normal operation, the phase input 351 is sent to the first order filter 370 and to the integrator 360, which drives the second order filter 380. The phase output 354 is the summation of Filter 370 and Filter 380. When exiting from low power states, an FSM 390 controls the addition of an artificial frequency offset 352 into the integrator 360 by asserting the inject offset 353 to logic "1" for one clock cycle.
Since an embodiment does not rely on the controller's L0s signal to freeze and unfreeze the loops, it also protects the loops under certain boundary conditions. One such boundary condition may be that an end point (or agent) enters a low power state immediately after entering the link-up state while the root port is still in recovery due to link errors. In such cases, the signal responsible for indicating entry into L0s will never be asserted to the AFE, as the receiver side of the LTSSM never actually enters L0s. An embodiment uses an early indication of entry (into the power state) sent by the controller instead of the L0s entry signal, which can therefore be used more reliably for freezing the loops. With this scheme, the link will have much shorter power management exit latencies and become more stable.

Furthermore, some embodiments provide a safe solution for PCIe/QPI serial I/O AFE designs to achieve fast recovery while exiting power saving states. Moreover, the receiver problems associated with power saving states may be in part due to adaptive loop drift from the operating conditions. The drift can be caused by error propagation in LMS (Least Mean Squared) adaptation when receive data is corrupted at the beginning of power saving state entry and exit. In extreme scenarios, back-to-back L0s events can be so frequent that there is no time for the adaptive loops to recover before entering the next power saving mode. Eventually, the receiver can drift far enough to cause a sudden link down. To solve the adaptive loop drift problem in silicon, a digital loop protection FSM can be implemented (e.g., in the AFE receiver) to protect loops based on the early EIOS indication from the controller and to sequentially enable loop adaptation using analog squelch exit detection. Early (EIOS) squelch detection from controller inputs can trigger the AFE RX loop protection FSM to ignore potentially corrupted data even when loop adaptation is enabled. This protection mechanism prevents the RX from entering erroneous conditions, and the staggered unfreeze of the CDR followed by the AGC, DFE, and CTOC control loops will improve link stability and shorten bit lock time.

During the exit of the low-power state, an embodiment provides a sequential procedure for enabling all the loops to avoid error propagation and shorten bit lock time. When exiting L0s/L1, receive data corruption can arise from large TX phase drift as well as TX common mode drift. Such drifts would trigger erroneous AGC, DFE, and CTOC adaptation, which not only increases the recovery time but also jeopardizes link stability. In one loop protection FSM, the CDR loop is enabled first after L0s/L1 exit. Sometimes CDR phase detection becomes less effective in the beginning, when the transmit phase has drifted into the dead zone of the CDR during the residency of the low power states. An artificial frequency offset is injected to pull the receiver out of those slow responsive regions. Once the CDR has recovered the sampling phase, the initially injected artificial frequency offset will be automatically cleared by the CDR loop adaptation.
The AGC, DFE, and CTOC loops are then turned back on to track the incoming signals and fine-tune the receiver configuration.

Hence, some embodiments provide for features including one or more of the following: (1) loops are protected from noisy data in a robust manner using a speculative L0s signal from the controller to the AFE; (2) unlike the loop freeze, the unfreeze is done in a staggered fashion: the CDR is unfrozen first while the AGC, DFE, and CTOC are kept frozen, and after the CDR is stabilized using nFTS, the rest of the loops (AGC, DFE, CTOC) are unfrozen to track the data for dynamic adjustments. Freezing all loops (CDR, AGC, DFE, CTOC) at the same time using an early version of EIOS, and unfreezing the CDR first using analog squelch exit followed by the AGC/DFE/CTOC unfreeze, will provide maximum protection, robust link operation, and short L0s exit latencies.

Furthermore, faster recovery is possible in accordance with various embodiments, as the link can start from pre-stored values instead of having to go through fresh training, and can use the artificial frequency offset during the CDR acquisition window to progress through slow responsive phase regions, thereby increasing power savings as the link can stay in the lower power state relatively longer. Also, some embodiments improve link stability at exit from low power states and enable power reduction in high speed serial I/Os that use feedback control loops. For example, robust link performance with back-to-back L0s entry and exit (e.g., with an L0 state in between with a short residency of 400ns) would be possible by practicing some embodiments. Also, the nFTS value used for power management is generally provided at the beginning of the Gen3 training sequence. Some embodiments provide very short (e.g., nFTS of less than 30) L0s exit latency at Gen3 speed.

Fig. 4 illustrates a block diagram of a computing system 400 in accordance with an embodiment of the invention. The computing system 400 includes one or more central processing unit(s) (CPUs) 402-1 through 402-N or processors (collectively referred to herein as "processors 402" or more generally "processor 402") that communicate via an interconnection network (or bus) 404. The processors 402 include a general purpose processor, a network processor (that processes data communicated over a computer network 403), or other types of a processor (including a reduced instruction set computer (RISC) processor or a complex instruction set computer (CISC) processor). Moreover, the processors 402 can have a single or multiple core design. The processors 402 with a multiple core design can integrate different types of processor cores on the same integrated circuit (IC) die. Also, the processors 402 with a multiple core design can be implemented as symmetrical or asymmetrical multiprocessors.

Also, the operations discussed with reference to Figs. 1-3B are performed by one or more components of the system 400. In some embodiments, the processors 402 can be the same as or similar to the processors 202-208 of Fig. 2. Furthermore, the processors 402 (or other components of the system 400) include one or more of the IOH 120, RC 122, and the PCIe Controller 135. Moreover, even though Fig. 4 illustrates some locations for items 120/122/135, these components can be located elsewhere in system 400. For example, I/O device(s) 124 can communicate via bus 422, etc.

A chipset 406 also communicates with the interconnection network 404. The chipset 406 includes a graphics and memory controller hub (GMCH) 408. The GMCH 408 includes a memory controller 410 that communicates with a memory 412.
The memory 412 stores data, including sequences of instructions that are executed by the CPU 402, or any other device included in the computing system 400. For example, the memory 412 stores data corresponding to an operating system (OS) 413 and/or a device driver 411 as discussed with reference to the previous figures. In an embodiment, the memory 412 and the memory 140 of Fig. 1 can be the same or similar. In one embodiment of the invention, the memory 412 can include one or more volatile storage (or memory) devices such as random access memory (RAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), static RAM (SRAM), or other types of storage devices. Nonvolatile memory can also be utilized, such as a hard disk. Additional devices can also communicate via the interconnection network 404, such as multiple CPUs and/or multiple system memories.

Additionally, one or more of the processors 402 can have access to one or more caches (which include private and/or shared caches in various embodiments) and associated cache controllers (not shown). The cache(s) can adhere to one or more cache coherence protocols. Such cache(s) store data (e.g., including instructions) that are utilized by one or more components of the system 400. For example, the cache locally caches data stored in a memory 412 for faster access by the components of the processors 402. In an embodiment, the cache (that is shared) can include a mid-level cache and/or a last level cache (LLC). Also, each processor 402 can include a level 1 (L1) cache. Various components of the processors 402 can communicate with the cache directly, through a bus or interconnection network, and/or through a memory controller or hub.

The GMCH 408 also includes a graphics interface 414 that communicates with a display device 416, e.g., via a graphics accelerator. In one embodiment of the invention, the graphics interface 414 can communicate with the graphics accelerator via an accelerated graphics port (AGP). In an embodiment of the invention, the display 416 (such as a flat panel display) can communicate with the graphics interface 414 through, for example, a signal converter that translates a digital representation of an image stored in a storage device such as video memory or system memory into display signals that are interpreted and displayed by the display 416. In an embodiment, the display signals produced by the display device pass through various control devices before being interpreted by and subsequently displayed on the display 416.

A hub interface 418 allows the GMCH 408 and an input/output control hub (ICH) 420 to communicate. The ICH 420 provides an interface to I/O devices that communicate with the computing system 400. The ICH 420 communicates with a bus 422 through a peripheral bridge (or controller) 424, such as a peripheral component interconnect (PCI) bridge, a universal serial bus (USB) controller, or other types of peripheral bridges or controllers. The bridge 424 provides a data path between the CPU 402 and peripheral devices. Other types of topologies can be utilized. Also, multiple buses can communicate with the ICH 420, e.g., through multiple bridges or controllers.
Moreover, other peripherals in communication with the ICH 420 include, in various embodiments of the invention, integrated drive electronics (IDE) or small computer system interface (SCSI) hard drive(s), USB port(s), a keyboard, a mouse, parallel port(s), serial port(s), floppy disk drive(s), digital output support (e.g., digital video interface (DVI)), or other devices.

The bus 422 communicates with an audio device 426, one or more disk drive(s) 428, and a network interface device 430 (which is in communication with the computer network 403). Other devices can also communicate via the bus 422. Also, various components (such as the network interface device 430) can communicate with the GMCH 408 in some embodiments of the invention. In addition, the processor 402 and one or more components of the GMCH 408 and/or chipset 406 are combined to form a single integrated circuit chip (or are otherwise present on the same integrated circuit die) in some embodiments.

Furthermore, the computing system 400 includes volatile and/or nonvolatile memory (or storage). For example, nonvolatile memory includes one or more of the following: read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically EPROM (EEPROM), a disk drive (e.g., 428), a floppy disk, a compact disk ROM (CD-ROM), a digital versatile disk (DVD), flash memory, a magneto-optical disk, or other types of nonvolatile machine-readable media that are capable of storing electronic data (e.g., including instructions).

Fig. 5 illustrates a computing system 500 that is arranged in a point-to-point (PtP) configuration, according to an embodiment of the invention. In particular, Fig. 5 shows a system where processors, memory, and input/output devices are interconnected by a number of point-to-point interfaces. The operations discussed with reference to Figs. 1-4 are performed by one or more components of the system 500.

As illustrated in Fig. 5, the system 500 includes several processors, of which only two, processors 502 and 504, are shown for clarity. The processors 502 and 504 each include a local memory controller hub (MCH) 506 and 508 to enable communication with memories 510 and 512. The memories 510 and/or 512 store various data such as those discussed with reference to the memory 412 of Fig. 4. As shown in Fig. 5, the processors 502 and 504 also include the cache(s) discussed with reference to Fig. 4 in some embodiments.

In an embodiment, the processors 502 and 504 can be one of the processors 402 discussed with reference to Fig. 4. The processors 502 and 504 exchange data via a point-to-point (PtP) interface 514 using PtP interface circuits 516 and 518, respectively. Also, the processors 502 and 504 each exchange data with a chipset 520 via individual PtP interfaces 522 and 524 using point-to-point interface circuits 526, 528, 530, and 532. The chipset 520 further exchanges data with a high-performance graphics circuit 534 via a high-performance graphics interface 536, e.g., using a PtP interface circuit 537.

At least one embodiment of the invention is provided within the processors 502 and 504 or chipset 520. For example, the processors 502 and 504 and/or chipset 520 include one or more of the IOH 120, RC 122, and the PCIe Controller 135. Other embodiments of the invention, however, exist in other circuits, logic units, or devices within the system 500 of Fig. 5. Furthermore, other embodiments of the invention can be distributed throughout several circuits, logic units, or devices illustrated in Fig. 5.
Hence, the location of items 120/122/135 shown in Fig. 5 is exemplary and these components may or may not be provided in the illustrated locations.

The chipset 520 communicates with a bus 540 using a PtP interface circuit 541. The bus 540 can have one or more devices that communicate with it, such as a bus bridge 542 and I/O devices 543. Via a bus 544, the bus bridge 542 communicates with other devices such as a keyboard/mouse 545, communication devices 546 (such as modems, network interface devices, or other communication devices that communicate with the computer network 403), an audio I/O device, and/or a data storage device 548. The data storage device 548 stores code 549 that is executed by the processors 502 and/or 504.

The following examples pertain to further embodiments. Example 1 includes an apparatus comprising: logic, coupled to a first agent, to detect a first bit pattern that indicates a speculative entry by a second agent into a low power consumption state and to cause freezing of one or more control loops; and logic to detect a second bit pattern that indicates exit from the low power consumption state by the second agent and to cause unfreezing of the one or more control loops. In example 2, the subject matter of example 1 can optionally include an apparatus, wherein the one or more control loops are to comprise one or more of: a CDR (Clock Data Recovery) control loop, an AGC (Automatic Gain Control) control loop, a DFE (Decision Feedback Equalization) control loop, and a CTOC (Continuous Time Offset Cancellation) control loop. In example 3, the subject matter of example 2 can optionally include an apparatus, wherein the second logic is to cause unfreezing of the CDR control loop prior to the AGC control loop, DFE control loop, and CTOC control loop. In example 4, the subject matter of example 2 can optionally include an apparatus, wherein the second logic is to inject an artificial frequency offset into the CDR control loop to assist fast locking through slow responsive phase regions, prior to enabling the AGC control loop, DFE control loop, and CTOC control loop. In example 5, the subject matter of example 2 can optionally include an apparatus, wherein the second logic is to unfreeze the AGC control loop, DFE control loop, and CTOC control loop in response to expiration of a timer that indicates acquisition of the CDR control loop. In example 6, the subject matter of example 1 can optionally include an apparatus, further comprising logic to determine whether the second agent has in fact entered the low power consumption state after freezing of the one or more control loops and in response to expiration of a timer. In example 7, the subject matter of example 1 can optionally include an apparatus, wherein the link comprises a Peripheral Component Interconnect Express (PCIe) link. In example 8, the subject matter of example 1 can optionally include an apparatus, wherein the first bit pattern is to comprise an EIOS (Electrical Idle Ordered Set) bit pattern. In example 9, the subject matter of example 1 can optionally include an apparatus, wherein the second bit pattern is to comprise an EIEOS (Electrical Idle Exit Ordered Set) bit pattern. In example 10, the subject matter of example 1 can optionally include an apparatus, wherein the first agent is to comprise a PCIe controller. In example 11, the subject matter of example 1 can optionally include an apparatus, wherein the second agent is to comprise an input/output device.
In example 12, the subject matter of example 1 can optionally include an apparatus, wherein the first agent and the second agent are to be coupled via a link. In example 13, the subject matter of example 12 can optionally include an apparatus, wherein the link is to comprise a point-to-point coherent interconnect. In example 14, the subject matter of example 1 can optionally include an apparatus, wherein the first agent is to comprise one or more of the logic to detect the first bit pattern and the logic to detect the second bit pattern. In example 15, the subject matter of example 1 can optionally include an apparatus, wherein one or more of the first agent, the second agent, and memory are on a same integrated circuit chip.

In example 16, a method comprises: detecting, at a first agent, a first bit pattern that indicates a speculative entry by a second agent into a low power consumption state and causing freezing of one or more control loops; and detecting a second bit pattern that indicates exit from the low power consumption state by the second agent and causing unfreezing of the one or more control loops. In example 17, the subject matter of example 16 can optionally include a method, wherein the one or more control loops comprise one or more of: a CDR (Clock Data Recovery) control loop, an AGC (Automatic Gain Control) control loop, a DFE (Decision Feedback Equalization) control loop, and a CTOC (Continuous Time Offset Cancellation) control loop. In example 18, the subject matter of example 17 can optionally include a method, wherein detecting the second bit pattern causes unfreezing of the CDR control loop prior to the AGC control loop, DFE control loop, and CTOC control loop. In example 19, the subject matter of example 17 can optionally include a method, wherein detecting the second bit pattern causes injection of an artificial frequency offset into the CDR control loop to assist fast locking through slow responsive phase regions, prior to enabling the AGC control loop, DFE control loop, and CTOC control loop. In example 20, the subject matter of example 17 can optionally include a method, wherein detecting the second bit pattern causes unfreezing of the AGC control loop, DFE control loop, and CTOC control loop in response to expiration of a timer that indicates acquisition of the CDR control loop.

Example 21 includes a system comprising: a processor having a first agent and a second agent; and logic to detect a first bit pattern that indicates a speculative entry by the second agent into a low power consumption state and to cause freezing of one or more control loops; and logic to detect a second bit pattern that indicates exit from the low power consumption state by the second agent and to cause unfreezing of the one or more control loops. In example 22, the subject matter of example 21 can optionally include a system, wherein the one or more control loops are to comprise one or more of: a CDR (Clock Data Recovery) control loop, an AGC (Automatic Gain Control) control loop, a DFE (Decision Feedback Equalization) control loop, and a CTOC (Continuous Time Offset Cancellation) control loop. In example 23, the subject matter of example 22 can optionally include a system, wherein the logic to detect the second bit pattern is to cause unfreezing of the CDR control loop prior to the AGC control loop, DFE control loop, and CTOC control loop.
In example 24, the subject matter of example 22 can optionally include a system, wherein the logic to detect the second bit pattern is to inject an artificial frequency offset into the CDR control loop to assist fast locking through slow responsive phase regions, prior to enabling the AGC control loop, DFE control loop, and CTOC control loop. In example 25, the subject matter of example 22 can optionally include a system, wherein the logic to detect the second bit pattern is to unfreeze the AGC control loop, DFE control loop, and CTOC control loop in response to expiration of a timer that indicates acquisition of the CDR control loop. In example 26, the subject matter of example 21 can optionally include a system, further comprising logic to determine whether the second agent has in fact entered the low power consumption state after freezing of the one or more control loops and in response to expiration of a timer. In example 27, the subject matter of example 21 can optionally include a system, wherein the link comprises a Peripheral Component Interconnect Express (PCIe) link. In example 28, the subject matter of example 21 can optionally include a system, wherein the first bit pattern is to comprise an EIOS (Electrical Idle Ordered Set) bit pattern. In example 29, the subject matter of example 21 can optionally include a system, wherein the second bit pattern is to comprise an EIEOS (Electrical Idle Exit Ordered Set) bit pattern. In example 30, the subject matter of example 21 can optionally include a system, wherein the first agent is to comprise a PCIe controller. In example 31, the subject matter of example 21 can optionally include a system, wherein the second agent is to comprise an input/output device. In example 32, the subject matter of example 21 can optionally include a system, wherein the first agent and the second agent are to be coupled via a link. In example 33, the subject matter of example 21 can optionally include a system, wherein the first agent is to comprise one or more of the logic to detect the first bit pattern and the logic to detect the second bit pattern. In example 34, the subject matter of example 21 can optionally include a system, wherein one or more of the first agent, the second agent, and memory are on a same integrated circuit chip.

Example 35 includes an apparatus to provide fast and robust recovery from low-power states in high speed serial links, the apparatus comprising: means for detecting, at a first agent, a first bit pattern that indicates a speculative entry by a second agent into a low power consumption state and for causing freezing of one or more control loops; and means for detecting a second bit pattern that indicates exit from the low power consumption state by the second agent and for causing unfreezing of the one or more control loops. In example 36, the subject matter of example 35 can optionally include an apparatus, wherein the one or more control loops comprise one or more of: a CDR (Clock Data Recovery) control loop, an AGC (Automatic Gain Control) control loop, a DFE (Decision Feedback Equalization) control loop, and a CTOC (Continuous Time Offset Cancellation) control loop. In example 37, the subject matter of example 36 can optionally include an apparatus, wherein the means for detecting the second bit pattern causes unfreezing of the CDR control loop prior to the AGC control loop, DFE control loop, and CTOC control loop.
In example 38, the subject matter of example 36 can optionally include an apparatus, wherein the means for detecting the second bit pattern causes injection of an artificial frequency offset into the CDR control loop to assist fast locking through slow responsive phase regions, prior to enabling the AGC control loop, DFE control loop, and CTOC control loop. In example 39, the subject matter of example 36 can optionally include an apparatus, wherein the means for detecting the second bit pattern causes unfreezing of the AGC control loop, DFE control loop, and CTOC control loop in response to expiration of a timer that indicates acquisition of the CDR control loop.

In example 40, a computer-readable medium comprising one or more instructions that when executed on a processor configure the processor to perform one or more operations of any of examples 16 to 20. In example 41, the subject matter of examples 1 to 15 can optionally include an apparatus, wherein a processor is to comprise the first agent and the second agent. In example 42, the subject matter of examples 16 to 20 can optionally include a method, wherein a processor is to comprise the first agent and the second agent.

In various embodiments of the invention, the operations discussed herein, e.g., with reference to Figs. 1-5, can be implemented as hardware (e.g., circuitry), software, firmware, microcode, or combinations thereof, which can be provided as a computer program product, e.g., including a tangible (e.g., non-transitory) machine-readable or (e.g., non-transitory) computer-readable medium having stored thereon instructions (or software procedures) used to program a computer to perform a process discussed herein. Also, the term "logic" may include, by way of example, software, hardware, or combinations of software and hardware. The machine-readable medium may include a storage device such as those discussed with respect to Figs. 1-5. Additionally, such computer-readable media may be downloaded as a computer program product, wherein the program is transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals transmitted via a carrier wave or other propagation medium via a communication link (e.g., a bus, a modem, or a network connection).

Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least an implementation. The appearances of the phrase "in one embodiment" in various places in the specification may or may not be all referring to the same embodiment.

Also, in the description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. In some embodiments of the invention, "connected" may be used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements may not be in direct contact with each other, but may still cooperate or interact with each other.

Thus, although embodiments of the invention have been described in language specific to structural features and/or methodological acts, it is to be understood that claimed subject matter may not be limited to the specific features or acts described. Rather, the specific features and acts are disclosed as sample forms of implementing the claimed subject matter. |
This disclosure generally relates to USB Type-C, and, in particular, DisplayPort Alternate Mode communication in a USB Type-C environment. In one embodiment, a device (110) determines a DisplayPort mode and determines an orientation of a USB Type-C connector plug (120). A multiplexer (142a) multiplexes a DisplayPort transmission based in part on the determined orientation of the USB Type-C connector plug (120). |
CLAIMSWhat is claimed is:1. A method, comprising:determining, by a device, a DisplayPort mode by detecting a received signal on a first sideband use (SBU1) pin of the device or a second sideband use (SBU2) pin of the device;determining, by the device, an orientation of a USB Type-C connector plug by detecting a type of the received signal;multiplexing, by a multiplexer, a DisplayPort transmission based in part on the determined orientation of the USB Type-C connector plug; andmultiplexing, by the multiplexer, the received signal based in part on the determined orientation of the USB Type-C connector plug.2. A method, comprising:determining, by a device, a DisplayPort mode by detecting a received signal;determining, by the device, an orientation of a USB Type-C connector plug; and multiplexing, by a multiplexer, a DisplayPort transmission based in part on the determined orientation of the USB Type-C connector plug.3. The method of Claim 2, wherein determining a DisplayPort mode by detecting a received signal comprises determining a DisplayPort mode by detecting a received signal on a first sideband use (SBU1) pin of the device or a second sideband use (SBU2) pin of the device.4. The method of Claim 3, wherein detecting a received signal on a SBU1 pin of the device or a SBU2 pin of the device comprises detecting a pull up in either the SBU1 pin or the SBU2 pin.5. The method of Claim 2, further comprising multiplexing, by the multiplexer, the received signal based in part on the determined orientation of the USB Type-C connector plug.6. The method of Claim 2, wherein determining an orientation of a USB Type-C connector plug comprises determining an orientation of a USB Type-C connector plug by detecting a type of the received signal.7. The method of Claim 2, wherein the device comprises one of the following:a source device coupled to a USB Type-C connector plug with a normal orientation; a source device coupled to a USB Type-C connector plug with an inverted orientation; a sink device coupled to a USB Type-C connector plug with a normal orientation; or a sink device coupled to a USB Type-C connector plug with an inverted orientation.8. The method of Claim 2, wherein the device does not comprise a PD controller.9. A system, comprising:a device configured to:determine a DisplayPort mode by detecting a received signal; anddetermine an orientation of a USB Type-C connector plug; anda multiplexer coupled to the device, the multiplexer configured to multiplex a DisplayPort transmission based in part on the determined orientation of the USB Type-C connector plug.10. The system of Claim 9, wherein determining a DisplayPort mode by detecting a received signal comprises determining a DisplayPort mode by detecting a received signal on a first sideband use (SBU1) pin of the device or a second sideband use (SBU2) pin of the device.11. The system of Claim 10, wherein detecting a received signal on a SBU1 pin of the device or a SBU2 pin of the device comprises detecting a pull up in either the SBU1 pin or the SBU2 pin.12. The system of Claim 9, wherein the multiplexer is further configured to multiplex the received signal based in part on the determined orientation of the USB Type-C connector plug.13. The system of Claim 9, wherein determining an orientation of a USB Type-C connector plug comprises determining an orientation of a USB Type-C connector plug by detecting a type of the received signal.14. 
The system of Claim 9, wherein the device comprises one of the following:a source device coupled to a USB Type-C connector plug with a normal orientation; a source device coupled to a USB Type-C connector plug with an inverted orientation; a sink device coupled to a USB Type-C connector plug with a normal orientation; or a sink device coupled to a USB Type-C connector plug with an inverted orientation.15. The system of Claim 9, wherein the device does not comprise a PD controller.16. One or more computer-readable non-transitory storage media embodying software that is operable when executed to:determine a DisplayPort mode by detecting a received signal;determine an orientation of a USB Type-C connector plug; andmultiplex a DisplayPort transmission based in part on the determined orientation of the USB Type-C connector plug.17. The media of Claim 16, wherein determining a DisplayPort mode by detecting a received signal comprises determining a DisplayPort mode by detecting a received signal on a first sideband use (SBU1) pin of the device or a second sideband use (SBU2) pin of the device.18. The media of Claim 16, wherein detecting a received signal on a SBU1 pin of the device or a SBU2 pin of the device comprises detecting a pull up in either the SBU1 pin or the SBU2 pin.19. The media of Claim 16, wherein the software is further operable when executed to multiplex the received signal based in part on the determined orientation of the USB Type-C connector plug.20. The media of Claim 16, wherein determining an orientation of a USB Type-C connector plug comprises determining an orientation of a USB Type-C connector plug by detecting a type of the received signal. |
DETECTION OF DISPLAYPORT ALTERNATE MODE COMMUNICATION

[0001] This generally relates to USB Type-C, and, in particular, DisplayPort Alternate Mode communication in a USB Type-C environment.

BACKGROUND

[0002] Universal Serial Bus (USB) is a peripheral interface for attaching a wide variety of computing devices, such as personal computers, digital telephone lines, monitors, modems, mice, printers, scanners, game controllers, keyboards, storage devices, and/or the like.

[0003] USB Type-C is a new standard under the USB umbrella. The USB Type-C connector supports power, data, and video at the same time. The Type-C connector supports up to 100W of power delivery, up to 10Gbps of USB SuperSpeed+ (SS+) data transfer, and up to 8.1Gbps of DisplayPort Alternate Mode (DP Alt Mode) video. In addition to DP Alt Mode video, the Type-C connector supports various other Alt Mode video and data standards such as MHL, HDMI, and Thunderbolt.

[0004] The USB Type-C device that passes information through the USB Type-C connector, and specifically the multiplexer in the USB Type-C device, relies heavily on the Power Delivery (PD) controller to successfully perform operations. For that reason, the PD controller has to control the multiplexer through a control interface or directly through configuration pins. This type of operation imposes a hardware and software burden on the multiplexer and the PD controller. Moreover, dedicated pins need to be made available for the control interface on both the PD controller and the multiplexer, and the PD controller needs to include the multiplexer in its firmware and other programming resources. Also, in the scenario of remote daughter cards, cables, or modules, long and cost-inefficient cabling may be required to connect the PD controller and the multiplexer.

SUMMARY

[0005] In accordance with this disclosure, a device determines a DisplayPort mode and determines an orientation of a Universal Serial Bus (USB) Type-C connector plug. A multiplexer multiplexes a DisplayPort transmission based in part on the determined orientation of the USB Type-C connector plug.

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] FIG. 1 illustrates a system architecture of a USB Type-C source device exchanging information with a USB Type-C sink device over a USB Type-C connector.

[0007] FIG. 2 illustrates a USB Type-C connector pinout.

[0008] FIG. 3 illustrates a connection between a USB Type-C source device and a USB Type-C sink device in a normal connector plug orientation.

[0009] FIG. 4 illustrates a connection between a USB Type-C source device and a USB Type-C sink device in an inverted connector plug orientation.

[0010] FIG. 5 illustrates an example method for detecting a DisplayPort Alternate Mode transmission and connector plug orientation.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

[0011] The disclosure describes one or more methods for automatically detecting a Universal Serial Bus (USB) Type-C DisplayPort Alternate Mode and the orientation of a connector. In one embodiment, a device determines a DisplayPort mode and determines an orientation of a USB Type-C connector plug. A multiplexer multiplexes a DisplayPort transmission based in part on the determined orientation of the USB Type-C connector plug.

[0012] The disclosure may present several technical advantages. Technical advantages of the method may include reducing the amount of hardware resources, such as configuration pins on both the Power Delivery (PD) controller and the multiplexer, and reducing the need for long cabling between both devices.
Reducing the amount of hardware resources may also reduce the form factor of the USB Type-C device. Another technical advantage of the method may include relieving the software burden on the PD controller and also the USB Type-C device.

[0013] FIG. 1 illustrates system architecture 100 of USB Type-C source device 110 exchanging information with USB Type-C sink device 130 over USB Type-C connector 120.

[0014] As illustrated, source device 110 includes auto detection device 140a, multiplexer 142a, PD controller 144a, receptacle 146a, power supply 148a, USB device 150a, and DisplayPort (DP) source 152. Source device 110 may include any device that is USB Type-C compatible, and may transmit DisplayPort Alternate Mode (DP Alt Mode) information to sink device 130. Moreover, source device 110 may receive DP Alt Mode information from sink device 130.

[0015] Likewise, as illustrated, sink device 130 includes auto detection device 140b, multiplexer 142b, PD controller 144b, receptacle 146b, power supply 148b, USB device 150b, and DP sink 154. Sink device 130 may include any device that is USB Type-C compatible, and may receive DP Alt Mode information from source device 110. Moreover, sink device 130 may transmit DP Alt Mode information to source device 110.

[0016] Auto detection device 140 may comprise any device, circuitry, and/or logic that detects a DP Alt Mode and also detects the orientation of the plug. While illustrated as a separate component from multiplexer 142, auto detection device 140 may be incorporated in multiplexer 142, PD controller 144, and/or any other device, circuitry, and/or logic included in source device 110. In certain embodiments, auto detection device 140 is external to source device 110. Auto detection device 140 may be similar when implemented in source device 110 (e.g., auto detection device 140a) or sink device 130 (e.g., auto detection device 140b).

[0017] Multiplexer 142 may comprise any device, circuitry, and/or logic that selects one of several input signals and multiplexes the selected input to its proper internal port. Multiplexer 142 may help transfer signals received by receptacle 146 and may properly transfer those signals to USB device 150 and/or DP source 152. As explained in further detail below, multiplexer 142 may change the multiplexing of one or more received signals based on the connector plug orientation of connector 120. Multiplexer 142 may be similar when implemented in source device 110 (e.g., multiplexer 142a) or sink device 130 (e.g., multiplexer 142b).

[0018] In particular, multiplexer 142 connects sideband signaling (e.g., the first sideband (SBU1) signal and second sideband (SBU2) signal) to its respective positive auxiliary and/or negative auxiliary ports. For example, multiplexer 142 may connect SBU1 and SBU2 signaling to its respective positive and/or negative auxiliary ports based on whether the device is a source or sink device and based on whether the connector plug orientation is normal or inverted. Specifically, multiplexer 142 may connect SBU1 and SBU2 signaling to its respective positive and/or negative auxiliary ports in the following manner:

Table 1: Transmission Based on Connector Orientation
  Source device, normal orientation:   AUXP on SBU1, AUXN on SBU2
  Source device, inverted orientation: AUXP on SBU2, AUXN on SBU1
  Sink device, normal orientation:     AUXP on SBU2, AUXN on SBU1
  Sink device, inverted orientation:   AUXP on SBU1, AUXN on SBU2

[0019] Directing data, video, and AUX signals to the appropriate Type-C connector pins based on the Type-C connector plug orientation and the location of the Type-C port (a downstream facing port or an upstream facing port) is required to make the Type-C ecosystem work as intended. Conventionally, multiplexer 142 is needed to direct high-speed USB data, video, and auxiliary signaling to the appropriate pins. To properly complete this task, multiplexer 142 must be aware of the presence or absence of a DP Alt Mode and detect the Type-C connector plug orientation.
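To make the mapping in Table 1 concrete, here is a small Python sketch of the routing decision; the dictionary, the string keys, and the function name are hypothetical illustrations, not part of the disclosure.

```python
# Routing helper mirroring Table 1: which SBU pin the multiplexer connects to
# the positive (AUXP) and negative (AUXN) auxiliary ports, given the device
# role and the detected plug orientation.
AUX_ROUTING = {
    # (role, orientation): (pin carrying AUXP, pin carrying AUXN)
    ("source", "normal"):   ("SBU1", "SBU2"),
    ("source", "inverted"): ("SBU2", "SBU1"),
    ("sink",   "normal"):   ("SBU2", "SBU1"),
    ("sink",   "inverted"): ("SBU1", "SBU2"),
}

def route_aux(role, orientation):
    """Return (auxp_pin, auxn_pin) for the multiplexer to connect."""
    return AUX_ROUTING[(role, orientation)]

assert route_aux("source", "normal") == ("SBU1", "SBU2")
assert route_aux("sink", "inverted") == ("SBU1", "SBU2")
```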
[0020] PD controller 144 may implement functionalities defined in the USB PD specification. Source device 110's host controller may manage and control the PD controller for power delivery. The commands may be communicated over a bus interface comprising a data line and a clock line. PD controller 144 may be similar when implemented in source device 110 (e.g., PD controller 144a) or sink device 130 (e.g., PD controller 144b).

[0021] Receptacle 146 may be any type of pinout that transmits and/or receives data, power, and/or video via connector 120. For example, receptacle 146a may transmit DP Alt Mode video information, USB data, and/or power from source device 110 via connector 120. As another example, receptacle 146b may receive DP Alt Mode video information, USB data, and/or power for sink device 130 via connector 120. The pinouts in receptacle 146 are explained in further detail in FIG. 2. Receptacle 146 may be similar when implemented in source device 110 (e.g., receptacle 146a) or sink device 130 (e.g., receptacle 146b).

[0022] USB device 150 may be any type of device, circuitry, and/or logic that is able to transmit and/or receive USB data. USB device 150 may be similar when implemented in source device 110 (e.g., USB device 150a) or sink device 130 (e.g., USB device 150b).

[0023] DP source 152 may be any type of device, circuitry, and/or logic that is able to transmit and/or receive DP Alt Mode data and/or information. DP Alt Mode may leverage the alternate mode function of the USB Type-C interface, and may provide video, SuperSpeed USB, and power all in one connector.

[0024] Connector 120 may be any type of connector that connects source device 110 and sink device 130. In particular, connector 120 plugs into receptacle 146a of source device 110 and receptacle 146b of sink device 130. Connector 120 supports the transfer of data, power, and/or video using the USB Type-C protocol. In particular, connector 120 is able to support alt mode applications. Connector 120 may be reversible in that each end of the connector is able to plug into receptacle 146a of source device 110 and/or receptacle 146b of sink device 130. Moreover, connector 120 is able to be plugged into receptacle 146 in either a normal or inverted orientation.

[0025] As illustrated, source device 110 includes auto detection device 140a, multiplexer 142a, receptacle 146a, power supply 148a, USB device 150a, and DP source 152. Source device 110 may include any device that is USB Type-C compatible, and may receive DP Alt Mode information from sink device 130.

[0026] Likewise, as illustrated, sink device 130 includes auto detection device 140b, multiplexer 142b, receptacle 146b, power supply 148b, USB device 150b, and DP sink 154. Sink device 130 may include any device that is USB Type-C compatible, and may receive DP Alt Mode information from source device 110.

[0027] In an exemplary embodiment, multiplexer 142 may not receive signaling from PD controller 144, and, in particular, multiplexer 142 may not receive signaling from PD controller 144 indicating whether DP Alt Mode is present and/or the connector plug orientation of connector 120.
Instead, auto detection device 140 may indicate to multiplexer 142 whether DP Alt Mode is present and/or the connector plug orientation of connector 120. In certain embodiments, source device 110 and/or sink device 130 may not comprise a PD controller 144. In certain embodiments, source device 110 and/or sink device 130 may comprise a PD controller 144, but PD controller 144 may not indicate to multiplexer 142 whether DP Alt Mode is present and/or the connector plug orientation of connector 120. Moreover, in certain embodiments, auto detection device 140 may be incorporated into multiplexer 142, or auto detection device 140 may be separate from multiplexer 142.

[0028] Auto detection device 140 determines the presence or absence of a DP Alt Mode by detecting a received signal on either the SBU1 pin or SBU2 pin. In certain embodiments, SBU1 and/or SBU2 only support the transmission of DP Alt Mode signals. Specifically, SBU1 and/or SBU2 may only be used as a DP Alt Mode auxiliary signal channel in certain embodiments. Consequently, when auto detection device 140 determines a signal is being received on the SBU1 pin and/or SBU2 pin, auto detection device 140 may infer that a device that supports DP Alt Mode is connected.

[0029] In certain embodiments, auto detection device 140 detects the presence and/or absence of DP Alt Mode by detecting the reception of one or more signals on either the SBU1 pin or SBU2 pin of the device. In certain embodiments, auto detection device 140 may detect a transmitted signal on the positive auxiliary that is connected to either the SBU1 pin or SBU2 pin and/or on the negative auxiliary that is connected to either the SBU1 pin or SBU2 pin.

[0030] Moreover, auto detection device 140 may detect a received signal on either the SBU1 pin or the SBU2 pin by detecting a pull up (e.g., a high voltage) on the SBU1 pin or SBU2 pin. For example, auto detection device 140 in source device 110 may detect a transmitted signal by detecting a high voltage on the SBU1 pin indicating a signal transmission via a pull-up resistor. Similarly, auto detection device 140 in sink device 130 may detect a signal transmission by detecting a high voltage on the SBU2 pin indicating a signal transmission via a pull-up resistor.

[0031] In certain embodiments, auto detection device 140 and/or multiplexer 142 may know whether it is in a source device or sink device via a pin, memory, and/or register in the device.

[0032] In certain embodiments, the auxiliary signal may not be transmitted to auto detection device 140. In this embodiment, auto detection device 140 snoops the USB data line to determine the DisplayPort mode. The USB data line may use low frequency polling signaling before transmitting USB data. For transmitting DP Alt Mode information, however, the USB data line may not communicate low frequency polling signaling. Accordingly, auto detection device 140 may detect a high-speed signal without low frequency periodic signaling and, therefore, determine that DP Alt Mode information is being sent across the USB data line. Because DP Alt Mode information is being transmitted across the USB data line, auto detection device 140 determines that the communication is DP Alt Mode four-lane. DP Alt Mode four-lane can carry four DisplayPort lanes across connector 120. In certain embodiments, auto detection device 140 may detect a low-speed signal and low frequency periodic signaling and, therefore, determine that DP Alt Mode information is not being sent across the USB data line. Because the USB data line is not transmitting DP Alt Mode information, auto detection device 140 determines that the communication is either DP Alt Mode one-lane or DP Alt Mode two-lane. Unlike DP Alt Mode four-lane, DP Alt Mode two-lane can carry two DisplayPort lanes across connector 120 and DP Alt Mode one-lane can only carry one DisplayPort lane across connector 120.
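A minimal sketch of these two detection paths follows, with hypothetical function names and boolean inputs standing in for the analog pull-up sensing and data-line snoop circuitry described above.

```python
def detect_dp_alt_mode(sbu1_high, sbu2_high):
    """Paragraphs [0028]-[0030]: a pull-up (high voltage) on either sideband
    pin implies a DP Alt Mode capable partner, since SBU1/SBU2 only carry
    DP auxiliary signaling in these embodiments."""
    return sbu1_high or sbu2_high

def classify_lane_config(usb_line_high_speed, lfps_present):
    """Paragraph [0032] snoop: high-speed traffic on the USB data line with no
    low frequency periodic signaling means the line itself carries DP Alt
    Mode, so the link is four-lane; otherwise one or two lanes carry DP."""
    if usb_line_high_speed and not lfps_present:
        return "DP 4-lane"
    return "DP 1-lane or 2-lane"

assert detect_dp_alt_mode(sbu1_high=True, sbu2_high=False)
assert classify_lane_config(usb_line_high_speed=True, lfps_present=False) == "DP 4-lane"
```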
[0033] Auto detection device 140 may also determine the connector plug orientation based on the type of signal transmission on SBU1 and/or SBU2. Based on the type of signal transmission on SBU1 and/or SBU2 at the positive auxiliary and/or the negative auxiliary, auto detection device 140 may determine the connector plug orientation. For example, auto detection device 140 in source device 110 may detect that the SBU1 signal is communicated to the positive auxiliary. Accordingly, auto detection device 140 may then be able to detect that connector 120 is connected in the normal orientation. Likewise, auto detection device 140 in source device 110 may detect that the SBU1 signal is communicated to the negative auxiliary. Accordingly, auto detection device 140 may then be able to detect that connector 120 is connected in an inverted orientation. Moreover, as another example, auto detection device 140 in sink device 130 may detect that the SBU1 signal is communicated to the negative auxiliary. Auto detection device 140 may then detect that connector 120 is in a normal orientation. As a final example, auto detection device 140 in sink device 130 may detect that the SBU1 signal is communicated to the positive auxiliary, and, accordingly, auto detection device 140 may then deduce that connector 120 is in an inverted orientation.

[0034] The following table illustrates an exemplary embodiment of the conditions used by auto detection device 140 to determine the orientation of connector 120:

Table 2: Exemplary Logic for Determining Connector Orientation
  Source device, SBU1 signal on positive auxiliary: normal orientation
  Source device, SBU1 signal on negative auxiliary: inverted orientation
  Sink device, SBU1 signal on negative auxiliary:   normal orientation
  Sink device, SBU1 signal on positive auxiliary:   inverted orientation

[0035] In certain embodiments, auto detection device 140 may detect the presence of the DP Alt Mode and/or the orientation of connector 120 based on snooping the auxiliary signal. As discussed earlier, whenever DP Alt Mode is present, the DP Alt Mode signals are communicated over the auxiliary channel. By performing auxiliary channel snooping, the presence of DP Alt Mode is established when a valid auxiliary signal is detected. In certain embodiments, the orientation of connector 120 is determined by analyzing the preamble of the auxiliary signal.

[0036] Moreover, the orientation can be detected through a Manchester decoding scheme on either the positive or negative auxiliary signal. First, auto detection device 140 analyzes a number of clock cycles (e.g., 28 clock cycles) to determine a current state of the auxiliary signal. At this point, a state machine for auto detection device 140 is in Acquire Mode.

[0037] After a number of clock cycles pass, the state machine for auto detection device 140 moves to Sync High State mode. In Sync High State mode, the Manchester logic searches for a Manchester violation in the auxiliary signal. A Manchester violation is a received signal level that is different from what auto detection device 140 anticipates.
For example, if auto detection device 140 is anticipating a high auxiliary signal because auto detection device 140 assumes the connector plug orientation is normal but, instead, receives a low auxiliary signal, then auto detection device 140 detects a Manchester violation. Accordingly, auto detection device 140 knows that connector 120 is inverted and sets an internal inverted orientation flag.

[0038] Auto detection device 140 may also reset its state machine if a Manchester violation occurs. In certain embodiments, during Sync High State mode, the only expected input is an input signal that is between 1 and 5 clock lengths. If a signal is received that is less than 1 clock length or greater than 5 clock lengths, then a Manchester violation has occurred and auto detection device 140 may reset the state machine.

[0039] The state machine may then move to Sync Low State mode. In Sync Low State mode, auto detection device 140 may anticipate a valid low auxiliary signal. However, when the auxiliary signal changes to high again, auto detection device 140 may set an internal inverted orientation flag, and the auxiliary signal inversion will be applied for the remainder of the signal.

[0040] By determining whether the connector plug orientation is normal or inverted using the preamble of the auxiliary signal, the signal's polarity has already been corrected by the time the first data bit in the auxiliary signal arrives.

[0041] Another component that may determine the connector plug orientation is the auxiliary port polarity. The auxiliary port polarity can be recognized based on the polarity of the first packet of the auxiliary signal.

[0042] Moreover, by snooping the auxiliary signal, auto detection device 140 can detect from the auxiliary signal the number of lanes transmitting DP Alt Mode. For example, the auxiliary signal data stream may indicate that the DP Alt Mode signal is being transmitted on two lanes (i.e., DP 2 Lane mode). As another example, the auxiliary signal data stream may indicate that the DP Alt Mode signal is being transmitted on four lanes (i.e., DP 4 Lane mode). In certain exceptional cases, auto detection device 140 may activate DP 2 Lane mode even if the auxiliary signal data stream indicates that only one lane is transmitting DP Alt Mode information.

[0043] Now, given the detection of the orientation of connector 120, the detection of the transmitted DP Alt Mode signal, and knowledge of whether the device is a source device 110 or sink device 130, multiplexer 142 is then able to properly direct the input signals to the proper internal ports. For example, multiplexer 142 may multiplex one or more DP signals based on the determined orientation of the USB Type-C connector plug. As another example, multiplexer 142 may multiplex the received signal at the SBU1 pin and/or SBU2 pin based on the determined orientation of the USB Type-C connector plug. FIG. 3 and FIG. 4 illustrate in further detail the proper multiplexing of received signals by multiplexer 142 based on these variables.

[0044] Multiplexer 142 may multiplex the DisplayPort transmission differently depending on the orientation of USB Type-C connector 120. Similarly, the signal received at the SBU1 pin and/or the SBU2 pin may be multiplexed differently depending on the orientation of the USB Type-C connector 120.
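As a rough illustration of the preamble-based check in paragraphs [0036]-[0039], the following Python sketch flags an inverted plug when the sampled preamble level contradicts the level expected for a normal-orientation plug, and rejects pulses outside the 1-to-5-clock window. It is a simplification of the Manchester state machine described above, and all names are hypothetical.

```python
def detect_orientation(preamble_levels, expected_first_level=1):
    """Return ("normal" | "inverted", corrected_levels) from sampled AUX
    preamble levels (0/1). A level opposite to the one expected for a
    normal-orientation plug is treated as a Manchester violation, which
    sets the inverted-orientation flag and inverts the rest of the signal."""
    inverted = preamble_levels[0] != expected_first_level
    corrected = [1 - b for b in preamble_levels] if inverted else list(preamble_levels)
    return ("inverted" if inverted else "normal"), corrected

def valid_pulse(width_clocks):
    """Sync High State check: only pulses between 1 and 5 clock lengths are
    expected; anything else is a violation that resets the state machine."""
    return 1 <= width_clocks <= 5

orientation, bits = detect_orientation([0, 1, 0, 1])
assert orientation == "inverted" and bits == [1, 0, 1, 0]
assert not valid_pulse(6)
```

Correcting the polarity during the preamble, as in this sketch, is what lets the first real data bit arrive already un-inverted, per paragraph [0040].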
[0045] FIG. 2 illustrates a USB Type-C connector pinout 200. USB Type-C connector pinout 200 may be found in receptacle 146a of source device 110, receptacle 146b of sink device 130, and/or a connector plug of connector 120. In the illustrated embodiment, connector pinout 200 comprises ground pins 210a-d, TX1+/- signal pins 212a-b, RX1+/- signal pins 213a-b, bus power pins 214a-d, configuration channel pins 216a-b, USB data pins 218a-d, sideband pins 220a-b, RX2+/- signal pins 222a-b, and TX2+/- signal pins 223a-b.

[0046] In the normal Type-C connector plug orientation, USB data passes through TX1+/- signal pins 212a-b and RX1+/- signal pins 213a-b, and, in the flipped connector plug orientation, USB data passes through TX2+/- signal pins 223a-b and RX2+/- signal pins 222a-b. When 2 lanes of DP Alt Mode video are also transmitted along with USB data, in the normal Type-C connector plug orientation, DP Alt Mode video is channeled through TX2+/- signal pins 223a-b and RX2+/- signal pins 222a-b, and, in the flipped connector plug orientation, DP Alt Mode video is channeled through TX1+/- signal pins 212a-b and RX1+/- signal pins 213a-b. If there is no USB data, all four differential pair pins (TX1+/-, RX1+/-, TX2+/-, and RX2+/-) can be used to transfer four lanes of DP Alt Mode signals.

[0047] DP Alt Mode video also involves a low-speed Auxiliary (AUX) signal that is transmitted through the sideband pins 220a-b of the Type-C connector. In the illustrated figure, sideband pin 220a represents the SBU1 pin and sideband pin 220b represents the SBU2 pin. The AUX signal is differential, with two single-ended signals of opposite polarity: positive (AUXP) and negative (AUXN). AUXP and AUXN are connected to either SBU1 or SBU2 depending on the Type-C connector plug orientation and whether the Type-C connector is on a downstream facing port (DFP) or an upstream facing port (UFP).

[0048] FIG. 3 illustrates a connection between a USB Type-C source device and a USB Type-C sink device in a normal connector plug orientation. In the illustrated embodiment, receptacle 146a and receptacle 146b share the same pin layout as illustrated in FIG. 2. While illustrated as separate components, source device 110 may comprise auto detection device 140a, multiplexer 142a, and/or PD controller 144a. Similarly, while illustrated as separate components, sink device 130 may comprise auto detection device 140b, multiplexer 142b, and/or PD controller 144b.

[0049] Moreover, by illustration, the connections between receptacle 146a and 146b are represented by alphabetical indicators. Each alphabetical indicator represents a connection between the two pins. For example, RX2 222 in receptacle 146a (as indicated by the letter 'A') is connected to TX2 223 in receptacle 146b (as also indicated by the letter 'A').

[0050] As illustrated, in the normal Type-C connector plug orientation for source device 110, USB data 320a passes through TX1+/- signal pins 212 and RX1+/- signal pins 213 and, when 2 lanes of DP Alt Mode information 310a are transmitted, the 2 lanes of DP Alt Mode information 310a are transmitted along TX2+/- signal pins 223 and RX2+/- signal pins 222. DP Alt Mode video also involves a low-speed AUX signal that gets transmitted through sideband pins 220. In the normal Type-C connector plug orientation, AUXp 330a is connected to SBU1 pin 220a and AUXn 340a is connected to SBU2 pin 220b on source device 110.
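The high-speed lane assignment of paragraph [0046] can be summarized in a short sketch; the function name and the string labels for the differential pairs are hypothetical illustrations of the rule, not part of the disclosure.

```python
def assign_lanes(orientation, usb_present):
    """Return which differential pairs carry USB data and which carry DP
    video for a given plug orientation, per the rule in paragraph [0046]."""
    if not usb_present:
        # DP Alt Mode four-lane: all four pairs carry DisplayPort.
        return {"usb": (), "dp": ("TX1", "RX1", "TX2", "RX2")}
    if orientation == "normal":
        return {"usb": ("TX1", "RX1"), "dp": ("TX2", "RX2")}
    return {"usb": ("TX2", "RX2"), "dp": ("TX1", "RX1")}

assert assign_lanes("normal", usb_present=True) == {"usb": ("TX1", "RX1"), "dp": ("TX2", "RX2")}
assert assign_lanes("inverted", usb_present=False)["dp"] == ("TX1", "RX1", "TX2", "RX2")
```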
In normal Type-C connector plug orientation, AUXp 330a is connected to SBU1 pin 220a and AUXn 340a is connected to SBU2 pin 220b.[0051] As further illustrated, in normal Type-C connector plug orientation for sink device 130, USB data 320b passes through TX1+/- signal pins 212 and RX1+/- signal pins 213 and, when 2 lanes of DP Alt Mode information 310b are transmitted, the 2 lanes of DP Alt Mode information 310b are transmitted along TX2+/- signal pins 223 and RX2+/- signal pins 222. Similar to the DP Alt Mode video transferred on the source device 110 side, DP Alt Mode video also involves a low-speed AUX signal that gets transmitted through sideband pins 220a-b. In normal Type-C connector plug orientation, AUXp 330b is connected to SBU2 pin 220b and AUXn 340b is connected to SBU1 pin 220a on sink device 130.[0052] Given the detection of the orientation of connector 120, the detection of the transmitted DP Alt Mode signal, and knowledge of whether the device is a source device 110 or sink device 130, multiplexer 142 is then able to properly direct the input signals to the proper output ports as illustrated in FIG. 3 for a normal connector plug orientation.[0053] FIG. 4 illustrates a connection between a USB Type-C source device and a USB Type-C sink device in an inverted connector plug orientation. In the illustrated embodiment, receptacle 146a and receptacle 146b share the same pin layout as illustrated in FIG. 2. While illustrated as a separate component, source device 110 may comprise auto detection device 140a, multiplexer 142a, and/or PD controller 144a. Similarly, while illustrated as a separate component, sink device 130 may comprise auto detection device 140b, multiplexer 142b, and/or PD controller 144b.[0054] Moreover, by illustration, the connections between receptacle 146a and 146b are represented by alphabetical indicators. Each alphabetical indicator represents a connection between the two pins. For example, RX2 222a-b in receptacle 146a (as indicated by the letter 'A') is connected to TX2 223a-b in receptacle 146b (as also indicated by the letter 'A').[0055] As illustrated, in an inverted Type-C connector plug orientation for source device 110, USB data 320a passes through TX2+/- signal pins 223a-b and RX2+/- signal pins 222a-b and, when 2 lanes of DP Alt Mode information 310a are transmitted, the 2 lanes of DP Alt Mode information 310a are transmitted along TX1+/- signal pins 212a-b and RX1+/- signal pins 213a-b. DP Alt Mode video also involves a low-speed AUX signal that gets transmitted through sideband pins 220a-b. In the inverted Type-C connector plug orientation, AUXp 330a is connected to SBU2 pin 220b and AUXn 340a is connected to SBU1 pin 220a on source device 110.[0056] As further illustrated, in an inverted Type-C connector plug orientation for sink device 130, USB data 320b passes through TX2+/- signal pins 223a-b and RX2+/- signal pins 222a-b and, when 2 lanes of DP Alt Mode information 310b are transmitted, the 2 lanes of DP Alt Mode information 310b are transmitted along TX1+/- signal pins 212a-b and RX1+/- signal pins 213a-b. Similar to the DP Alt Mode video transferred on the source device 110 side, DP Alt Mode video also involves a low-speed AUX signal that gets transmitted through sideband pins 220a-b. 
In an inverted Type-C connector plug orientation, AUXp 330b is connected to SBU1 pin 220a and AUXn 340b is connected to SBU2 pin 220b on sink device 130.[0057] Given the detection of the orientation of connector 120, the detection of the transmitted DP Alt Mode signal, and knowledge of whether the device is a source device 110 or sink device 130, multiplexer 142 is then able to properly direct the input signals to the proper output ports as illustrated in FIG. 4 for an inverted connector plug orientation.[0058] FIG. 5 illustrates example method 500 for detecting a DisplayPort Alternate Mode transmission and connector plug orientation.[0059] The method may begin at step 510, where a device determines a DisplayPort Alt Mode by detecting a signal received on either the SBU1 pin or the SBU2 pin. In certain embodiments, the SBU1 pin and/or SBU2 pin only support the transmission of DP Alt Mode signals. Specifically, SBU1 and/or SBU2 may only be used as a DP Alt Mode auxiliary signal port in certain embodiments. Consequently, when auto detection device 140 determines that a signal is being received on SBU1 and/or SBU2, auto detection device 140 may infer that a device that supports DP Alt Mode is connected.[0060] In certain embodiments, auto detection device 140 may detect a signal transmission by detecting a high voltage (i.e., a pull up) on the SBU1 pin or SBU2 pin. Alternatively, auto detection device 140 may detect a signal transmission by detecting a pull up on the positive auxiliary and/or negative auxiliary.[0061] In addition, auto detection device 140 then detects a signal transmission on the SBU1 pin or SBU2 pin. Moreover, in certain embodiments, auto detection device 140 and/or multiplexer 142 may know whether the auto detection device 140 and/or multiplexer 142 is in a source device or sink device via a pin, memory, and/or register in the device.[0062] At step 520, auto detection device 140 may also determine the connector plug orientation based on a signal transmission on SBU1 and/or SBU2. Based on the signal transmission on SBU1 at the positive or negative auxiliary and/or SBU2 at the positive or negative auxiliary, auto detection device 140 may determine the connector plug orientation. For example, auto detection device 140 in source device 110 may detect that the SBU1 signal is communicated to the positive auxiliary. Accordingly, auto detection device 140 may then be able to detect that connector 120 is connected in the normal orientation. Likewise, auto detection device 140 in source device 110 may detect that the SBU1 signal is communicated to the negative auxiliary. Accordingly, auto detection device 140 may then be able to detect that connector 120 is connected in an inverted orientation. Moreover, as another example, auto detection device 140 in sink device 130 may detect that the SBU1 signal is communicated to the negative auxiliary. Auto detection device 140 may then detect that the connector 120 is in a normal orientation. As a final example, auto detection device 140 may detect that the SBU1 signal is communicated to the positive auxiliary, and, accordingly, auto detection device 140 may then deduce that connector 120 is in an inverted orientation.[0063] At step 530, multiplexer 142 multiplexes a DP signal based in part on the determined orientation of the USB Type-C connector plug. 
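By way of illustration only, the orientation decision of steps 510-530 can be sketched in C as follows. The names used here (AuxPolarity, plug_orientation, and so on) are hypothetical and do not appear in the specification; the truth table simply restates the SBU1-to-auxiliary mappings described above for source and sink devices.

#include <stdbool.h>

/* Hypothetical types; the mapping restates the examples of step 520. */
typedef enum { AUX_POSITIVE, AUX_NEGATIVE } AuxPolarity;
typedef enum { ORIENT_NORMAL, ORIENT_INVERTED } Orientation;

/* Step 510: a pull up on SBU1 and/or SBU2 implies a DP Alt Mode device. */
static bool dp_alt_mode_detected(bool sbu1_pulled_up, bool sbu2_pulled_up)
{
    return sbu1_pulled_up || sbu2_pulled_up;
}

/* Step 520: orientation follows from which auxiliary line SBU1 reaches.
 * Source device: SBU1 -> AUXP means normal; SBU1 -> AUXN means inverted.
 * Sink device:   SBU1 -> AUXN means normal; SBU1 -> AUXP means inverted. */
static Orientation plug_orientation(bool is_source, AuxPolarity sbu1_routed_to)
{
    if (is_source)
        return (sbu1_routed_to == AUX_POSITIVE) ? ORIENT_NORMAL
                                                : ORIENT_INVERTED;
    return (sbu1_routed_to == AUX_NEGATIVE) ? ORIENT_NORMAL
                                            : ORIENT_INVERTED;
}

The multiplexing of steps 530 and 540 would then select between the TX1/RX1 and TX2/RX2 pin pairs based on the returned orientation.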
At step 540, multiplexer 142 multiplexes the SBU1 signal and/or the SBU2 signal based in part on the determined orientation of the USB Type-C connector plug.[0064] In particular embodiments, one or more computer systems perform one or more steps of one or more methods described or illustrated herein. In particular embodiments, one or more computer systems provide functionality described or illustrated herein. In particular embodiments, software stored on one or more non-transitory storage media and running on one or more computer systems performs one or more steps of one or more methods described or illustrated herein or provides functionality described or illustrated herein. Particular embodiments include one or more portions of one or more computer systems. Herein, reference to a computer system may encompass a computing device, and vice versa, where appropriate. Moreover, reference to a computer system may encompass one or more computer systems, where appropriate. |
A computing system includes a first security central processing unit (SCPU) of a system-on-a-chip (SOC), the first SCPU configured to execute functions of a first security level. The computing system also includes a second SCPU of the SOC coupled with the first SCPU and coupled with a host processor, the second SCPU configured to execute functions of a second security level less secure than the first security level, and the second SCPU executing functions not executed by the first SCPU. |
1. A computing system comprising: a first secure central processing unit (SCPU) of a system on a chip (SOC), the first secure central processing unit being configured to perform functions of a first security level; and a second secure central processing unit of the system-on-chip coupled to the first secure central processing unit and coupled to a main processor, the second secure central processing unit being configured to perform functions of a second security level less secure than the first security level, and the second secure central processing unit performing functions that are not performed by the first secure central processing unit.
2. The computing system of claim 1, wherein said first secure central processing unit is programmed by a system-on-a-chip vendor and said second secure central processing unit is programmable by an end user of said system-on-a-chip.
3. The computing system of claim 1, further comprising a dedicated secure communication bus connecting said first secure central processing unit and said second secure central processing unit, said secure communication bus being inaccessible to said main processor and to third party clients of said system on chip.
4. The computing system of claim 1, further comprising a shared register bus coupled to said second secure central processing unit and accessible to clients of said chip.
5. The computing system of claim 1, wherein the functions of the first security level comprise managing a root key, performing a first secure boot, and routing a third party content provider's secrets.
6. The computing system of claim 1, wherein the functions of the second security level include digital rights management, license management, transcoder management, watermarking, and processing of data in secure memory.
7. The computing system of claim 1, wherein said second secure central processing unit is provided with a trust level higher than that of said main processor, and said main processor is not allowed to perform the functions of said second security level.
8. The computing system of claim 1, wherein said first secure central processing unit is further configured with software code that preferentially processes commands from said second secure central processing unit and generates a plurality of unique commands executed by said second secure central processing unit, the plurality of unique commands not being executable by the main processor.
9. A system-on-a-chip (SOC) comprising: a main processor; a plurality of sensors for tracking on-chip conditions to detect possible intrusions by an attacker; a highly secure first secure central processing unit (SCPU); and an intermediate-level second secure central processing unit coupled to the first secure central processing unit, the second secure central processing unit including an interrupt controller configured to monitor the on-chip conditions based on data from the sensors and to generate an interrupt or connection, in response to a detected condition indicative of an intrusion, that adjusts the operation of the main processor or the second secure central processing unit in real time to ensure secure system operation.
10. A system on a chip (SOC) comprising: a first secure central processing unit (SCPU) configured to perform functions of a first security level; a second secure central processing unit of the system-on-chip coupled to the first secure central processing unit and coupled to a main processor, the second secure central processing unit configured to perform functions of a second security level less secure than the first security level; and a local checker coupled to the first secure central processing unit and configured to make peripheral devices of the second secure central processing unit inaccessible to third party clients of the system on chip that have access to the main processor. |
Multiple Secure CPU System
Cross-Reference to Related Applications
The present application claims priority to U.S. Patent Application No. 13/705,991, filed on Dec. 5, 2012, and U.S. Patent Application Serial No. 61/684,479, filed on Aug., both of which are hereby incorporated by reference in their entirety.
Technical Field
The present disclosure relates to system security performed by a secure central processing unit (SCPU), and more particularly to security functions that are performed at different security levels by multiple SCPUs in the system.
Background
The rapid development of electronic and communication technologies driven by consumer demand has led to the widespread adoption of data-driven devices that process and convert third-party media content. Third party consumers or customers want their content to be handled securely so that it cannot be copied or used outside of certain privilege levels. Systems that digitally send content from a multimedia provider to a consumer seek to include a higher level of security, thereby preventing competing vendors from accessing each other's secrets. In a large system on a chip (SOC), a single secure central processing unit (SCPU) can perform security functions.
Summary of the Invention
According to an aspect of the present invention, there is provided a computing system comprising: a first secure central processing unit (SCPU) of a system on a chip (SOC) configured to perform functions of a first security level; and a second SCPU coupled to the first SCPU and coupled to a main processor, the second SCPU configured to perform functions of a second security level lower than the first security level, the second SCPU performing functions that are not performed by the first SCPU.
The first SCPU may be programmed by the SOC provider, and the second SCPU may be programmable by an end user of the SOC.
The computing system may further include a dedicated secure communication bus connecting the first SCPU and the second SCPU, the main processor and third party clients present on the SOC having no access to the secure communication bus.
The computing system may further include a shared register bus coupled to the second SCPU and accessible to clients of the chip.
The first security level functions may include managing a root key, performing a first secure boot, and routing a third party content provider's secrets.
The second security level functions may include digital rights management, license management, transcoder management, watermarking, and data processing in secure memory.
The second SCPU may be provided with a trust level higher than that of the main processor, the main processor not being allowed to perform the second security level functions.
The first SCPU may be further configured with software code that preferentially processes commands from the second SCPU and generates a plurality of unique commands executed by the second SCPU, the unique commands not being executable by the main processor.
The computing system may further include a secure memory, wherein the second SCPU includes an instruction checker configured to determine whether an instruction invoked for execution by the second SCPU is located within an area of the secure memory.
The instruction checker may cause the computing system to reset in response to a detected attempt by the second SCPU to execute an instruction outside the region of the secure memory.
The second SCPU may be configured to acquire a secure time from an internet server using a secure protocol and to store the secure time.
In accordance with another aspect of the present invention, a system on a chip (SOC) is provided, comprising: a main processor; a plurality of sensors for tracking on-chip conditions to detect possible intrusions by an attacker; a highly secure first secure central processing unit (SCPU); and an intermediate-level second SCPU coupled to the first SCPU, the second SCPU including an interrupt controller configured to monitor the on-chip conditions based on data from the sensors and to generate an interrupt or connection in response to a detected condition indicative of an intrusion, the interrupt or connection adjusting the operation of the main processor or the second SCPU in real time to ensure secure system operation.
The system on chip may further include a dedicated secure communication bus connecting the first SCPU and the second SCPU, the main processor and third party clients programmed on the SOC being unable to access the secure communication bus.
The system on chip may further include a secure memory, wherein the second SCPU includes an instruction checker configured to determine whether an instruction invoked from the secure memory is approved for execution by the component that invoked it.
The system on chip may further include separate memory buffers for the main processor and the second SCPU, the memory buffer for the second SCPU being configured to be inaccessible by the main processor and to provide control logic to the second SCPU.
According to still another aspect of the present invention, a system on chip is provided, comprising: a first secure central processing unit (SCPU) configured to perform functions of a first security level; a second SCPU of the SOC coupled to the first SCPU and coupled to a main processor, the second SCPU configured to perform functions of a second security level lower than the first security level; and a local checker coupled to the first SCPU and configured such that peripheral devices of the second SCPU are inaccessible to third party clients of the SOC that have access to the main processor.
The local checker may be further configured to prevent the third party clients from accessing a particular area of a dynamic random access memory (DRAM) of the SOC.
The local checker may be further configured by the first SCPU to block access by the second SCPU to a shared register bus of the SOC.
The peripheral devices may comprise a universal asynchronous receiver/transmitter (UART), timers, interrupts, memory, and data storage.
The system on chip may further include a dedicated secure communication bus connecting the first SCPU and the second SCPU, the main processor and third party clients programmed on the SOC having no access to the secure communication bus.
Drawings
The system and method can be better understood with reference to the following drawings and description. In the figures, like reference numerals indicate corresponding parts in the different drawings.
FIG. 1 is a block diagram of an exemplary multi-secure central processing unit (CPU) system on a chip in an operating environment.
FIG. 2 is a more detailed block diagram of the exemplary multi-secure central processing unit (CPU) on-chip system disclosed in FIG. 1.
FIG. 3 is a flow diagram of an exemplary method for implementing the multi-secure CPU system-on-chip of FIG. 1.
FIG. 4 is a flow diagram of an exemplary method for implementing intrusion detection within the multi-secure CPU on-chip system of FIG. 1.
Detailed Description
The following discussion relates to system security performed by a secure central processing unit (SCPU), and more particularly to security functions performed in a system by multiple SCPUs operating at different levels of security. For the purposes of this description, two SCPUs are described, but more SCPUs can be implemented. The SCPUs can be used, for example, on a system on a chip (SOC), such as in a set top box (STB) that can be used to stream media to a consumer. Such media may include audio/video content that is visible to the consumer on the media device.
In a large system-on-a-chip with a single secure central processing unit (SCPU) that performs security functions, the system trust level can be binary: an operation is either highly secure or completely untrusted. For operations that require a medium security level, there are two options: (1) perform these operations in the SCPU; or (2) perform these operations in the host.
The first option may not be ideal because the SCPU can be responsible for highly sensitive tasks like managing one-time password (OTP) authentication, sending consumer secrets, and so on. Mixing these highly sensitive tasks with lower security features poses a risk and detracts from the main mission of the SCPU. The second option may not be ideal because performing a medium security task on the host makes the SOC insecure, the host CPU being untrusted.
Having the host perform security functions on the chip exposes those functions too broadly to provide sufficient security for chip operations. In addition, the combination of high security and lower security functions, as performed by a single SCPU, may expose system security to a certain level of risk and detract from the primary mission of the SCPU, which is to protect the most sensitive functions of the chip operation.
In addition, since the SCPU manages the ownership information of the SOC vendor, it is problematic to allow the end user to program the SCPU. However, some intermediate level security tasks are preferably performed by end user code, so the SOC vendor wishes to allow the consumer to program some aspects of the SCPU function. Allowing the end user to program the functions of the single on-chip SCPU exposes the chip's secure operation to additional risks and attacks.
In a SOC, having a single SCPU may therefore not be enough. A multi-secure CPU approach is used, such as having a first SCPU dedicated to high security functions and a second SCPU for lower security tasks such as digital rights management (DRM), transcoder management, watermarking, and the like. For purposes of explanation, in this document, the first SCPU is labeled as secure CPU-A and the second security level SCPU is labeled as secure CPU-B.
FIG. 1 is an example diagram of a multiple secure central processing unit (SCPU) system on chip (SOC) 100 in an operating environment. The SOC 100 can be integrated into or coupled to the media device 10 when configured to operate. FIG. 2 is a more detailed block diagram of the exemplary multi-secure central processing unit (CPU) on-chip system disclosed in FIG. 1.
Referring to FIGS. 1 and 2, system 100 can include a first SCPU 102 (also referred to as secure CPU-A), a second SCPU 104 (also referred to as secure CPU-B), and a main processor 110 that performs general processing of most chip operations. The secure CPU-A may be smaller than the secure CPU-B and configured to operate at a first security level that is higher than the second security level at which the secure CPU-B operates. 
The secure CPU-B may be set to a higher trust level than the main processor 110, and the main processor may be denied permission to perform the second security level functions. Main processor 110 can also be located at least partially outside of SOC 100.
For example, the functions of the secure CPU-A at the first security level may include managing a root key, performing a first secure boot, and transmitting a third party content provider's secrets. For example, the functions of the secure CPU-B at the second security level include: digital rights management, license management, transcoder management, watermarking, and data processing in secure memory. The secure CPU-A may be configured with software code that preferentially processes commands from the secure CPU-B and generates a plurality of unique commands executed by the second SCPU, the commands not being executable by the main processor 110.
Because it is configured to perform most of the processor-intensive security functions, in some implementations the secure CPU-B can be as powerful as the main processor 110, for example performing up to 1,000 or 1,500 or more Dhrystone millions of instructions per second (DMIPS). The secure CPU-B therefore focuses on the lower security functions. The secure CPU-A requires a fraction of that power and can operate at less than 1,000 DMIPS.
System 100 can further include an on-chip sensor 113, a memory such as dynamic random access memory (DRAM) 115, and a local checker 118 coupled to a plurality of peripheral devices 120 of secure CPU-A and secure CPU-B. "Coupled to" herein means directly connected to a component or indirectly connected through one or more components. The DRAM 115 may include a protected code portion 116 stored in the secure memory 117. The secure memory 117 can be a specific or determined area into which the DRAM is divided.
The secure CPU-A may include a host interface 122 in communication with the main processor 110 and an SCPU-B interface 124 in communication with the secure CPU-B. The secure CPU-B may include a CPU-B 130, a local static random access memory (SRAM) 132, an instruction cache (i-cache) 134, a data cache (d-cache) 136, an instruction checker 138, and an interrupt controller 140. The local SRAM 132 can be a dedicated local memory accessed by the secure CPU-B, in which instructions and temporarily stored data can be saved but are not accessible by the host processor 110 or other on-chip hosts or clients.
The secure CPU-B and the secure CPU-A can be coupled to a dedicated secure communication bus 142 that operates as a dedicated channel between the CPU-B 130 and the secure CPU-A. The main processor and those third party users present on the SOC 100 may not be able to access the secure communication bus 142. The secure communication bus 142 can be configured and executed in a master-slave relationship by a combination of hardware and firmware, wherein in some operations the secure CPU-A is either the master or the slave of the secure CPU-B. For example, the secure CPU-A may be the master device that securely boots the secure CPU-B from the memory. However, the secure CPU-A can also receive commands, for example from the secure CPU-B or the local checker 118.
A third party present on the SOC may have its own CPU, its own logic block, or a combination of hardware and software with the ability to access the SOC 100 on the chip. 
The third party CPU may include a secure interface managed by the secure CPU-A.
System 100 can further include a shared register bus 144 that is accessed by third party users present on SOC 100. The shared register bus 144 can be used to write to the registers of the memory 115. As disclosed herein, the secure CPU-A can be configured to prevent certain on-chip customers from intentionally stopping the operation of the secure CPU-B.
The local checker 118, which may be coupled to the secure CPU-A and the secure CPU-B, may be hardware configured to prevent some users or hardware present on the SOC 100 from accessing certain areas of the DRAM. Similarly, the local checker 118 can prevent the secure CPU-B from accessing the shared register bus 144 and/or from reading the DRAM of the SOC 100 or writing to the DRAM of the SOC 100.
The secure CPU-A can also program the local checker 118 to ensure that a third party entity that has access to the host processor 110 cannot access the internal peripheral devices 120 of the secure CPU-B. The peripheral devices may include, but are not limited to, a universal asynchronous receiver/transmitter (UART), timers, interrupts, memory, data storage, media devices, or combinations thereof.
Instruction checker 138 can supervise instructions that execute out of the DRAM and determine whether an instruction invoked from secure memory by a component is approved for execution by that component. To approve an instruction for execution, the instruction checker 138 can ensure that the secure CPU-B is not operating outside of the secure memory 117, which has been authenticated for secure operation or is conditionally accessed by the host processor 110. For example, the instruction checker can monitor the read and write operations of the DRAM 115 and compare the DRAM address of each memory access with the address range set by the secure CPU-A as a pre-authenticated region for executing instructions. If the secure CPU-B attempts to execute an instruction outside the pre-authenticated storage area, the instruction checker 138 can restart the secure CPU-B or reset the entire SOC.
In one example, the content saved to secure memory 117 may contain media content that the customer does not wish to be released in an unauthorized manner. The secure CPU-B can decrypt the content for viewing on the consumer device, but does not allow other peripheral devices to access or publish the content outside of the system 100. The secure CPU-B ensures that consumers can view the content without the content being directly accessible through the host. The secure CPU-A and CPU-B can set up hardware that restricts access to the secure memory 117 by certain chip components. For example, the secure CPU-A and the secure CPU-B may make that area of the memory off-limits to the main processor. In addition, the secure CPU-B can perform watermarking or timestamp manipulation on content that the consumer can view.
More specifically, the secure memory 117 can be accessed only by the secure CPU-A and CPU-B, as well as the local decompression and rendering engine. Thus, the secure CPU-B can decrypt the content into the storage area, and then the local display process can read the decrypted content for local rendering. None of these steps requires host processor 110 to access the secure memory. The secure CPU-B can secure the data stream by decrypting the content into the restricted area without the content reaching the main processor.
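By way of illustration only, the address-range rule enforced by instruction checker 138 can be sketched in C as follows. The structure and function names (InstructionChecker, check_fetch, and so on) are hypothetical stand-ins for hardware behavior; the sketch merely restates the compare-and-reset rule described above.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical pre-authenticated execution window programmed by secure CPU-A. */
typedef struct {
    uint64_t pre_auth_lo;   /* lowest DRAM address approved for execution  */
    uint64_t pre_auth_hi;   /* highest DRAM address approved for execution */
} InstructionChecker;

/* Stand-ins for the hardware recovery actions. */
void scpu_b_restart(void);
void soc_reset(void);

/* Returns true if the instruction fetch is allowed; otherwise triggers
 * recovery, mirroring the restart-or-reset behavior described above. */
static bool check_fetch(const InstructionChecker *ic, uint64_t fetch_addr,
                        bool reset_whole_soc)
{
    if (fetch_addr >= ic->pre_auth_lo && fetch_addr <= ic->pre_auth_hi)
        return true;        /* instruction lies within secure memory 117 */
    if (reset_whole_soc)
        soc_reset();        /* reset the entire SOC */
    else
        scpu_b_restart();   /* or restart only secure CPU-B */
    return false;
}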
Interrupt controller 140 can be coupled to the secure CPU-A and configured to detect on-chip conditions based on data generated by sensor 113. For example, on-chip sensor 113 can generate data related to the properties of the chip, such as temperature, voltage level, and clock speed at particular points on the chip. If one of these attributes changes too much or changes in the wrong way, it may indicate a potential intrusion or hacking attempt to compromise the SOC and gain access to secure data and/or secure operations. Interrupt controller 140 may aggregate and mask interrupts from other functional modules of SOC 100, which may include interrogating the sensors against preset thresholds that are used to determine whether interrupt controller 140 should mask the interrupt from a given sensor.
Further, the interrupt controller 140 may generate an interrupt or a connection in response to a detected condition indicating an intrusion. The interrupt or connection can adjust the operation of the main processor 110 or the secure CPU-B in real time to ensure secure system operation. The main processor may also have a memory buffer separate from the memory buffer used by the secure CPU-B, wherein the memory buffer of the secure CPU-B may be configured to be inaccessible by the main processor 110 and to provide control logic to the secure CPU-B.
With further reference to FIG. 1, system 100 can communicate with network server 20 and media client 30 over a network 15, such as the Internet or any wide area or local area network. Client 30 may be a consumer who obtains the SOC 100 and uses it to send media content to consumer media device 10. The secure CPU-B can obtain the time and date from the web server 20 via the Internet using a secure protocol. The time and date can be treated as a secure time and stored by the secure CPU-B in the local SRAM 132 or in the secure memory 117 in DRAM 115. In this way, the secure CPU-B prevents the host processor or other on-chip programming components from accessing the secure time, which can be used for digital rights management and other lower level security functions.
FIG. 3 is an exemplary flow diagram of a method for implementing the multi-secure CPU system of FIG. 1. The system on a chip (SOC) may include a first secure central processing unit (SCPU) and a second SCPU (202, 210). The SOC can operate the first SCPU at a first security level (206). The SOC may operate the second SCPU at a second security level that is less secure than the first security level but more secure than the level at which the host processor operates (214). The first SCPU can generate commands that only the second SCPU can execute (218).
The second SCPU can determine whether an instruction invoked from secure memory is approved for execution by the component of the SOC that invoked the instruction (222). If the component is not approved to execute the instruction, neither the first SCPU nor the second SCPU is allowed to execute the instruction (226). If the component is approved to execute the instruction, the second SCPU can determine whether the instruction is to perform a first security level or a second security level function (230). If the instruction is to perform a first security level function, the first SCPU executes the instruction (234). If the instruction is to perform a second security level function, the second SCPU executes the instruction (238). 
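A minimal sketch of the dispatch of blocks 222-238 follows; the helper functions (is_component_approved, security_level_of, and the two execute routines) are assumptions standing in for the checks described above, not part of the specification.

#include <stdbool.h>

typedef enum { LEVEL_FIRST, LEVEL_SECOND } SecurityLevel;
typedef struct Instruction Instruction;

bool is_component_approved(const Instruction *insn);      /* block 222 */
SecurityLevel security_level_of(const Instruction *insn); /* block 230 */
void scpu_a_execute(const Instruction *insn);             /* block 234 */
void scpu_b_execute(const Instruction *insn);             /* block 238 */

/* Dispatch an instruction invoked from secure memory (blocks 222-238). */
static void dispatch(const Instruction *insn)
{
    if (!is_component_approved(insn))
        return;  /* block 226: neither SCPU may execute the instruction */
    if (security_level_of(insn) == LEVEL_FIRST)
        scpu_a_execute(insn);  /* first security level function */
    else
        scpu_b_execute(insn);  /* second security level function */
}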
Depending on the function, the first SCPU may also be the requesting component of a first or second security level function, and the second SCPU may likewise be the requesting component of a first or second security level function.
FIG. 4 is a flow diagram of an exemplary method for implementing intrusion detection within the multi-secure CPU on-chip system of FIG. 1. A system on a chip (SOC) is operable to operate a first secure central processing unit (SCPU) and a second SCPU (302). The second SCPU is operable to operate the interrupt controller (306). The interrupt controller can monitor the on-chip conditions of the SOC by monitoring data received from the on-chip sensors (310). On-chip conditions may include, for example, temperature, voltage level, and clock speed at particular points on the chip. The second SCPU may generate an interrupt or a connection in response to a detected condition indicating an intrusion (314). In response to the interrupt or connection, the second SCPU can adjust the operation of the main processor or the second SCPU to ensure secure system operation (318).
The above methods, devices, and logic may be implemented in many different combinations of hardware, software, or both hardware and software. For example, all or part of the system may be implemented in a controller, a microprocessor, or an application specific integrated circuit (ASIC), or may be implemented with separate logic or components, or with a combination of analog or digital circuits combined on a single integrated circuit or distributed among multiple integrated circuits.
While various embodiments of the invention have been described, many more embodiments and implementations are possible. Therefore, the invention is not to be limited except by the appended claims and their equivalents. |
Systems, apparatuses and methods may provide for technology that detects one or more local variables in source code, wherein the local variable(s) lack dependencies across iterations of a loop in the source code, automatically generate pipeline execution code for the local variable(s), and incorporate the pipeline execution code into an output of a compiler. In one example, the pipeline execution code includes an initialization of a pool of buffer storage for the local variable(s). |
A semiconductor apparatus comprising:one or more substrates; andlogic coupled to the one or more substrates, wherein the logic is implemented at least partly in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates to:detect one or more local variables in source code, wherein the one or more local variables lack dependencies across iterations of a loop in the source code;automatically generate pipeline execution code for the one or more local variables; andincorporate the pipeline execution code into an output of a compiler.The semiconductor apparatus of claim 1, wherein the pipeline execution code is to include an initialization of a pool of buffer storage for the one or more local variables.The semiconductor apparatus of claim 2, wherein the initialized pool of buffer storage is to be greater than a local storage amount corresponding to a single iteration of the loop.The semiconductor apparatus of claim 2 or 3, wherein the pipeline execution code is to further include a definition of a plurality of tokenized slots in the initialized pool of buffer storage, and wherein each tokenized slot is to correspond to a pipelined iteration of the loop.The semiconductor apparatus of any of the preceding claims, wherein the pipeline execution code is to include a pipeline depth definition.The semiconductor apparatus of any of the preceding claims, wherein the one or more local variables are to be detected after a registerization of the source code, automatic generation of the pipeline execution code is to be in response to detection of the one or more local variables, and the source code is to be associated with a communication channel in a dataflow graph.The semiconductor apparatus of any of the preceding claims, wherein the logic coupled to the one or more substrates includes transistor channel regions that are positioned within the one or more substrates.At least one computer readable storage medium comprising a set of instructions, which when executed by a computing system, cause the computing system to:detect one or more local variables in source code, wherein the one or more local variables lack dependencies across iterations of a loop in the source code;automatically generate pipeline execution code for the one or more local variables; andincorporate the pipeline execution code into an output of a compiler.The at least one computer readable storage medium of claim 8, wherein the pipeline execution code is to include an initialization of a pool of buffer storage for the one or more local variables.The at least one computer readable storage medium of claim 9, wherein the initialized pool of buffer storage is to be greater than a local storage amount corresponding to a single iteration of the loop.The at least one computer readable storage medium of claim 9 or 10, wherein the pipeline execution code is to further include a definition of a plurality of tokenized slots in the initialized pool of buffer storage, and wherein each tokenized slot is to correspond to a pipelined iteration of the loop.The at least one computer readable storage medium of any of claims 8 to 11, wherein the pipeline execution code is to include a pipeline depth definition.The at least one computer readable storage medium of any one of claims 8 to 12, wherein the one or more local variables are to be detected after a registerization of the source code, automatic generation of the pipeline execution code is to be in response to detection of the one or more 
local variables, and the source code is to be associated with a communication channel in a dataflow graph.A method comprising:detecting one or more local variables in source code, wherein the one or more local variables lack dependencies across iterations of a loop in the source code;automatically generating pipeline execution code for the one or more local variables; andincorporating the pipeline execution code into an output of a compiler.The method of claim 14, wherein the pipeline execution code includes an initialization of a pool of buffer storage for the one or more local variables. |
COPYRIGHT NOTICEA portion of the disclosure of this patent document contains material which is subject to (copyright or mask work) protection. The (copyright or mask work) owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all (copyright or mask work) rights whatsoever.TECHNICAL FIELDEmbodiments generally relate to compilers. More particularly, embodiments relate to automatic compiler dataflow optimizations to enable pipelining of loops with local storage requirements.BACKGROUNDDataflow graphs may be used to model computer source code in terms of the dependencies between individual operations performed by the code. A compiler may transform the source code into the dataflow graph, which is typically executed by accelerator hardware such as a field programmable gate array (FPGA), configurable spatial accelerator (CSA), or other dataflow architecture. While the accelerator hardware may be useful when dealing with high performance computing (HPC) and/or data center applications that operate on relatively large data arrays and structures, there remains considerable room for improvement. For example, if the operations of the source code involve the execution of loops that internally declare "private" variables for large data arrays, the ability to hold (e.g., "registerize") the underlying data in the internal channels (e.g., communication arcs, buffers, latency insensitive channels/LICs, etc.) of the accelerator may be limited. As a result, the private variables may be treated as purely memory-based variables, which may cause performance losses.BRIEF DESCRIPTION OF THE DRAWINGSThe various advantages of the embodiments will become apparent to one skilled in the art by reading the following specification and appended claims, and by referencing the following drawings, in which:FIG. 1 is a block diagram of an example of a compiler output according to an embodiment;FIG. 2A is a source code listing of an example of a loop with fixed-size local storage according to an embodiment;FIG. 2B is a source code listing of an example of a loop with runtime-varying local storage according to an embodiment;FIG. 2C is a source code listing of an example of a loop with an explicitly designated private variable according to an embodiment;FIG. 2D is a source code listing of an example of a loop with a dynamically allocated local variable according to an embodiment;FIG. 3 is a block diagram of an example of a communication arc in a dataflow graph according to an embodiment;FIG. 4 is a flowchart of an example of a method of operating a compiler according to an embodiment;FIG. 5 is a block diagram of an example of a compiler according to an embodiment;FIG. 6 is a block diagram of an example of a performance-enhanced computing system according to an embodiment;FIG. 7 is an illustration of an example of a semiconductor apparatus according to an embodiment;FIG. 8 is a block diagram of an example of a processor according to an embodiment; andFIG. 9 is a block diagram of an example of a multi-processor based computing system according to an embodiment.DESCRIPTION OF EMBODIMENTSTurning now to FIG. 1 , a compiler 20 is shown, where the compiler 20 automatically transforms source code 22 into an output 24 that is executable by a dataflow architecture such as, for example, an FPGA, CSA, and so forth. 
In an embodiment, the source code 22 is written in a high-level language such as, for example, C, C++, or Fortran augmented by parallel annotations (e.g., OpenMP parallel pragmas) to achieve runtime parallelism in the dataflow architecture. The source code 22 may generally use loops to perform various operations. Indeed, the runtime performance of applications may be dominated by the time spent in executing loops to perform tasks. On a dataflow architecture such as CSA, the performance of parallel loops may be accelerated by a) creating multiple copies of the loop bodies (e.g., "workers"), b) executing the workers in parallel, and c) pipelining execution of the workers.In the illustrated example, the source code 22 contains one or more local variables 26 (e.g., private variables), which lack dependencies across iterations of the loops in the source code 22. As will be discussed in greater detail, such a variable might occur naturally when declared inside a loop. In an embodiment, the local variable(s) 26 are occasionally used for relatively large data arrays. To improve the throughput of the loops containing the local variable(s) 26 in such a case, the illustrated compiler 20 generates pipeline execution code 28 for the local variable(s) 26 and incorporates the pipeline execution code 28 into the output 24 of the compiler 20. Thus, the illustrated local variables are allocated in a way that each loop iteration gets its own copy, thereby permitting pipelined execution. As already noted, pipelining execution of the workers may significantly enhance performance.FIG. 2A shows source code 30 containing a loop (e.g., "for (int i=0; i<n; i++") that declares a variable "b", which may be considered a local variable because it lacks dependencies across iterations of the loop. In the illustrated example, the variable has a fixed size (e.g., an array of 100 integers). Thus, the local storage requirements of the variable b are fixed and statically known to the compiler. The illustrated source code 30 may be readily substituted for the source code 22 ( FIG. 1 ), already discussed. Accordingly, pipeline execution code may be automatically generated for the illustrated local variable.FIG. 2B shows source code 32 containing a loop (e.g., "for (int i = ibegin; i < iend; i++)") that declares a variable "spa", which also lacks dependencies across iterations of the loop and is considered a local variable. In the illustrated example, the size of the variable varies and is only known at runtime. The illustrated source code 32 may be readily substituted for the source code 22 ( FIG. 1 ), already discussed. Accordingly, pipeline execution code may be automatically generated for the illustrated local variable.FIG. 2C shows source code 34 containing a loop (e.g., "for (int j=x; j<y; j++)") that uses a variable "b", where the variable b is explicitly designated as a private variable (e.g., using the "private" clause). Other explicit clauses such as "firstprivate", "lastprivate", "reduction", etc., may also be used. In the illustrated example, the variable has a fixed size (e.g., an array of 100 integers). Thus, the local storage requirements of the variable b are fixed and statically known to the compiler. The illustrated source code 34 may be readily substituted for the source code 22 ( FIG. 1 ), already discussed. Accordingly, pipeline execution code may be automatically generated for the illustrated local variable.FIG. 
2D shows source code 36 containing a loop (e.g., "for (int i=0; i<n; i++)") that dynamically allocates memory for a variable "b" from within the loop. In the illustrated example, the variable is a local variable that lacks dependencies across iterations of the loop and the size of the variable may either remain constant or vary. The illustrated source code 36 may be readily substituted for the source code 22 ( FIG. 1 ), already discussed. Accordingly, pipeline execution code may be automatically generated for the illustrated local variable.Turning now to FIG. 3 , a communication arc 40 (e.g., LIC) between a first functional unit 42 (e.g., node) in a dataflow graph and a second functional unit 44 in the dataflow graph is shown. In the illustrated example, the functional units 42 and 44 are used to perform operations in a loop on data associated with local variables. In an embodiment, the communication arc 40 includes buffer storage (not shown) such as, for example, one or more line buffers, FIFO (first in first out) buffers, etc., which may be used to hold values that enable apportioning data associated with local variables in the loop to different loop iterations.FIG. 4 shows a method 50 of operating a compiler. The method 50 may generally be implemented in a compiler such as, for example, the compiler 20 ( FIG. 1 ), already discussed. More particularly, the method 50 may be implemented in one or more modules as a set of logic instructions stored in a machine- or computer-readable storage medium such as random access memory (RAM), read only memory (ROM), programmable ROM (PROM), firmware, flash memory, etc., in configurable logic such as, for example, programmable logic arrays (PLAs), FPGAs, complex programmable logic devices (CPLDs), in fixed-functionality logic hardware using circuit technology such as, for example, application specific integrated circuit (ASIC), complementary metal oxide semiconductor (CMOS) or transistor-transistor logic (TTL) technology, or any combination thereof.For example, computer program code to carry out operations shown in the method 50 may be written in any combination of one or more programming languages, including an object oriented programming language such as JAVA, SMALLTALK, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. Additionally, logic instructions might include assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, state-setting data, configuration data for integrated circuitry, state information that personalizes electronic circuitry and/or other structural components that are native to hardware (e.g., host processor, central processing unit/CPU, microcontroller, etc.).Illustrated processing block 52 provides for detecting one or more local variables in source code, wherein the local variable(s) lack dependencies across iterations of a loop in the source code. The source code may be associated with a communication channel as, for example, the communication arc 40 ( FIG. 3 ) in a dataflow graph. In an embodiment, block 52 includes automatically parsing and/or searching the source code for loops with fixed-size local storage (e.g., as in FIG. 2A ), runtime-varying local storage (e.g., as in FIG. 2B ), explicitly designated private variables (e.g., as in FIG. 2C ), dynamically allocated local variables (e.g., as in FIG. 2D ), and so forth. 
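For reference, loops of the kind block 52 searches for might look like the following C fragments. These are illustrative reconstructions only - the actual FIG. 2A/2B listings are not reproduced in this text - and the names b, spa, and the helper parameters are assumptions consistent with the discussion above.

/* Fixed-size local storage (cf. FIG. 2A): b is private to each iteration
 * and its size is statically known to the compiler. */
void fixed_size_example(int n, int *out)
{
    for (int i = 0; i < n; i++) {
        int b[100];             /* loop-local array, no cross-iteration deps */
        for (int j = 0; j < 100; j++)
            b[j] = i + j;
        out[i] = b[i % 100];
    }
}

/* Runtime-varying local storage (cf. FIG. 2B): the size of spa is known
 * only at run time. */
void varying_size_example(int ibegin, int iend, const int *len, double *out)
{
    for (int i = ibegin; i < iend; i++) {
        double spa[len[i]];     /* C99 variable-length loop-local array */
        for (int j = 0; j < len[i]; j++)
            spa[j] = 0.5 * j;
        out[i] = spa[len[i] - 1];
    }
}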
Moreover, block 52 may be conducted after a registerization of the source code.
Block 54 automatically generates (e.g., in response to the detection of the one or more local variables) pipeline execution code for the local variable(s). As will be discussed in greater detail, block 54 may include generating executable instructions to initialize a pool of buffer storage for the local variable(s), define a pipeline depth, and define a plurality of tokenized slots in the initialized pool of buffer storage. In one example, the initialized pool of buffer storage is greater than the local storage amount corresponding to a single iteration of the loop. Moreover, each tokenized slot may correspond to a pipelined iteration of the loop. Illustrated block 56 incorporates the pipeline execution code into the output of the compiler. The method 50 therefore enhances performance by enabling the pipelining of loops containing private data, which improves throughput. Indeed, the overall cycles needed to execute a loop may be significantly less than the product of static loop cycles (e.g., the number of cycles needed to execute one iteration of the loop) and the loop iteration count.
FIG. 5 shows a compiler 60 that may implement one or more aspects of the method 50 ( FIG. 4 ), already discussed. Additionally, the compiler 60 may be readily substituted for the compiler 20 ( FIG. 1 ), already discussed. In general, the compiler 60 enables pipelined execution of loops containing local variables and may be explained with reference to a piece of sample source code and compiler-generated pseudo-code. For further reference, the end of this disclosure includes actual intermediate representation (IR) results using an LLVM compiler for a similar sample before and after the principal compiler transformations described herein.
Using dynamically allocated local storage in a loop as an example, with a constant array size of 100 chosen for simplicity, it may be assumed that the compiler 60 selects two workers for the loop and chooses a pipeline depth of three for each worker loop.
An OpenMP language extension may also be implemented to allow explicit control over worker creation and pipeline depth. Such an extension may be considered optional. The extension may, for example, take a form such as:
#pragma omp parallel for num_workers(2) schedule(static) pipeline(3)
The pipeline(depth) sub-clause specifies how many loop iterations are to be allowed to execute concurrently. The num_workers and static clauses specify how many workers to create and the way to distribute the loop iterations across the workers. Other parallel annotation languages and/or APIs (application programming interfaces) such as OpenACC, OpenCL, SYCL, etc., may also be used.
The solution for correctly handling private variables in pipelined loops may span many passes in the compiler 60. The transformations are in three places as shown in FIG. 5:
A worker creation stage 62 may be used when local storage arises from OpenMP clauses. In an embodiment, the worker creation stage 62 replaces OpenMP directives with expansions for multiple workers. The worker creation stage 62 may also represent local storage using dynamic allocation. Pseudocode for the worker creation stage 62 is provided below.
Loop:
b = alloca ...
// the body of this loop references the local variable b
...< inner j-loop>...
End-loop:
A local storage expansion stage 64 handles a relatively large portion of the transformations described herein. In one example, the local storage expansion stage 64 handles allocation and referencing of private variables that remain. 
The pass of the illustrated stage 64 is conducted relatively late to allow other compiler optimizations to registerize local variables as far as possible. Accordingly, variables that could not otherwise be registerized are dealt with in the stage 64. If a loop has a set S of private variables, then the stage 64 creates an array of type S whose dimension is the pipeline depth, i.e., the dynamic count of iterations in flight.
A dataflow operation conversion stage 66 may handle the management of the individual slots in the private variable array created for each loop.
Worker Creation
The worker creation stage 62 may create multiple workers as directed by OpenMP directives. For non-OpenMP loops, the worker creation stage 62 may automatically decide the number of workers to generate. Similarly, OpenMP directives may specify the pipeline depth, or the compiler 60 may select the degree of pipelining to generate. For the purposes of discussion, it is assumed that two workers are created and that a pipeline depth of three is selected.
A pair of LLVM IR intrinsics may be introduced to support loop-local storage:
r = pipeline.limited.entry(int id, int depth)
pipeline.limited.exit(r)
These intrinsics enclose the loops that need local storage. The arguments of the "entry" call specify the pipeline depth and mark the place where allocation for the enclosed loops occurs. The "exit" marks the deallocation point. This representation ensures that independent of the number of workers generated, a single allocation/deallocation is done for the loops.
Pseudo-code of the original single loop after the worker creation stage 62 is shown below. In the illustrated example, the original loop has been replicated to form two workers. Additionally, the local variable in the original loop becomes a separate local variable in each of the new loops. Pipelining has not been accounted for yet and is done later in the local storage expansion stage 64. The pseudo-code after processing by the worker creation stage 62 might be:
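The listing itself is not reproduced in this text. Based on the surrounding description - two replicated worker loops, a separate local variable in each, and a single pipeline.limited.entry/pipeline.limited.exit pair enclosing both workers - a plausible sketch in the same pseudo-code style is the following; the id argument is a placeholder.

r = pipeline.limited.entry(id, 3)
// Worker 0
Loop0:
b0 = alloca ...
// the body of this loop references the local variable b0
...< inner j-loop>...
End-loop0:
// Worker 1
Loop1:
b1 = alloca ...
// the body of this loop references the local variable b1
...< inner j-loop>...
End-loop1:
pipeline.limited.exit(r)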
Local Storage Expansion
In an embodiment, the local storage expansion stage 64 performs the transformation to account for pipelining. The pipeline depth of three is enforced using the concept of a token, and a pool of three token values is created for each worker. In one example, an iteration may begin when a token can be obtained from the pool. This operation is modeled by a call to "token.take", which completes only when a local storage slot becomes available. When an iteration is completed, the token is returned to the pool. This return is modeled by a call to "token.return". In one example, since only three distinct token values exist, only three iterations can execute concurrently in each worker.
Pseudo-code after the local storage expansion stage 64 might be:
// Local variable pool declaration
#define num_workers 2
#define depth 3
struct worker_pool {
    struct loop_pool {
        double B[100];
    } ls[depth];
} pool[num_workers];
// Allocate the pool
pool = CsaMemAlloc(sizeof(worker_pool));
// Worker 0
w0 = &pool[0];
Loop0:
// token.take will return one of these values:
// &w0.ls[0], &w0.ls[1], ..., &w0.ls[depth-1]
w0_pool = token.take(w0, sizeof(ls), depth);
B_loop_local = &w0_pool.B;
// all uses of B in the loop are replaced with B_loop_local
...< inner j-loop>
token.return(pool, w0_pool);
End-loop0:
// Worker 1
w1 = &pool[1];
Loop1:
// token.take will return one of these values:
// &w1.ls[0], &w1.ls[1], ..., &w1.ls[depth-1]
w1_pool = token.take(w1, sizeof(ls), depth);
B_loop_local = &w1_pool.B;
// all uses of B in the loop are replaced with B_loop_local
...< inner j-loop>
token.return(pool, w1_pool);
End-loop1:
// Deallocate the pool
CsaMemFree(pool);
Dataflow Operation Conversion
The final stage in implementing loop-local storage occurs during the dataflow operation conversion stage 66, which converts IR code into dataflow operations. The intrinsics token.take and token.return may be abstract representations of a mechanism that doles out a fixed number of tokens. In an embodiment, the physical implementation of this mechanism uses CSA LICs. The fundamental property of CSA LICs is to hold multiple values, to deliver values from one end of the LIC when read, and to write values at the other end of the LIC when written. This property may be used to permit only a fixed number of values to circulate through the loop body. In one example, the depth of the LIC is chosen to be the user-specified pipeline depth. Additionally, the values in the LIC may be offsets of the individual slots allocated for the private variables of a loop. When a new iteration of the loop begins, a value is read from the LIC and added to a base address to generate the slot address for the current iteration of the loop. When the iteration completes, the offset may be written back to the LIC. Because the LIC holds only "depth" number of values, only depth number of iterations may execute concurrently, with each using a separate local storage slot. Example dataflow operations that implement this scheme are shown below.
In a dataflow machine, instructions execute when their input dependencies are satisfied. In the following, an "inord" is an input ordinal (e.g., a signal that an input dependence has been satisfied) and an "outord" is generated by an instruction when the instruction completes execution to indicate that the result is now available. The gate64, add64 and mov0 instructions are explained first, and then their use in implementing token.take and token.return.
gate64 result, inord, value
The instruction does not execute until inord is available. Then, "value" becomes available as the result.
add64 result, input1, input2
The instruction does not execute until input1 and input2 are available. Then, "result" becomes available as the sum of "input1" and "input2".
mov0 result, inord, value
The instruction does not execute until "inord" is available. Then, "value" becomes available as the result.
The pseudocode below is an example output of the dataflow operation conversion stage 66 for a CSA implementation.
// Each loop iteration requires 400 bytes of local storage
// There are 2 workers created for the original loop
// A pipeline depth of 3 is implemented
// Total local storage = 400 * 3 * 2 bytes = 2400 bytes
// Worker 0 uses a pool that ranges from bytes 0 to 1199
// Worker 1 uses a pool that ranges from bytes 1200 to 2399
// Within each worker's pool, the 3 slots have offsets 0, 400, 800
// A LIC of depth 3 is initialized with offset values:
// offset_of(slot0), offset_of(slot1), offset_of(slot2)
.lic@8 .i64 %slot_offset
...
%slot_offset: ci64 = init64 0
%slot_offset: ci64 = init64 400
%slot_offset: ci64 = init64 800
...
// token_take implemented on CSA
// Dynamic memory allocation outside the loop generates the pool address
pool = ...
...
// Equivalent of CsaMemAlloc(2400)
...
// In the loop, when the token_take is ready to execute,
// the pool address is made available to the add64 instruction
gate64 pool_gated, token_take_inord, pool
// The address of the local storage slot assigned to this iteration
// is computed
add64 slot_addr, slot_offset, pool_gated
...
// token_return implemented on CSA
// In the loop, when the token_return is ready to execute,
// the slot_offset is written back at the end of the LIC
gate64 slot_offset, token_return_inord, slot_offset
// the completion of token_return is signaled with this mov0
mov0 token_return_outord, token_return_inord

In this way, the dataflow properties of CSA LICs are exploited to enable pipelining of parallel loops while guaranteeing that enough local storage is available for dynamic loop iterations. The compiler 60 may conduct this transformation automatically, and a prototype OpenMP language extension has been implemented to demonstrate the advantages of the solution.

Turning now to FIG. 6, a performance-enhanced computing system 151 is shown. The system 151 may generally be part of an electronic device/platform having computing functionality (e.g., personal digital assistant/PDA, notebook computer, tablet computer, convertible tablet, server), communications functionality (e.g., smart phone), imaging functionality (e.g., camera, camcorder), media playing functionality (e.g., smart television/TV), wearable functionality (e.g., watch, eyewear, headwear, footwear, jewelry), vehicular functionality (e.g., car, truck, motorcycle), robotic functionality (e.g., autonomous robot), Internet of Things (IoT) functionality, etc., or any combination thereof. In the illustrated example, the system 151 includes a host processor 153 (e.g., central processing unit/CPU) having an integrated memory controller (IMC) 155 that is coupled to a system memory 157. The illustrated system 151 also includes an input/output (IO) module 159 implemented together with the host processor 153 and a graphics processor 161 (e.g., graphics processing unit/GPU) on a semiconductor die 163 as a system on chip (SoC). The illustrated IO module 159 communicates with, for example, a display 165 (e.g., touch screen, liquid crystal display/LCD, light emitting diode/LED display), a network controller 167 (e.g., wired and/or wireless), and mass storage 169 (e.g., hard disk drive/HDD, optical disk, solid state drive/SSD, flash memory). In an embodiment, the host processor 153, the graphics processor 161 and/or the IO module 159 execute instructions 171 retrieved from the system memory 157 and/or the mass storage 169 to perform one or more aspects of the method 50 (FIG. 4), already discussed. Thus, execution of the illustrated instructions 171 may cause the computing system 151 to detect one or more local variables in source code, wherein the one or more local variables lack dependencies across iterations of a loop in the source code, automatically generate pipeline execution code for the one or more local variables, and incorporate the pipeline execution code into an output of a compiler. In an embodiment, the pipeline execution code includes an initialization of a pool of buffer storage for the one or more local variables. In such a case, the initialized pool of buffer storage may be greater than (e.g., several multiples of) a local storage amount corresponding to a single iteration of the loop.
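The token mechanism above can also be pictured in ordinary software terms. The following is a minimal, hypothetical C sketch, not taken from the embodiment: it models the LIC as a ring buffer pre-loaded with slot offsets, and the names token_take_sw, token_return_sw, and SLOT_BYTES are invented for illustration. Unlike the hardware, this sequential model returns each token before the next iteration begins, so it demonstrates only the bookkeeping, not the concurrency.

#include <assert.h>
#include <stddef.h>
#include <stdio.h>

#define DEPTH 3        /* pipeline depth: at most 3 iterations in flight */
#define SLOT_BYTES 400 /* per-iteration local storage, as in the example */

typedef struct {
    size_t ring[DEPTH]; /* circulating slot offsets, like LIC values */
    int head, tail, count;
} token_pool;

static void pool_init(token_pool *p) {
    for (int i = 0; i < DEPTH; ++i) p->ring[i] = (size_t)i * SLOT_BYTES;
    p->head = 0; p->tail = 0; p->count = DEPTH;
}

/* token.take analogue: delivers a slot offset from one end of the ring. */
static size_t token_take_sw(token_pool *p) {
    assert(p->count > 0 && "no free slot: pipeline already at full depth");
    size_t off = p->ring[p->head];
    p->head = (p->head + 1) % DEPTH;
    p->count--;
    return off;
}

/* token.return analogue: writes the offset back at the other end. */
static void token_return_sw(token_pool *p, size_t off) {
    p->ring[p->tail] = off;
    p->tail = (p->tail + 1) % DEPTH;
    p->count++;
}

int main(void) {
    char pool_storage[DEPTH * SLOT_BYTES]; /* one worker's pool */
    token_pool tokens;
    pool_init(&tokens);
    for (int iter = 0; iter < 6; ++iter) {
        size_t off = token_take_sw(&tokens);  /* iteration begins */
        char *slot = &pool_storage[off];      /* slot_addr = pool + offset */
        slot[0] = (char)iter;                 /* stand-in for the loop body */
        printf("iteration %d uses slot offset %zu\n", iter, off);
        token_return_sw(&tokens, off);        /* iteration completes */
    }
    return 0;
}

In the CSA implementation described above, the gate64/add64 instructions perform this same take-compute-return cycle entirely in dataflow, with no explicit counters.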
Moreover, the pipeline execution code may further include a definition of a plurality of tokenized slots in the initialized pool of buffer storage, where each tokenized slot corresponds to a pipelined iteration of the loop. In an embodiment, the pipeline execution code further includes a pipeline depth definition. In one example, the local variable(s) are detected after a registerization of the source code and the source code is associated with a communication channel in a dataflow graph. Additionally, the automatic generation of the pipeline execution code may be conducted in response to the detection of the local variable(s). The illustrated system 151 is therefore performance-enhanced at least to the extent that the pipelining of loops containing private data improves throughput. Indeed, the overall cycles needed to execute a loop may be significantly less than the product of static loop cycles and the loop iteration count.

FIG. 7 shows a semiconductor package apparatus 173. The illustrated apparatus 173 includes one or more substrates 175 (e.g., silicon, sapphire, gallium arsenide) and logic 177 (e.g., transistor array and other integrated circuit/IC components) coupled to the substrate(s) 175. The logic 177 may be implemented at least partly in configurable logic or fixed-functionality logic hardware. In one example, the logic 177 implements one or more aspects of the method 50 (FIG. 4), already discussed. Thus, the logic 177 may detect one or more local variables in source code, wherein the local variable(s) lack dependencies across iterations of a loop in the source code, automatically generate pipeline execution code for the local variable(s), and incorporate the pipeline execution code into an output of a compiler. The illustrated apparatus 173 is therefore performance-enhanced at least to the extent that the pipelining of loops containing private data improves throughput. Indeed, the overall cycles needed to execute a loop may be significantly less than the product of static loop cycles and the loop iteration count. In one example, the logic 177 includes transistor channel regions that are positioned (e.g., embedded) within the substrate(s) 175. Thus, the interface between the logic 177 and the substrate(s) 175 may not be an abrupt junction. The logic 177 may also be considered to include an epitaxial layer that is grown on an initial wafer of the substrate(s) 175.

FIG. 8 illustrates a processor core 200 according to one embodiment. The processor core 200 may be the core for any type of processor, such as a microprocessor, an embedded processor, a digital signal processor (DSP), a network processor, or other device to execute code. Although only one processor core 200 is illustrated in FIG. 8, a processing element may alternatively include more than one of the processor core 200 illustrated in FIG. 8. The processor core 200 may be a single-threaded core or, for at least one embodiment, the processor core 200 may be multithreaded in that it may include more than one hardware thread context (or "logical processor") per core. FIG. 8 also illustrates a memory 270 coupled to the processor core 200. The memory 270 may be any of a wide variety of memories (including various layers of memory hierarchy) as are known or otherwise available to those of skill in the art. The memory 270 may include one or more code 213 instruction(s) to be executed by the processor core 200, wherein the code 213 may implement one or more aspects of the method 50 (FIG. 4), already discussed.
The processor core 200 follows a program sequence of instructions indicated by the code 213. Each instruction may enter a front end portion 210 and be processed by one or more decoders 220. The decoder 220 may generate as its output a micro operation such as a fixed-width micro operation in a predefined format, or may generate other instructions, microinstructions, or control signals which reflect the original code instruction. The illustrated front end portion 210 also includes register renaming logic 225 and scheduling logic 230, which generally allocate resources and queue the operations corresponding to the code instructions for execution. The processor core 200 is shown including execution logic 250 having a set of execution units 255-1 through 255-N. Some embodiments may include a number of execution units dedicated to specific functions or sets of functions. Other embodiments may include only one execution unit or one execution unit that can perform a particular function. The illustrated execution logic 250 performs the operations specified by code instructions. After completion of execution of the operations specified by the code instructions, back end logic 260 retires the instructions of the code 213. In one embodiment, the processor core 200 allows out-of-order execution but requires in-order retirement of instructions. Retirement logic 265 may take a variety of forms as known to those of skill in the art (e.g., re-order buffers or the like). In this manner, the processor core 200 is transformed during execution of the code 213, at least in terms of the output generated by the decoder, the hardware registers and tables utilized by the register renaming logic 225, and any registers (not shown) modified by the execution logic 250. Although not illustrated in FIG. 8, a processing element may include other elements on chip with the processor core 200. For example, a processing element may include memory control logic along with the processor core 200. The processing element may include I/O control logic and/or may include I/O control logic integrated with memory control logic. The processing element may also include one or more caches.

Referring now to FIG. 9, shown is a block diagram of a computing system 1000 in accordance with an embodiment. Shown in FIG. 9 is a multiprocessor system 1000 that includes a first processing element 1070 and a second processing element 1080. While two processing elements 1070 and 1080 are shown, it is to be understood that an embodiment of the system 1000 may also include only one such processing element. The system 1000 is illustrated as a point-to-point interconnect system, wherein the first processing element 1070 and the second processing element 1080 are coupled via a point-to-point interconnect 1050. It should be understood that any or all of the interconnects illustrated in FIG. 9 may be implemented as a multi-drop bus rather than a point-to-point interconnect. As shown in FIG. 9, each of processing elements 1070 and 1080 may be multicore processors, including first and second processor cores (i.e., processor cores 1074a and 1074b and processor cores 1084a and 1084b). Such cores 1074a, 1074b, 1084a, 1084b may be configured to execute instruction code in a manner similar to that discussed above in connection with FIG. 8. Each processing element 1070, 1080 may include at least one shared cache 1896a, 1896b.
The shared cache 1896a, 1896b may store data (e.g., instructions) that are utilized by one or more components of the processor, such as the cores 1074a, 1074b and 1084a, 1084b, respectively. For example, the shared cache 1896a, 1896b may locally cache data stored in a memory 1032, 1034 for faster access by components of the processor. In one or more embodiments, the shared cache 1896a, 1896b may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While shown with only two processing elements 1070, 1080, it is to be understood that the scope of the embodiments is not so limited. In other embodiments, one or more additional processing elements may be present in a given processor. Alternatively, one or more of processing elements 1070, 1080 may be an element other than a processor, such as an accelerator or a field programmable gate array. For example, additional processing element(s) may include additional processor(s) that are the same as the first processor 1070, additional processor(s) that are heterogeneous or asymmetric to the first processor 1070, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processing element. There can be a variety of differences between the processing elements 1070, 1080 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like. These differences may effectively manifest themselves as asymmetry and heterogeneity amongst the processing elements 1070, 1080. For at least one embodiment, the various processing elements 1070, 1080 may reside in the same die package. The first processing element 1070 may further include memory controller logic (MC) 1072 and point-to-point (P-P) interfaces 1076 and 1078. Similarly, the second processing element 1080 may include a MC 1082 and P-P interfaces 1086 and 1088. As shown in FIG. 9, MCs 1072 and 1082 couple the processors to respective memories, namely a memory 1032 and a memory 1034, which may be portions of main memory locally attached to the respective processors. While the MCs 1072 and 1082 are illustrated as integrated into the processing elements 1070, 1080, for alternative embodiments the MC logic may be discrete logic outside the processing elements 1070, 1080 rather than integrated therein. The first processing element 1070 and the second processing element 1080 may be coupled to an I/O subsystem 1090 via P-P interconnects 1076 and 1086, respectively. As shown in FIG. 9, the I/O subsystem 1090 includes P-P interfaces 1094 and 1098. Furthermore, I/O subsystem 1090 includes an interface 1092 to couple I/O subsystem 1090 with a high performance graphics engine 1038. In one embodiment, bus 1049 may be used to couple the graphics engine 1038 to the I/O subsystem 1090. Alternately, a point-to-point interconnect may couple these components. In turn, I/O subsystem 1090 may be coupled to a first bus 1016 via an interface 1096. In one embodiment, the first bus 1016 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the embodiments is not so limited. As shown in FIG.
9, various I/O devices 1014 (e.g., biometric scanners, speakers, cameras, sensors) may be coupled to the first bus 1016, along with a bus bridge 1018 which may couple the first bus 1016 to a second bus 1020. In one embodiment, the second bus 1020 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 1020 including, for example, a keyboard/mouse 1012, communication device(s) 1026, and a data storage unit 1019 such as a disk drive or other mass storage device which may include code 1030, in one embodiment. The illustrated code 1030 may implement one or more aspects of the method 50 (FIG. 4), already discussed. Further, an audio I/O 1024 may be coupled to the second bus 1020 and a battery 1010 may supply power to the computing system 1000. Note that other embodiments are contemplated. For example, instead of the point-to-point architecture of FIG. 9, a system may implement a multi-drop bus or another such communication topology. Also, the elements of FIG. 9 may alternatively be partitioned using more or fewer integrated chips than shown in FIG. 9.

Additional Notes and Examples:

Example 1 includes a performance-enhanced computing system comprising a network controller, a processor coupled to the network controller, and a memory coupled to the processor, the memory including a set of executable program instructions, which when executed by the processor, cause the processor to detect one or more local variables in source code, wherein the one or more local variables lack dependencies across iterations of a loop in the source code, automatically generate pipeline execution code for the one or more local variables, and incorporate the pipeline execution code into an output of the compiler.Example 2 includes the computing system of Example 1, wherein the pipeline execution code is to include an initialization of a pool of buffer storage for the one or more local variables.Example 3 includes the computing system of Example 2, wherein the initialized pool of buffer storage is to be greater than a local storage amount corresponding to a single iteration of the loop.Example 4 includes the computing system of Example 2, wherein the pipeline execution code is to further include a definition of a plurality of tokenized slots in the initialized pool of buffer storage, and wherein each tokenized slot is to correspond to a pipelined iteration of the loop.Example 5 includes the computing system of Example 1, wherein the pipeline execution code is to include a pipeline depth definition.Example 6 includes the computing system of any one of Examples 1 to 5, wherein the one or more local variables are to be detected after a registerization of the source code, automatic generation of the pipeline execution code is to be in response to detection of the one or more local variables, and the source code is to be associated with a communication channel in a dataflow graph.Example 7 includes a semiconductor apparatus comprising one or more substrates, and logic coupled to the one or more substrates, wherein the logic is implemented at least partly in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates to detect one or more local variables in source code, wherein the one or more local variables lack dependencies across iterations of a loop in the source code, automatically generate pipeline execution code for the one or more local variables, and incorporate the pipeline execution code into an output of a compiler.Example 8
includes the semiconductor apparatus of Example 7, wherein the pipeline execution code is to include an initialization of a pool of buffer storage for the one or more local variables.Example 9 includes the semiconductor apparatus of Example 8, wherein the initialized pool of buffer storage is to be greater than a local storage amount corresponding to a single iteration of the loop.Example 10 includes the semiconductor apparatus of Example 8, wherein the pipeline execution code is to further include a definition of a plurality of tokenized slots in the initialized pool of buffer storage, and wherein each tokenized slot is to correspond to a pipelined iteration of the loop.Example 11 includes the semiconductor apparatus of Example 7, wherein the pipeline execution code is to include a pipeline depth definition.Example 12 includes the semiconductor apparatus of any one of Examples 7 to 11, wherein the one or more local variables are to be detected after a registerization of the source code, automatic generation of the pipeline execution code is to be in response to detection of the one or more local variables, and the source code is to be associated with a communication channel in a dataflow graph.Example 13 includes the semiconductor apparatus of any one of Examples 7 to 12, wherein the logic coupled to the one or more substrates includes transistor channel regions that are positioned within the one or more substrates.Example 14 includes at least one computer readable storage medium comprising a set of instructions, which when executed by a computing system, cause the computing system to detect one or more local variables in source code, wherein the one or more local variables lack dependencies across iterations of a loop in the source code, automatically generate pipeline execution code for the one or more local variables, and incorporate the pipeline execution code into an output of a compiler.Example 15 includes the at least one computer readable storage medium of Example 14, wherein the pipeline execution code is to include an initialization of a pool of buffer storage for the one or more local variables.Example 16 includes the at least one computer readable storage medium of Example 15, wherein the initialized pool of buffer storage is to be greater than a local storage amount corresponding to a single iteration of the loop.Example 17 includes the at least one computer readable storage medium of Example 15, wherein the pipeline execution code is to further include a definition of a plurality of tokenized slots in the initialized pool of buffer storage, and wherein each tokenized slot is to correspond to a pipelined iteration of the loop.Example 18 includes the at least one computer readable storage medium of Example 14, wherein the pipeline execution code is to include a pipeline depth definition.Example 19 includes the at least one computer readable storage medium of any one of Examples 14 to 18, wherein the one or more local variables are to be detected after a registerization of the source code, automatic generation of the pipeline execution code is to be in response to detection of the one or more local variables, and the source code is to be associated with a communication channel in a dataflow graph.Example 20 includes a method of operating a compiler, the method comprising detecting one or more local variables in source code, wherein the one or more local variables lack dependencies across iterations of a loop in the source code, automatically generating pipeline execution code 
for the one or more local variables, and incorporating the pipeline execution code into an output of the compiler.Example 21 includes the method of Example 20, wherein the pipeline execution code includes an initialization of a pool of buffer storage for the one or more local variables.Example 22 includes the method of Example 21, wherein the initialized pool of buffer storage is to be greater than a local storage amount corresponding to a single iteration of the loop.Example 23 includes the method of Example 21, wherein the pipeline execution code further includes a definition of a plurality of tokenized slots in the initialized pool of buffer storage, and wherein each tokenized slot is to correspond to a pipelined iteration of the loop.Example 24 includes the method of Example 20, wherein the pipeline execution code includes a pipeline depth definition.Example 25 includes the method of any one of Examples 20 to 24, wherein the one or more local variables are detected after a registerization of the source code, automatic generation of the pipeline execution code is in response to detection of the one or more local variables, and the source code is associated with a communication channel in a dataflow graph.Example 26 includes means for performing the method of any one of Examples 20 to 25.Thus, technology described herein may include an automated compiler transformation that can take as input a loop that has some form of local loop storage and dynamically pipeline the loop using one or more workers for a dataflow architecture such as CSA. The compiler may detect local storage remaining in loops after registerization and allocate enough memory to hold the private variables for a) each worker, and b) each concurrent execution of a worker. As each worker body commences execution, the worker body may be assigned a unique slot in the allocated private storage. When the worker completes execution of an iteration, the local storage slot associated with the worker may be automatically recycled for use in future iterations.Several applications/benchmarks such as, for example, the SPGemm (sparse matrix-matrix multiplication) and Apriori benchmarks, may benefit from the transformation technology described herein.Embodiments are applicable for use with all types of semiconductor integrated circuit ("IC") chips. Examples of these IC chips include but are not limited to processors, controllers, chipset components, programmable logic arrays (PLAs), memory chips, network chips, systems on chip (SoCs), SSD/NAND controller ASICs, and the like. In addition, in some of the drawings, signal conductor lines are represented with lines. Some may be different, to indicate more constituent signal paths, have a number label, to indicate a number of constituent signal paths, and/or have arrows at one or more ends, to indicate primary information flow direction. This, however, should not be construed in a limiting manner. Rather, such added detail may be used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit. Any represented signal lines, whether or not having additional information, may actually comprise one or more signals that may travel in multiple directions and may be implemented with any suitable type of signal scheme, e.g., digital or analog lines implemented with differential pairs, optical fiber lines, and/or single-ended lines.Example sizes/models/values/ranges may have been given, although embodiments are not limited to the same. 
As manufacturing techniques (e.g., photolithography) mature over time, it is expected that devices of smaller size could be manufactured. In addition, well known power/ground connections to IC chips and other components may or may not be shown within the figures, for simplicity of illustration and discussion, and so as not to obscure certain aspects of the embodiments. Further, arrangements may be shown in block diagram form in order to avoid obscuring embodiments, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the computing system within which the embodiment is to be implemented, i.e., such specifics should be well within purview of one skilled in the art. Where specific details (e.g., circuits) are set forth in order to describe example embodiments, it should be apparent to one skilled in the art that embodiments can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative instead of limiting.The term "coupled" may be used herein to refer to any type of relationship, direct or indirect, between the components in question, and may apply to electrical, mechanical, fluid, optical, electromagnetic, electromechanical or other connections. In addition, the terms "first", "second", etc. may be used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated.As used in this application and in the claims, a list of items joined by the term "one or more of" may mean any combination of the listed terms. For example, the phrases "one or more of A, B or C" may mean A; B; C; A and B; A and C; B and C; or A, B and C.Those skilled in the art will appreciate from the foregoing description that the broad techniques of the embodiments can be implemented in a variety of forms. Therefore, while the embodiments have been described in connection with particular examples thereof, the true scope of the embodiments should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims. |
Aspects disclosed herein relate to combining instructions to load data from or store data in memory while processing instructions in processors. An exemplary method includes detecting a pattern of pipelined instructions to access memory using a first portion of available bus width and, in response to detecting the pattern, combining the pipelined instructions into a single instruction to access the memory using a second portion of the available bus width that is wider than the first portion. Devices including processors using disclosed aspects may execute currently available software in a more efficient manner without the software being modified. |
CLAIMS1. A method, comprising:detecting a pattern of pipelined instructions to access memory using a first portion of available bus width; andin response to detecting the pattern, combining the pipelined instructions into a single instruction to access the memory using a second portion of the available bus width that is wider than the first portion.2. The method of claim 1, wherein detecting the pattern comprises examining a set of instructions in an instruction set window of a given width of instructions.3. The method of claim 1, wherein the pipelined instructions combined into the single instruction comprise consecutive instructions.4. The method of claim 1, wherein:the pipelined instructions combined into the single instruction comprise non-consecutive instructions; anddetecting the pattern comprises determining that other instructions between the non-consecutive instructions do not alter memory locations accessed by the non-consecutive instructions.5. The method of claim 1, wherein detecting the pattern comprises comparing instructions in a pipeline to patterns of instructions stored in a table.6. The method of claim 5, further comprising updating the table based on instructions recently detected in the pipeline.7. The method of claim 1, wherein:detecting the pattern comprises detecting pipelined instructions to store values of a first bit-width in consecutive memory locations; andthe single instruction comprises an instruction to store a single value of a second bit-width in a single memory location.8. The method of claim 1, wherein:detecting the pattern comprises detecting pipelined instructions to read values of a first bit-width from consecutive memory locations; andthe single instruction comprises an instruction to read a single value of a second bit-width from a single memory location.9. A processor, comprising:a pattern detection circuit configured to:detect a pattern of pipelined instructions to access memory using a first portion of available bus width; andin response to detecting the pattern, combine the pipelined instructions into a single instruction to access the memory using a second portion of the available bus width that is wider than the first portion.10. The processor of claim 9, wherein the pattern detection circuit is configured to detect the pattern by examining a set of instructions in an instruction set window of a given width of instructions.11. The processor of claim 9, wherein the pattern detection circuit is configured to combine consecutive instructions into the single instruction.12. The processor of claim 9, wherein the pattern detection circuit is configured to:combine non-consecutive instructions into the single instruction; anddetermine that other instructions between the non-consecutive instructions do not alter memory locations accessed by the non-consecutive instructions.13. The processor of claim 9, wherein the pattern detection circuit is configured to detect the pattern by comparing instructions in a pipeline to patterns of instructions stored in a table.14. The processor of claim 9, wherein:the pattern detection circuit is configured to detect the pattern by detecting instructions to store values of a first bit-width in consecutive memory locations; andthe single instruction comprises an instruction to store a single value of a second bit-width in a single memory location.15.
The processor of claim 9, wherein:the pattern detection circuit is configured to detect the pattern by detecting instructions to read values of a first bit-width from consecutive memory locations; andthe single instruction comprises an instruction to read a single value of a second bit-width from a single memory location.16. An apparatus, comprising:means for detecting a pattern of pipelined instructions to access memory using a first portion of available bus width; andmeans for combining, in response to detecting the pattern, the instructions into a single instruction to access the memory using a second portion of the available bus width that is wider than the first portion.17. The apparatus of claim 16, wherein the means for detecting the pattern comprises means for examining a set of instructions in an instruction set window of a given width of instructions.18. The apparatus of claim 16, wherein the means for combining comprises means for combining consecutive instructions.19. The apparatus of claim 16, wherein:the means for combining comprises means for combining non-consecutive instructions; andthe means for detecting the pattern comprises means for determining that other instructions between the non-consecutive instructions do not alter memory locations accessed by the non-consecutive instructions.20. The apparatus of claim 16, wherein the means for detecting the pattern comprises means for comparing instructions in a pipeline to patterns of instructions stored in a table. |
COMBINING LOADS OR STORES IN COMPUTER PROCESSING

CLAIM FOR PRIORITY UNDER 35 U.S.C. § 119[0001] This application claims priority to U.S. Application No. 15/055,160, filed February 26, 2016, which is assigned to the assignee of the present application and is expressly incorporated by reference herein in its entirety.

BACKGROUND[0002] Aspects disclosed herein relate to the field of computer processors. More specifically, aspects disclosed herein relate to combining instructions to load data from or store data in memory while processing instructions in processors.[0003] In processing, a pipeline is a set of data processing elements connected in series, where the output of one element is the input of the next one. Instructions are fetched and placed into the pipeline sequentially. In this way, multiple instructions can be present in the pipeline as an instruction stream and can all be processed simultaneously, although each instruction will be in a different stage of processing in the stages of the pipeline.[0004] A processor may support a variety of load and store instruction types. Not all of these instructions may take full advantage of a bandwidth of an interface between the processor and an associated cache or memory. For example, a particular processor architecture may have load (e.g., fetch) instructions and store instructions that target a single 32-bit word, while recent processors may supply a data-path to the cache of 64 or 128 bits. That is, compiled machine code of a program may include instructions that load a single 32-bit word of data from a cache or other memory, while an interface (e.g., a bus) between the processor and the cache may be 128 bits wide, and thus 96 bits of the width are unused during the execution of each of those load instructions. Similarly, the compiled machine code may include instructions that store a single 32-bit word of data in a cache or other memory, and thus 96 bits of the width are unused during the execution of each of those store instructions.

SUMMARY[0005] Aspects disclosed herein relate to combining instructions to load data from or store data in memory while processing instructions in processors.[0006] In one aspect, a method is provided. The method generally includes detecting a pattern of pipelined instructions to access memory using a first portion of available bus width and, in response to detecting the pattern, combining the instructions into a single instruction to access the memory using a second portion of the available bus width that is wider than the first portion.[0007] In another aspect, a processor is provided. The processor generally includes a pattern detection circuit configured to detect a pattern of pipelined instructions to access memory using a first portion of available bus width and, in response to detecting the pattern, combine the instructions into a single instruction to access the memory using a second portion of the available bus width that is wider than the first portion.[0008] In still another aspect, an apparatus is provided. The apparatus generally includes means for detecting a pattern of pipelined instructions to access memory using a first portion of available bus width and means for combining, in response to detecting the pattern, the instructions into a single instruction to access the memory using a second portion of the available bus width that is wider than the first portion.[0009] The claimed aspects may provide one or more advantages over previously known solutions.
According to some aspects, load and store operations may be performed in a manner that uses available memory bandwidth more efficiently, which may improve performance and reduce power consumption.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS[0010] So that the manner in which the above recited aspects are attained and can be understood in detail, a more particular description of aspects of the disclosure, briefly summarized above, may be had by reference to the appended drawings.[0011] It is to be noted, however, that the appended drawings illustrate only aspects of this disclosure and are therefore not to be considered limiting of its scope, for the disclosure may admit to other aspects.[0012] Figure 1 is a functional block diagram of an exemplary processor configured to recognize sequences of instructions that may be replaced by a more bandwidth-efficient instruction, according to aspects of the present disclosure.[0013] Figure 2 is a flow chart illustrating a method for computing, according to aspects of the present disclosure.[0014] Figure 3 illustrates an exemplary processor pipeline, according to aspects of the present disclosure.[0015] Figure 4 illustrates an exemplary storage instruction table (SIT), according to aspects of the present disclosure.[0016] Figure 5 is a block diagram illustrating a computing device, according to aspects of the present disclosure.

DETAILED DESCRIPTION[0017] Aspects disclosed herein provide a method for recognizing sequences (e.g., patterns or idioms) of smaller load instructions (loads) or store instructions (stores) targeting adjacent memory in a program (e.g., using less than the full bandwidth of a data-path) and combining these smaller loads or stores into a larger (e.g., using more of the bandwidth of the data-path) load or store. The data-path may comprise a bus, and the bandwidth of the data-path may be the number of bits that the bus may convey in a single operation. For example (illustrated with assembly code), the sequence of loads:

LDR R0, [SP, #8] ; load R0 from memory at SP+8
LDR R1, [SP, #12] ; load R1 from memory at SP+12

may be recognized as a pattern that could be replaced with a more bandwidth-efficient command or sequence of commands, because each of the loads uses only 32 bits of bandwidth (e.g., a bit-width of 32 bits) while accessing memory twice. In the example, the sequence may be replaced with the equivalent (but more bandwidth-efficient) command:

LDRD R0, R1, [SP, #8] ; load R0 and R1 from memory at SP+8

that uses 64 bits of bandwidth (e.g., a bit-width of 64 bits) while accessing memory once. Replacing multiple "narrow" instructions with a "wide" instruction may allow higher throughput to caches or memory and reduce the overall instruction count executed by the processor.[0018] According to aspects of the present disclosure, the recognition of sequences as replaceable and the replacement of the sequences may be performed in a processing system including at least one processor, such that each software sequence is transformed on the fly in the processing system each time the software sequence is encountered. Thus, implementing the provided methods does not involve any change to existing software. That is, software that can run on a device not including a processing system operating according to aspects of the present disclosure may be run on a device including such a processing system with no changes to the software.
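To make the adjacency test behind the example above concrete, the following is a hypothetical C sketch invented for illustration; the mem_access structure, the register encoding, and the function names are assumptions, not part of the disclosure. It models two 32-bit load descriptors with a shared base register being fused into one 64-bit descriptor.

#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int base_reg; /* base address register, encoded as a number (SP = 13) */
    int offset;   /* immediate byte offset from the base register        */
    int width;    /* access width in bits: 32 or 64                      */
} mem_access;

/* True if the two 32-bit accesses touch adjacent words off one base. */
static bool can_fuse(const mem_access *a, const mem_access *b) {
    return a->base_reg == b->base_reg &&
           a->width == 32 && b->width == 32 &&
           b->offset == a->offset + 4;
}

/* Build the wide descriptor that replaces the pair. */
static mem_access fuse(const mem_access *a) {
    mem_access wide = *a;
    wide.width = 64; /* one 64-bit access replaces two 32-bit ones */
    return wide;
}

int main(void) {
    /* Models LDR R0,[SP,#8] and LDR R1,[SP,#12] from the example above */
    mem_access first  = { 13, 8, 32 };
    mem_access second = { 13, 12, 32 };
    if (can_fuse(&first, &second)) {
        mem_access wide = fuse(&first);
        printf("fused: %d-bit load at [r%d, #%d]\n",
               wide.width, wide.base_reg, wide.offset);
    }
    return 0;
}

A hardware implementation would of course operate on decoded instruction fields rather than C structures, but the adjacency check (same base register, offsets differing by the access size) is the same.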
The device including the processing system operating according to aspects of the present disclosure may perform load and store operations in a more bandwidth-efficient manner (than a device not operating according to aspects of the present disclosure) by replacing some load and store commands while executing the software, as described above and in more detail below.[0019] Figure 1 is a functional block diagram of an example processor (e.g., a CPU) 101 configured to recognize sequences of instructions that may be replaced by a more bandwidth-efficient instruction, according to aspects of the present disclosure described in more detail below. Generally, the processor 101 may be used in any type of computing device including, without limitation, a desktop computer, a laptop computer, a tablet computer, and a smart phone. Generally, the processor 101 may include numerous variations, and the processor 101 shown in Figure 1 is for illustrative purposes and should not be considered limiting of the disclosure. For example, the processor 101 may be a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), or another type of processor. In one aspect, the processor 101 is disposed on an integrated circuit including an instruction execution pipeline 112 and a storage instruction table (SIT) 111.[0020] Generally, the processor 101 executes instructions in an instruction execution pipeline 112 according to control logic 114. The pipeline 112 may be a superscalar design, with multiple parallel pipelines, including, without limitation, parallel pipelines 112a and 112b. The pipelines 112a, 112b include various non-architected registers (or latches) 116, organized in pipe stages, and one or more arithmetic logic units (ALU) 118. A physical register file 120 includes a plurality of architected registers 121.[0021] The pipelines 112a, 112b may fetch instructions from an instruction cache (I-Cache) 122, while an instruction-side translation lookaside buffer (ITLB) 124 may manage memory addressing and permissions. Data may be accessed from a data cache (D-cache) 126, while a main translation lookaside buffer (TLB) 128 may manage memory addressing and permissions. In some aspects, the ITLB 124 may be a copy of a part of the TLB 128. In other aspects, the ITLB 124 and the TLB 128 may be integrated. Similarly, in some aspects, the I-cache 122 and D-cache 126 may be integrated, or unified. Misses in the I-cache 122 and/or the D-cache 126 may cause an access to higher level caches (such as L2 or L3 cache) or main (off-chip) memory 132, which is under the control of a memory interface 130. The processor 101 may include an input/output interface (I/O IF) 134 that may control access to various peripheral devices 136.[0022] The processor 101 also includes a pattern detection circuit (PDC) 140. As used herein, a pattern detection circuit comprises any type of circuitry (e.g., logic gates) configured to recognize sequences of reads from or stores to caches and memory and replace recognized sequences with commands that are more bandwidth-efficient, as described in more detail herein.
Associated with the pipeline or pipelines 112 is a storage instruction table (SIT) 111 that may be used to maintain attributes of read commands and write commands that pass through the pipelines 112, as will be described in more detail below.[0023] Figure 2 is a flow chart illustrating a method 200 for computing that may be performed by a processor, according to aspects of the present disclosure. In at least one aspect, the PDC is used in performing the steps of the method 200. The method 200 depicts an aspect where the processor detects instructions that access adjacent memory and replaces the instructions with a more bandwidth-efficient instruction, as mentioned above and described in more detail below.[0024] At block 210, the method begins by the processor (e.g., the PDC) detecting a pattern of pipelined instructions (e.g., commands) to access memory using a first portion of available bus width. As described in more detail below, the processor may detect patterns wherein the instructions are consecutive, non-consecutive, or interleaved with other detected patterns. Also as described in more detail below, the processor may detect a pattern wherein instructions use a same base register with differing offsets, instructions use addresses relative to a program counter that is increased as instructions execute, or instructions use addresses relative to a stack pointer.[0025] At block 220, the method continues by the processor, in response to detecting the pattern, combining the pipelined instructions into a single instruction to access the memory using a second portion of the available bus width that is wider than the first portion. The processor 101 may replace the pattern of instructions with the single instruction before passing the single instruction and possibly other (e.g., unchanged) instructions from a Decode stage to an Execute stage in a pipeline.[0026] The various operations described above may be performed by any suitable means capable of performing the corresponding functions. The means may include circuitry and/or module(s) of a processor or processing system. For example, means for detecting (a pattern of pipelined instructions to access memory using a first portion of available bus width) may be implemented in the pattern detection circuit 140 of the processor 101 shown in FIG. 1. Means for combining the pipelined instructions (in response to detecting the pattern, into a single instruction to access the memory using a second portion of the available bus width that is wider than the first portion) may be implemented in any suitable circuit of the processor 101 shown in FIG. 1, including the pattern detection circuit 140, circuits within the pipeline(s) 112, and/or the control logic 114.[0027] According to aspects of the present disclosure, a processor (e.g., processor 101 in Figure 1) may recognize consecutive (e.g., back-to-back) loads (e.g., instructions that load data from a location) or stores (e.g., instructions that store data to a location) as a sequence of loads or stores targeting memory at contiguous offsets.
Examples of these are provided below:

STR R4, [R0] ; 32b R4 to memory at R0+0
STR R5, [R0, #4] ; 32b R5 to memory at R0+4

STRB R1, [SP, #-5] ; 8b R1 to memory at SP-5
STRB R2, [SP, #-4] ; 8b R2 to memory at SP-4

VLDR D2, [R8, #8] ; 64b D2 from memory at R8+8
VLDR D7, [R8, #16] ; 64b D7 from memory at R8+16

In the first pair of commands, a 32-bit value from register R4 is written to a memory location located at a value stored in the R0 register, and then a 32-bit value from register R5 is written to a memory location four addresses (32 bits) higher than the value stored in the R0 register. In the second pair of commands, an eight-bit value from register R1 is written to a memory location located five addresses lower than a value stored in the stack pointer (SP), and then an eight-bit value from register R2 is written to a memory location located four addresses lower than the value stored in the SP, i.e., one address or eight bits higher than the location to which R1 was written. In the third pair of commands, a 64-bit value is read from a memory location located eight addresses higher than a value stored in register R8, and then a 64-bit value is read from a memory location located sixteen addresses higher than the value stored in register R8, i.e., eight addresses or 64 bits higher than the location read from in the first command. A processor operating according to aspects of the present disclosure may recognize consecutive commands accessing memory at contiguous offsets, such as those above, as a pattern that may be replaced by a command that is more bandwidth-efficient. The processor may then replace the consecutive commands with the more bandwidth-efficient command as described above with reference to Figure 2.[0028] According to aspects of the present disclosure, a processor may recognize consecutive (e.g., back-to-back) loads or stores with base-updates as a pattern of commands that access contiguous memory that may be replaced by a command that is more bandwidth-efficient. As used herein, the term base-update generally refers to an instruction that alters the value of an address-containing register used in a sequence (e.g., a pattern) of commands. A processor may recognize that a sequence of commands targets adjacent memory when base-updates in the commands are considered. For example, in the below pair of instructions, data is read from adjacent memory locations due to the base-update in the first command:

LDR R7, [R0], #4 ; 32b from memory at R0; R0 = R0 + 4
LDR R3, [R0] ; 32b from memory at R0

A processor operating according to aspects of the present disclosure may recognize consecutive commands with base-updates, such as those above, as a pattern that may be replaced by a command that is more bandwidth-efficient, and then replace the commands as described above with reference to Figure 2.[0029] According to aspects of the present disclosure, a processor may recognize consecutive (e.g., back-to-back) program-counter-relative (PC-relative) loads or stores as a pattern which may be replaced by a command that is more bandwidth-efficient. A processor may recognize that a sequence of commands targets adjacent memory when changes to the program counter (PC) are considered.
For example, in the below pair of instructions, data is read from adjacent memory locations due to the PC changing after the first command is executed.

LDR R1, [PC, #20] ; PC=X, load from memory at X+20+8
LDR R2, [PC, #20] ; load from memory at X+4+20+8

[0030] In the above pair of instructions, a 32-bit value is read from a memory location located 28 locations (224 bits) higher than a first value (X) of the PC, the PC is advanced four locations, and then another 32-bit value is read from the memory location located 32 locations (256 bits) higher than the first value (X) of the PC. Thus, the above pair of commands may be replaced as shown below:

{LDR R1, [PC, #20] ; PC=X, load from memory at X+20+8}=>
{LDR R2, [PC, #20] ; load from memory at X+4+20+8}=>
LDRD R1, R2, [PC, #20]

[0031] According to aspects of the present disclosure, a processor may recognize a non-consecutive (e.g., non-back-to-back) sequence of loads or stores as a sequence of loads or stores targeting memory at adjacent locations. If there are no intervening instructions that will alter addresses referred to by loads or stores in a program, then it may be possible to pair those loads or stores and replace the paired loads or stores with a more bandwidth-efficient command. For example, in the below set of instructions, data is read from adjacent memory locations in non-consecutive LDR (load) commands, and the memory locations being read are not altered by any of the intervening commands.

LDR R1, [R0] ; 32b from memory at R0
MOV R2, #42 ; doesn't alter address register (R0)
ADD R3, R2 ; doesn't alter address register (R0)
LDR R4, [R0, #4] ; 32b from memory at R0+4
[RO, #4] 32b R5 to memory at RO+4STRB Rl, [SP, #-5] 8b Rl to memory at SP-5MOV R2, #42 doesn't alter memoiy at SP-5 or SP-4STRB R2, [SP, #-4] 8b R2 to memory at SP-4VLDR 02. [R8, #8] 64b D2 from memory at R8+8ADD R1, R2 doesn't alter memory at R8+8 or R8VLDR D7, [R8, #16; 64b D2 from memory at R8+16In each of the above sets of instructions, memory at adjacent locations is targeted by commands performing similar operations with intervening commands that do not alter the memory locations. A processor operating according to aspects of the present disclosure may recognize non-consecutive commands, such as those above, as a pattern that may be replaced by a command that is more bandwidth-efficient, and then replace the commands as described above with reference to Figure 2 while leaving the intervening commands unchanged.[0034] According to aspects of the present disclosure, a processor may recognize non-consecutive (e.g., non-back-to-back) loads or stores with base-updates as a pattern which may be replaced by a command that is more bandwidth-efficient. For example, in the below set of instructions, data is read from adjacent memoiy locations due to the base-update in the first command:LDR R7, [RO], #4 ; 32b from memory at RO; RO = RO + 4ADD R1, R2 ; doesn't alter memory at RO or RO + 4LDR R3, [RO] ; 32b from memory at ROThus, the first and third commands may be replaced by a single load command, as shown below:{LDR R7, [RO], #4 ; 32b from memory at RO; RO ===:RO +■] ·ADD R1, R2 ; doesn't alter memoiy at RO or RO + 4{LDR R3, [RO] ; 32b from memoiy at R0}=>LORD R7, R3, [RO],A processor operating according to aspects of the present disclosure may recognize non- consecutive commands with base-updates as a pattern that may be replaced by a more bandwidth-efficient command, and then replace the non-consecutive commands with the more bandwidth-efficient command as described above with reference to Figure 2.[0035] According to aspects of the present disclosure, a processor may recognize non-consecutive (e.g., non-back-to-back) PC-relative loads or stores as a pattern which may be replaced by a command that is more bandwidth-efficient. A processor may recognize that a sequence of commands targets adjacent memory when changes to the program counter (PC) are considered and intervening commands do not alter the targeted memory. For example, in the below set of instructions, data is read from adjacent memory locations due to the PC changing after the first command is executed.LDR Rl, [PC, #20] PC=X, load from memory at X+20+8MOV R2, #42 doesn't alter memory at X+28 or X+32LDR R3, [PC, #16] load from memory at X+8+16+8Thus, the first and third commands may be replaced by a single load command, as shown below:{LDR Rl, [PC, #20] PC=X, load from memory at X+20+8 }=> MOV R2, #42 doesn't alter memory at X+28 or X+32{LDR R3, [PC, #16] load from memory at X+8+16+8}=>LORD Rl, R3, [PC. 
#20]A processor operating according to aspects of the present disclosure may recognize non- consecutive PC-relative commands as a pattern that may be replaced by a more bandwidth-efficient command, and then replace the non-consecutive commands with the more bandwidth-efficient command as described above with reference to Figure 2.[0036] According to aspects of the present disclosure, a processor operating according to the present disclosure may recognize any of the previously described patterns (e.g., sequences) interleaved with another of the previously described patterns and replace the recognized patterns with equivalent commands that are more bandwidth- efficient. That is, in a group of commands, two or more pairs of loads or stores may be eligible to be replaced by the processor with more bandwidth-efficient commands. For example, in the below set of instructions, data is read from adjacent memory locations by a first pair of instmctions and from a different set of adjacent memory locations by a second pair of instructions. LDR Rl, [R0], #4 ; 32b from memory at RO; RO = RO + 4LDR R7, [SP] : 32b from memory at SPLDR R4, [RO] ; 32b from memory at RO (pair with 1stLDR)LDR R5, [SP, #4] : 32b from memory at SP+4 (pair with 2ndLDR)A processor operating according to aspects of the present disclosure may recognize interleaved patterns of commands that may be replaced with more bandwidth-efficient commands. Thus, a processor operating according to aspects of the present disclosure that encounters the above exemplary pattern may replace the first and third instructions with an instruction that is more bandwidth-efficient and replace the second and fourth instructions with an instruction that is more bandwidth-efficient.[0037] According to aspects of the present disclosure, any of the previously described patterns may be detected by a processor examining a set of instructions in an instruction set window of a given width of instructions. That is, a processor operating according to aspects of the present disclosure may examine a number of instructions in an instruction set window to detect patterns of instructions that access adjacent memory locations and may be replaced with instructions that are more bandwidth-efficient.[0038] According to aspects of the present disclosure, any of the previously described patterns of instructions may be detected by a processor and replaced with more bandwidt -efficient (e.g., "wider") instructions during program, execution. In some cases, the pattern recognition and command (e.g., instruction) replacement may be performed in a pipeline of a processor, such as pipelines 112 shown in Figure 1.[0039] Figure 3 illustrates an exemplary basic 3 -stage processor pipeline 300 that may be included in a processor operating according to aspects of the present disclosure. The three stages of the exemplary processor pipeline are a Fetch stage 302, a Decode stage 304, and an Execute stage 306. During execution of a program by a processor (e.g., processor 101 in Figure 1), instructions are fetched from memory and/or a cache by the Fetch stage, passed to the Decode stage and decoded, and the decoded instructions are passed to the Execute stage and executed. The pipeline 300 is three- wide; that is, each stage can contain up to three instructions. 
[0040] The group of instructions illustrated in the Fetch stage is passed to the Decode stage, where the instructions are transformed via the logic "xform" 310. After being transformed, the instructions are pipelined into the Execute stage. The logic "xform" recognizes that the paired load commands 320, 322 can be replaced by a more bandwidth-efficient command, in this case a single double-load (LDRD) command 330. As illustrated, the two original load commands 320, 322 are not passed to the Execute stage. The replacement command 330 that replaced the two original load commands is illustrated with italic text. Another command 340 that was not altered is also shown.

[0041] According to aspects of the present disclosure, a table, referred to as a Storage Instruction Table (SIT) 308, may be associated with the Decode stage and used to maintain certain attributes of reads/writes that pass through the Decode stage.

[0042] Figure 4 illustrates an exemplary SIT 400. SIT 400 is illustrated as it would be populated for the group of instructions shown in Figure 3 when the instructions reach the Decode stage. Information regarding each instruction that passes through the Decode stage is stored in one row of the SIT. The SIT includes four columns. The Index column 402 identifies the instruction position relative to other instructions currently in the SIT. The Type column 404 identifies the type of the instruction as one of "Load," "Store," or "Other." "Other" is used for instructions that neither read from nor write to memory or cache. The Base Register column 406 indicates the register used as the base address by the load or store command. The Offset column 408 stores the immediate value added to the base register when the command is executed.
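As an illustrative aid (an assumed representation, not the patent's notation), the four columns of SIT 400 might be captured as follows, populated here for a fusible pair of loads:

# Illustrative sketch mirroring the four SIT columns of Figure 4; the field
# names and example rows are assumptions made for this sketch.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SitRow:
    index: int                    # Index: position relative to other rows
    type: str                     # Type: "Load", "Store", or "Other"
    base_register: Optional[str]  # Base Register: e.g., "R0"; None for "Other"
    offset: int                   # Offset: immediate added to the base register

sit = [
    SitRow(0, "Load", "R0", 0),
    SitRow(1, "Other", None, 0),
    SitRow(2, "Load", "R0", 4),
]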
[0043] Although the SIT is illustrated as containing only information about instructions from the Decode stage, the disclosure is not so limited. A SIT may contain information about instructions in other stages. In a processor with a longer pipeline, a SIT could have information about instructions that have already passed through the Decode stage.

[0044] A processor operating according to aspects of the present disclosure applies logic to recognize sequences (e.g., patterns) of instructions that may be replaced by other instructions, such as the sequences described above. If a sequence of instructions that may be replaced is recognized, then the processor transforms the recognized instructions into another instruction as the instructions flow towards the Execute stage.

[0045] To detect patterns and consolidate instructions as described herein, the pattern detection circuit that acts on the SIT and the pipeline may recognize the previously described sequences of load or store commands that access adjacent memory locations. In particular, the pattern detection circuit may compare the Base Register and Offset of each instruction of Type "Load" with the Base Register and Offset of every other instruction of Type "Load" and determine whether any two "Load" instructions have a same Base Register and Offsets that cause the two "Load" instructions to access adjacent memory locations. The pattern detection circuit may also determine if changes to a Base Register that occur between compared "Load" instructions cause two instructions to access adjacent memory locations. When the pattern detection circuit determines that two "Load" instructions access adjacent memory locations, then the pattern detection circuit replaces the two "Load" instructions with an equivalent, more bandwidth-efficient replacement command. The pattern detection circuit then passes the replacement command to the Execute stage. The pattern detection circuit may also perform similar comparisons and replacements for instructions of Type "Store." The pattern detection circuit may also determine PC values that will be used for "Load" instructions affecting PC-relative memory locations and then use the determined PC values (and any offsets included in the instructions) to determine if any two "Load" instructions access adjacent memory locations. The pattern detection circuit may perform similar PC value determinations for "Store" instructions affecting PC-relative memory locations and use the determined PC values to determine if any two "Store" instructions access adjacent memory locations.
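A minimal sketch of the comparison just described is shown below, under simplifying assumptions (base-register updates between rows and PC-relative addressing are omitted, and the names and 4-byte width are invented for this example):

# Illustrative sketch (assumed, simplified logic): scan SIT-like rows for two
# "Load" entries with the same base register and offsets one word apart.
WORD_BYTES = 4

def find_fusible_pair(rows):
    """rows: (type, base_register, offset) tuples in program order."""
    loads = [(i, r) for i, r in enumerate(rows) if r[0] == "Load"]
    for n, (i, a) in enumerate(loads):
        for j, b in loads[n + 1:]:
            # Base-register updates between the two rows are ignored here;
            # the pattern detection circuit described above accounts for them.
            if a[1] == b[1] and abs(a[2] - b[2]) == WORD_BYTES:
                return i, j  # indices of the pair to replace with one command
    return None

rows = [("Load", "R0", 0), ("Other", None, 0), ("Load", "R0", 4)]
assert find_fusible_pair(rows) == (0, 2)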
[0046] Figure 5 is a block diagram illustrating a computing device 501 integrating the processor 101 configured to detect patterns of instructions accessing memory using a small portion of bandwidth (e.g., bus-width) and replace the patterns with instructions using a larger portion of bandwidth, according to one aspect. All of the apparatuses and methods depicted in Figures 1-4 may be included in or performed by the computing device 501. The computing device 501 may also be connected to other computing devices via a network 530. In general, the network 530 may be a telecommunications network and/or a wide area network (WAN). In a particular aspect, the network 530 is the Internet. Generally, the computing device 501 may be any device which includes a processor configured to implement detecting patterns of instructions accessing memory using a small portion of bandwidth and replacing the patterns with instructions using a larger portion of bandwidth, including, without limitation, a desktop computer, a server, a laptop computer, a tablet computer, and a smart phone.

[0047] The computing device 501 generally includes the processor 101 connected via a bus 520 to a memory 508, a network interface device 518, a storage 509, an input device 522, and an output device 524. The computing device 501 generally operates according to an operating system (not shown). Any operating system supporting the functions disclosed herein may be used. The processor 101 is included to be representative of a single processor, multiple processors, a single processor having multiple processing cores, and the like. The network interface device 518 may be any type of network communications device allowing the computing device 501 to communicate with other computing devices via the network 530.

[0048] The storage 509 may be a persistent storage device. Although the storage 509 is shown as a single unit, the storage 509 may be a combination of fixed and/or removable storage devices, such as fixed disc drives, solid state drives, SAN storage, NAS storage, removable memory cards or optical storage. The memory 508 and the storage 509 may be part of one virtual address space spanning multiple primary and secondary storage devices.

[0049] The input device 522 may be any device operable to enable a user to provide input to the computing device 501. For example, the input device 522 may be a keyboard and/or a mouse. The output device 524 may be any device operable to provide output to a user of the computing device 501. For example, the output device 524 may be any conventional display screen and/or set of speakers. Although shown separately from the input device 522, the output device 524 and input device 522 may be combined. For example, a display screen with an integrated touch-screen may be a combined input device 522 and output device 524.

[0050] A number of aspects have been described. However, various modifications to these aspects are possible, and the principles presented herein may be applied to other aspects as well. The various tasks of such methods may be implemented as sets of instructions executable by one or more arrays of logic elements, such as microprocessors, embedded controllers, or IP cores.

[0051] The foregoing disclosed devices and functionalities may be designed and configured into computer files (e.g., RTL, GDSII, GERBER, etc.) stored on computer readable media. Some or all such files may be provided to fabrication handlers who fabricate devices based on such files. Resulting products include semiconductor wafers that are then cut into semiconductor die and packaged into a semiconductor chip. Some or all such files may be provided to fabrication handlers who configure fabrication equipment using the design data to fabricate the devices described herein. Resulting products formed from the computer files include semiconductor wafers that are then cut into semiconductor die (e.g., the processor 101) and packaged, and may be further integrated into products including, but not limited to, mobile phones, smart phones, laptops, netbooks, tablets, ultrabooks, desktop computers, digital video recorders, set-top boxes, servers, and any other devices where integrated circuits are used.

[0052] In one aspect, the computer files form a design structure including the circuits described above and shown in the Figures in the form of physical design layouts, schematics, or a hardware-description language (e.g., Verilog, VHDL, etc.). For example, the design structure may be a text file or a graphical representation of a circuit as described above and shown in the Figures. The design process preferably synthesizes (or translates) the circuits described herein into a netlist, where the netlist is, for example, a list of wires, transistors, logic gates, control circuits, I/O, models, etc. that describes the connections to other elements and circuits in an integrated circuit design and is recorded on at least one machine readable medium. For example, the medium may be a storage medium such as a CD, a compact flash, other flash memory, or a hard-disk drive. In another embodiment, the hardware, circuitry, and method described herein may be configured into computer files that simulate the function of the circuits described above and shown in the Figures when executed by a processor. These computer files may be used in circuitry simulation tools, schematic editors, or other software applications.

[0053] As used herein, a phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. As an example, "at least one of: a, b, or c" is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c).

[0054] The previous description of the disclosed aspects is provided to enable a person skilled in the art to make or use the disclosed aspects.
Various modifications to these aspects will be readily apparent to those skilled in the art, and the principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope possible consistent with the principles and novel features as defined by the following claims. |
The invention relates to a boost switch driver for high speed signal switching. An example boost switch driver circuit includes two branches. The first branch includes a first transistor. The second branch includes a second transistor and a level shifter circuit. One of the transistors is an N-type transistor and the other is a P-type transistor. The circuit is configured to divide an input clock signal between the first branch and the second branch such that a portion of the input clock signal divided into the first branch is provided to the first transistor, a portion of the input clock signal divided into the second branch is level shifted by the level shifter circuit to produce a level-shifted input clock signal, and the level-shifted input clock signal is provided to the second transistor. The circuit is further configured to combine the output of the first transistor with the output of the second transistor to produce an output clock signal. |
1. An electronic assembly comprising:
one or more switches; and
a switch driver circuit to drive the one or more switches, the switch driver circuit comprising:
an input to receive an input clock signal;
an output to provide an output clock signal;
a first transistor and a second transistor, each including a first terminal and a second terminal;
a third transistor coupled to the first transistor in a cascode arrangement; and
a level shifter circuit to level shift the input clock signal to generate a level-shifted input clock signal,
wherein:
the first terminal of the first transistor is to receive a signal indicative of the input clock signal,
the first terminal of the second transistor is to receive a signal indicative of the level-shifted input clock signal,
the second terminal of the first transistor is coupled to a third terminal of the third transistor and a second terminal of the third transistor is coupled to the output, and
the second terminal of the second transistor is coupled to the output.

2. The electronic assembly of claim 1, wherein:
the input clock signal has a low voltage value and a high voltage value, and
level shifting the input clock signal includes the level shifter circuit changing each of the low voltage value and the high voltage value of the input clock signal to generate the level-shifted input clock signal.

3. The electronic assembly of claim 1, wherein the level shifter circuit includes a coupling capacitor and a voltage controller circuit, and the first terminal of the second transistor is to receive the signal indicative of the level-shifted input clock signal through the coupling capacitor having a first capacitor electrode coupled to the input and a second capacitor electrode coupled to each of the voltage controller circuit and the first terminal of the second transistor.

4. The electronic assembly of claim 3, wherein:
the voltage controller circuit includes a pair of cross-coupled transistors, each including a first terminal, a second terminal, and a third terminal,
the first terminal of a first transistor of the pair of cross-coupled transistors is coupled to the second terminal of a second transistor of the pair of cross-coupled transistors,
the first terminal of the second transistor of the pair of cross-coupled transistors is coupled to the second terminal of the first transistor of the pair of cross-coupled transistors,
the third terminal of each of the first transistor of the pair of cross-coupled transistors and the second transistor of the pair of cross-coupled transistors is coupled to a reference voltage, and
a value of the reference voltage corresponds to the high voltage of the level-shifted input clock signal.

5. The electronic assembly of claim 3, wherein:
the level shifter circuit is a first level shifter circuit,
the switch driver circuit further includes a second level shifter circuit, and
the second level shifter circuit is to control a low voltage level of the output clock signal.

6. The electronic assembly of claim 3, wherein:
the voltage controller circuit includes a pair of cross-coupled transistors, each including a first terminal, a second terminal, and a third terminal,
the first terminal of a first transistor of the pair of cross-coupled transistors is coupled to the second terminal of a second transistor of the pair of cross-coupled transistors,
the first terminal of the second transistor of the pair of cross-coupled transistors is coupled to the second terminal of the first transistor of the pair of cross-coupled transistors,
the third terminal of each of the first transistor of the pair of cross-coupled transistors and the second transistor of the pair of cross-coupled transistors is coupled to a reference voltage, and
a value of the reference voltage corresponds to the low voltage of the level-shifted input clock signal.
7. The electronic assembly of claim 1, wherein each of the first transistor and the second transistor is a field effect transistor, and wherein the first terminal is a gate terminal, the second terminal is a drain terminal, and a third terminal is a source terminal.

8. The electronic assembly of claim 1, wherein the electronic assembly is an analog-to-digital converter.

9. An electronic assembly comprising:
one or more switches; and
a switch driver circuit to drive the one or more switches, the switch driver circuit comprising:
an input to receive an input clock signal;
an output to provide an output clock signal;
a first transistor and a second transistor, each including a first terminal and a second terminal;
a first level shifter circuit to level shift the input clock signal to generate a level-shifted input clock signal; and
a second level shifter circuit,
wherein:
the first terminal of the first transistor is to receive a signal indicative of the input clock signal,
the first terminal of the second transistor is to receive a signal indicative of the level-shifted input clock signal,
the second terminal of the second transistor is coupled to the second level shifter circuit,
each of the second terminal of the first transistor and the second terminal of the second transistor is coupled to the output,
the first level shifter circuit is to control a high voltage value of the level-shifted input clock signal, and
the second level shifter circuit is to control a high voltage level of the output clock signal.

10. The electronic assembly of claim 9, further comprising a third transistor coupled to the first transistor in a cascode arrangement, wherein the second terminal of the first transistor is coupled to the output by the second terminal of the first transistor being coupled to a third terminal of the third transistor and a second terminal of the third transistor being coupled to the output.

11. The electronic assembly of claim 9, wherein the first level shifter circuit includes a coupling capacitor and a voltage controller circuit, and the first terminal of the second transistor is to receive the signal indicative of the level-shifted input clock signal through the coupling capacitor having a first capacitor electrode coupled to the input and a second capacitor electrode coupled to each of the voltage controller circuit and the first terminal of the second transistor.
12. The electronic assembly of claim 11, wherein a third terminal of the second transistor is to couple to a power supply voltage, and a value of the power supply voltage corresponds to a high voltage of the level-shifted input clock signal.

13. The electronic assembly of claim 9, wherein the electronic assembly is an analog-to-digital converter.

14. A switch driver circuit comprising:
an input to receive an input clock signal;
an output to provide an output clock signal;
a first transistor and a second transistor, each including a first terminal and a second terminal;
a first level shifter circuit to level shift the input clock signal to generate a level-shifted input clock signal; and
a second level shifter circuit,
wherein:
the first terminal of the first transistor is to receive a signal indicative of the input clock signal,
the first terminal of the second transistor is to receive a signal indicative of the level-shifted input clock signal,
the second terminal of the second transistor is coupled to the second level shifter circuit, and
each of the second terminal of the first transistor and the second terminal of the second transistor is coupled to the output.

15. The switch driver circuit of claim 14, wherein:
the first level shifter circuit is to control a high voltage value of the level-shifted input clock signal, and
the second level shifter circuit is to control a low voltage level of the output clock signal.

16. The switch driver circuit of claim 15, wherein:
the first level shifter circuit includes a voltage controller circuit and a coupling capacitor having a first capacitor electrode and a second capacitor electrode,
the first capacitor electrode is coupled to the input, and
the second capacitor electrode is coupled to each of the voltage controller circuit and the first terminal of the second transistor.

17. The switch driver circuit of claim 14, wherein:
the first level shifter circuit is to control a high voltage value of the level-shifted input clock signal, and
the second level shifter circuit is to control a high voltage level of the output clock signal.

18. The switch driver circuit of claim 14, wherein:
the first level shifter circuit is to control a low voltage value of the level-shifted input clock signal, and
the second level shifter circuit is to control a high voltage level of the output clock signal.

19. The switch driver circuit of claim 14, wherein:
the first level shifter circuit is to control a low voltage value of the level-shifted input clock signal, and
the second level shifter circuit is to control a low voltage level of the output clock signal.

20. The switch driver circuit of claim 14, further comprising a third transistor coupled to the first transistor in a cascode arrangement, wherein the second terminal of the first transistor is coupled to the output by the second terminal of the first transistor being coupled to a third terminal of the third transistor and a second terminal of the third transistor being coupled to the output. |
Boost Switch Driver for High Speed Signal Switching

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is related to US Patent Application Serial No. 63/065,590, filed August 14, 2020, entitled "BOOSTED SWITCH DRIVERS FOR HIGH-SPEED SIGNAL SWITCHING," the disclosure of which is incorporated herein by reference in its entirety.

TECHNICAL FIELD

The present disclosure relates generally to electronic devices and systems, and more particularly to switch drivers.

BACKGROUND

In electronics and signal processing, a switch driver is a device that controls a switch. For example, a sample-and-hold analog-to-digital converter (ADC) includes multiple switches and multiple switch drivers configured to control the different switches. A switch driver can be described as "boosted" when the driver's output voltage swing can exceed the core supply rails in a given circuit. For example, boost switch drivers can be used in radio frequency (RF) sampling ADCs to control switches with high gate voltages while being able to handle large signal swings.

The cost, quality, and robustness of a boost switch driver can be affected by various factors. Physical constraints such as space/surface area can place further constraints on boost switch driver requirements or specifications, so compromises and ingenuity must be applied when designing the best boost switch driver for a given application. Boost switch drivers designed for high-speed signal switching (e.g., for RF ADCs) are particularly challenging.

SUMMARY OF THE INVENTION

According to one aspect of the present disclosure, there is provided an electronic assembly comprising: one or more switches; and a switch driver circuit to drive the one or more switches, the switch driver circuit comprising: an input to receive an input clock signal; an output to provide an output clock signal; a first transistor and a second transistor, each including a first terminal and a second terminal; a third transistor coupled to the first transistor in a cascode arrangement; and a level shifter circuit to level shift the input clock signal to generate a level-shifted input clock signal, wherein: the first terminal of the first transistor is to receive a signal indicative of the input clock signal, the first terminal of the second transistor is to receive a signal indicative of the level-shifted input clock signal, the second terminal of the first transistor is coupled to a third terminal of the third transistor and a second terminal of the third transistor is coupled to the output, and the second terminal of the second transistor is coupled to the output.

According to another aspect of the present disclosure, there is provided an electronic assembly comprising: one or more switches; and a switch driver circuit to drive the one or more switches, the switch driver circuit comprising: an input to receive an input clock signal; an output to provide an output clock signal; a first transistor and a second transistor, each including a first terminal and a second terminal; a first level shifter circuit to level shift the input clock signal to generate a level-shifted input clock signal; and a second level shifter circuit, wherein: the first terminal of the first transistor is to receive a signal indicative of the input clock signal, the first terminal of the second transistor is to receive a signal indicative of the level-shifted input clock signal, the second terminal of the second transistor is coupled to the second level shifter circuit, each of the second terminal of the first transistor and the second terminal of the second transistor is coupled to the output, the first level shifter circuit is to control a high voltage value of the level-shifted input clock signal, and the second level shifter circuit is to control a high voltage level of the output clock signal.
According to yet another aspect of the present disclosure, there is provided a switch driver circuit comprising: an input to receive an input clock signal; an output to provide an output clock signal; a first transistor and a second transistor, each including a first terminal and a second terminal; a first level shifter circuit to level shift the input clock signal to generate a level-shifted input clock signal; and a second level shifter circuit, wherein: the first terminal of the first transistor is to receive a signal indicative of the input clock signal, the first terminal of the second transistor is to receive a signal indicative of the level-shifted input clock signal, the second terminal of the second transistor is coupled to the second level shifter circuit, and each of the second terminal of the first transistor and the second terminal of the second transistor is coupled to the output.

DESCRIPTION OF DRAWINGS

In order to provide a more complete understanding of the present disclosure and its features and advantages, reference is made to the following description taken in conjunction with the accompanying drawings, in which like reference numerals refer to like parts:

FIG. 1 provides a circuit diagram of an example circuit in which a boost switch driver with level shifting in branches of P-type transistors may be used, according to some embodiments of the present disclosure;

FIG. 2 provides a circuit diagram of an example circuit having a boost switch driver with level shifting in branches of P-type transistors, in accordance with some embodiments of the present disclosure;

FIG. 3 provides a circuit diagram of an example level shifter circuit configured to perform level shifting while controlling a maximum/high signal level, in accordance with some embodiments of the present disclosure;

FIG. 4 provides a circuit diagram of an example circuit having a boost switch driver with level shifting in branches of P-type transistors implemented using the level shifter circuit of FIG. 3, in accordance with some embodiments of the present disclosure;
FIG. 5 provides a circuit diagram of an example circuit having a boost switch driver with level shifting in the branch of the P-type transistor and having an additional transistor provided as a common-gate (cascode) transistor to the common-source N-type transistor, in accordance with some embodiments of the present disclosure;

FIG. 6 provides a circuit diagram of an example circuit having a boost switch driver with level shifting in the branch of the P-type transistor and with an additional level shifter, the additional level shifter configured to control the minimum/low signal level, in accordance with some embodiments of the present disclosure;

FIG. 7 provides a circuit diagram of an example circuit in which a boost switch driver with level shifting in a branch of an N-type transistor may be used, according to some embodiments of the present disclosure;

FIG. 8 provides a circuit diagram of an example circuit having a boost switch driver with level shifting in branches of N-type transistors, in accordance with some embodiments of the present disclosure;

FIG. 9 provides a circuit diagram of an example level shifter circuit configured to perform level shifting while controlling a minimum/low signal level, according to some embodiments of the present disclosure;

FIG. 10 provides a circuit diagram of an example circuit with a boost switch driver having level shifting, implemented using the level shifter circuit of FIG. 9, in branches of N-type transistors, in accordance with some embodiments of the present disclosure;

FIG. 11 provides a circuit diagram of an example circuit with a boost switch driver with level shifting in the branch of the N-type transistor and having an additional transistor provided as a common-gate (cascode) transistor to the common-source P-type transistor, in accordance with some embodiments of the present disclosure;

FIG. 12 provides a circuit diagram of an example circuit having a boost switch driver with level shifting in branches of N-type transistors and with an additional level shifter, the additional level shifter configured to control the minimum/low level, in accordance with some embodiments of the present disclosure;

FIG. 13 provides a schematic illustration of example components in which one or more boost switch drivers may be implemented, according to some embodiments of the present disclosure;

FIG. 14 is a block diagram of an example system that may include one or more boost switch drivers, according to some embodiments of the present disclosure;

FIG. 15 is a block diagram of an example RF device that may include one or more boost switch drivers, according to some embodiments of the present disclosure; and

FIG. 16 provides a block diagram illustrating an example data processing system configurable to control the operation of one or more boost switch drivers, in accordance with some embodiments of the present disclosure.

DETAILED DESCRIPTION

Overview

The systems, methods, and apparatuses of the present disclosure each have several innovative aspects, no single one of which is solely responsible for all of the desirable attributes disclosed herein. The details of one or more implementations of the subject matter described in this disclosure are set forth in the description below and in the accompanying drawings.

Embodiments of the present disclosure relate to switch driver circuits, and to devices and systems in which such circuits may be implemented. In one aspect of the present disclosure, an example switch driver circuit includes two branches.
The first branch includes a first transistor (e.g., transistor m5 shown in the figures of the present disclosure). The second branch includes a second transistor (e.g., transistor m6 shown in the figures of this disclosure) and a level shifter circuit. One of these transistors is an N-type transistor and the other is a P-type transistor. The circuit is configured to divide the input clock signal between the first branch and the second branch such that a portion of the input clock signal divided into the first branch is provided to the first transistor, and a portion of the input clock signal divided into the second branch is level shifted by the level shifter circuit to generate a level-shifted input clock signal, and the level-shifted input clock signal is provided to the second transistor. In a context where the input clock signal has a low voltage value and a high voltage value, level shifting the input clock signal includes the level shifter circuit changing each of the low and high voltage values of the input signal to generate the level-shifted input signal. The switch driver circuit is further configured to combine the output of the first transistor with the output of the second transistor to generate an output clock signal. Various embodiments of this circuit are hereinafter described as "boost switch driver circuits" (or simply as "boost switch drivers") because they may allow output voltage swings that exceed the core supply rails to be provided. The boost switch drivers described herein can advantageously allow extremely fast boosted edges to be provided where high speed signal processing requires additional swing, which can help maximize both clock speed and dynamic range. Other aspects of the present disclosure provide systems (e.g., RF transceivers) that may include one or more boost switch drivers as described herein, and methods for providing such boost switch drivers.

The precise design of the boost switch drivers described herein can be implemented in many different ways, all of which are within the scope of this disclosure.

In one example of design variations according to various embodiments of the present disclosure, a selection may be made individually for each of the transistors of a boost switch driver according to any of the embodiments described herein to employ bipolar transistors (e.g., where the various transistors may be NPN or PNP transistors), field effect transistors (FETs), such as metal oxide semiconductor (MOS) technology transistors (e.g., where the various transistors may be N-type MOS (NMOS) or P-type MOS (PMOS) transistors), or a combination of one or more FETs and one or more bipolar transistors, as long as, of the transistors of the first and second branches of the boost switch driver circuit, one is an N-type transistor (e.g., an NPN transistor if a bipolar transistor, or an NMOS transistor if a FET) and the other is a P-type transistor (e.g., a PNP transistor if a bipolar transistor, or a PMOS transistor if a FET). In the figures of the present disclosure, the transistors are shown as FETs, and therefore the description will refer to their terminals as gate, drain, and source terminals. However, in further embodiments of the present disclosure, any of the FETs shown in the figures may be replaced with corresponding bipolar transistors.
Accordingly, the description provided below with reference to a "gate terminal" may be considered to refer to a "first terminal", where the term "first terminal" of a transistor is used to refer to the gate terminal if the transistor is a FET, or to the base terminal if the transistor is a bipolar transistor. Similarly, the description provided below with reference to a "drain terminal" may be considered to refer to a "second terminal", where the term "second terminal" of a transistor is used to refer to the drain terminal if the transistor is a FET, or to the collector terminal if the transistor is a bipolar transistor, and the description provided below with reference to a "source terminal" may be considered to refer to a "third terminal", where the term "third terminal" of a transistor is used to refer to the source terminal if the transistor is a FET, or to the emitter terminal if the transistor is a bipolar transistor. These terms remain the same whether the transistors of a given technology are N-type transistors or P-type transistors.

In another example, in various embodiments, the choice as to which type of transistor architecture to employ may be made individually for each of the transistors of any of the boost switch drivers as described herein. For example, any of the transistors of a boost switch driver as described herein implemented as FETs may be planar transistors or may be non-planar transistors (some examples of the latter include FinFETs, nanowire transistors, and nanoribbon transistors).

As will be appreciated by those skilled in the art, aspects of the present disclosure, in particular the boost switch drivers as set forth herein, may be embodied in various ways, e.g., as a method, system, computer program product, or computer-readable storage medium. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, microcode, etc.), or an embodiment combining software and hardware aspects, all of which may generally be referred to herein as a "circuit," "module," or "system." The functions described in this disclosure may be implemented as algorithms executed by one or more hardware processing units (e.g., one or more microprocessors) of one or more computers. In various embodiments, different steps and portions of steps of each of the methods described herein may be performed by different processing units. Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer-readable media, preferably non-transitory, having computer readable program code embodied (e.g., stored) thereon. In various embodiments, such a computer program may, for example, be downloaded (updated) to existing devices and systems (e.g., existing RF ADCs, transceivers and/or their controllers, etc.), or be stored at the time of manufacture of such devices and systems.

The following detailed description presents various illustrations of certain embodiments. However, the innovations described herein can be embodied in numerous different ways, e.g., as defined and encompassed by the selected examples.

In the following description, reference is made to the drawings, wherein like reference numbers may indicate identical or functionally similar elements. It will be understood that elements shown in the figures are not necessarily drawn to scale.
Furthermore, some embodiments may incorporate any suitable combination of features from two or more figures. Furthermore, it will be understood that certain embodiments may include more elements than, and/or a subset of, the elements shown in the figures. In general, although some of the figures provided herein illustrate various aspects of boost switch drivers, and systems in which such circuits may be implemented, the details of these systems may vary in different embodiments. For example, the various components of a boost switch driver presented herein may have other components included therein, or coupled to them, that are not specifically shown in the figures, such as logic, storage, passive elements (e.g., resistors, capacitors, inductors, etc.), or other elements (e.g., transistors, etc.). In another example, details shown in some of the figures, such as the specific arrangements and example implementation details of various components of a boost switch driver presented herein (e.g., details of level shifter circuits) and/or the specific arrangement of the coupling connections, may vary in different embodiments, with the illustrations in the figures of the present disclosure merely providing some examples of how these components may be used together to implement a boost switch driver. In yet another example, although some embodiments shown in the figures of the present disclosure show a particular number of components (e.g., a particular number of level shifter circuits in a boost switch driver), it should be understood that these embodiments may be implemented in a boost switch driver, or in any other device or system, having any number of these components. Additionally, although certain elements, such as various elements of the boost switch drivers presented herein, may be depicted in the figures as being communicatively coupled using a single depicted wire, in some embodiments any of these elements can each be coupled by multiple conductive lines, such as those that may be present in a bus or when differential signaling is involved.

The description may use the phrases "in one embodiment" or "in an embodiment," which may each refer to one or more of the same or different embodiments. Unless otherwise specified, the use of the ordinal adjectives "first," "second," "third," etc. to describe a common object merely indicates that different instances of similar objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence in time, space, rank, or in any other way. Furthermore, for the purposes of this disclosure, the phrase "A and/or B" or the notation "A/B" means (A), (B), or (A and B), while the phrase "A, B and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C). As used herein, the notation "A/B/C" means (A, B and/or C). When used with reference to a measurement range, the term "between" includes the ends of the measurement range.

The various aspects of the illustrative embodiments are described using terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. For example, the term "connected" means a direct electrical connection between connected things, without any intervening devices/components, while the term "coupled" means either a direct electrical connection between connected things, or an indirect electrical connection through one or more passive or active intermediate devices/components.
In another example, the terms "circuitry" or "circuit" (used interchangeably) refer to one or more passive and/or active components arranged to cooperate with each other to provide a desired function. Sometimes, in this description, the term "circuit" may be omitted (e.g., a boost switch driver circuit may be simply referred to as a "boost switch driver"; a level shifter circuit may be simply referred to as a "level shifter", etc.). If used, the terms "substantially," "approximately," "about," etc. may generally be used to mean within +/-20% of a target value based on the context of the particular value as described herein or as known in the art, e.g., within +/-10% of the target value.

Boost switch driver circuit with two branches and level shifting in one branch

All embodiments of the boost switch driver proposed herein are based on splitting the input signal to the boost switch driver circuit between two branches and then combining the outputs of the two branches to generate the output signal from the boost switch driver circuit. Furthermore, all embodiments include a level shifter circuit in at least one of the branches, which allows the output signal from the boost switch driver circuit to have a larger signal swing than the input signal while carefully controlling the maximum/high signal level, the minimum/low signal level, or both, of the output signal. One of the branches includes a P-type transistor and the other branch includes an N-type transistor. When only one of the branches includes a level shifter circuit, the embodiments of the boost switch driver presented herein can be broadly divided into the following two groups: a group of embodiments with level shifting on the side of the P-type transistor (i.e., in the branch containing the P-type transistor), as shown in Figures 1 to 6, and a group of embodiments with level shifting on the side of the N-type transistor (i.e., in the branch containing the N-type transistor), as shown in Figures 7 to 12. However, further embodiments are possible and within the scope of the present disclosure, wherein level shifting is performed on both the side of the P-type transistor and the side of the N-type transistor. Such embodiments may be viewed as a combination of level shifting as described with reference to Figures 1-6 and level shifting as described with reference to Figures 7-12, all of which are within the scope of the present disclosure.
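Before turning to the specific circuits, the split, shift, and combine operation described above can be illustrated with a small behavioral model. This is an idealized sketch under assumptions invented for this example (the threshold values and the inverting behavior follow the 0/1 V input and 1.4 V rail example used below); it is not the disclosed circuit:

# Idealized behavioral sketch (assumed, illustrative only) of the two-branch
# boosted driver: the N-type branch pulls the output low when the input is
# high, and the P-type branch, driven by the level-shifted clock, pulls the
# output to the boosted rail when the input is low.
V_RAIL_BOOST = 1.4   # boosted supply of the P-type branch (example value)
GATE_SWING = 0.9     # level-shifted swing at the P-type gate (example value)

def level_shift(v_in):
    """Map a 0-1 V input onto the (V_RAIL_BOOST - GATE_SWING)..V_RAIL_BOOST range."""
    return (V_RAIL_BOOST - GATE_SWING) + v_in * GATE_SWING

def boosted_inverter(v_in):
    """Static response: logic-high input -> 0 V out, logic-low input -> boosted rail."""
    v_gate_p = level_shift(v_in)            # gate drive for the P-type branch
    if v_in >= 0.5:                          # N-type branch conducts
        return 0.0
    if v_gate_p <= V_RAIL_BOOST - 0.4:       # P-type branch conducts
        return V_RAIL_BOOST
    return float("nan")                      # transition region, not modeled

assert boosted_inverter(1.0) == 0.0           # input high -> output low
assert boosted_inverter(0.0) == V_RAIL_BOOST  # input low -> boosted output high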
Example boost switch driver circuit with level shifting in branches of P-type transistors

FIG. 1 provides a circuit diagram of an example circuit 100 in which a boost switch driver with level shifting in branches of P-type transistors may be used, according to some embodiments of the present disclosure. As shown in FIG. 1, circuit 100 may include a series arrangement of a first inverter 102 and a second inverter 104 coupled to a switch 106. The first inverter 102 may be a fast inverter configured to drive the second inverter 104 (e.g., a basic two-transistor inverter with one PMOS and one NMOS transistor, using core MOS transistors with a core supply voltage), while the second inverter 104 may comprise any of the boost switch drivers described herein with level shifting performed in the branches of the P-type transistors. The first inverter 102 may be configured to receive a digital signal (e.g., a clock signal) having a range of voltage values and provide an inverted version of the signal to the second inverter 104. For example, the digital signal provided to the first inverter 102 may have a range of voltage values between a low voltage value (e.g., 0 volts (V)) and a high voltage value (e.g., 1 V); an example of such a digital signal is shown in FIG. 1 as signal 112 (schematically shown within the dashed box labeled "112"). The output of the first inverter 102 corresponding to this input is shown in FIG. 1 as signal 114 (schematically shown within the dashed box labeled "114"). Therefore, the output of the first inverter 102 swings to 1 V. The second inverter 104 may be a boost switch driver that is boosted (i.e., configured to boost/increase the output swing). This is shown in FIG. 1 with the second inverter 104 receiving a signal with a swing of 1 V (i.e., signal 114) and outputting an inverted and boosted version of the signal, shown in FIG. 1 as signal 116, which still has a low voltage value of 0 V but now has a high voltage value of 1.4 V (schematically shown within the dashed box labeled "116"). The output of the second inverter 104 may then be used to drive a switch 106, which may be implemented as the transistor shown in FIG. 1 in some embodiments.

In various embodiments, the inverter 104 may include any of the boost switch driver circuits presented herein with level shifting in the branches of the P-type transistors. In various embodiments, the signals 112, 114, 116 may be different, e.g., the signals may have different high and low values. Similarly, the low and high supply voltages coupled to each of inverter 102 and inverter 104, as well as the boost switch driver circuit with level shifting in the branches of the P-type transistors used to implement the inverter 104, may vary in different embodiments. Furthermore, in further embodiments, an inverter 104 including any of the boost switch driver circuits presented herein with level shifting in the branches of the P-type transistors may be included in circuits other than circuit 100 and, in particular, may not necessarily be implemented with inverter 102 as shown in FIG. 1.

FIG. 2 provides a circuit diagram of an example circuit 200 having a boost switch driver with level shifting on the side of a P-type transistor, in accordance with some embodiments of the present disclosure. Circuit 200 may be considered an example of circuit 100, wherein the reference numerals used for circuit 100 of FIG. 1 are used in FIG. 2 to refer to elements that are identical or functionally similar, so that the description of these elements is not repeated for the other figure and only the differences are described (the same holds for the other figures of the present disclosure).

As shown in FIG. 2, in some embodiments, the first inverter 102 may be implemented as a pair of complementary transistors m3, m4 (i.e., one of the transistors is N-type and the other is P-type). For example, the gate terminal of transistor m3 may be coupled to the gate terminal of transistor m4, and both may be coupled to the input clock signal 112; the drain terminal of transistor m3 may be coupled to the drain terminal of transistor m4, and both may be coupled to the output 114; and the source terminals of transistors m3 and m4 may be coupled to the low and high supply voltages for the first inverter 102. For example, transistor m3 may be an N-type transistor (e.g., an NMOS transistor, as shown in FIG. 2 with the corresponding circuit representation for transistor m3) with its source terminal coupled to a low supply voltage (e.g., 0 V), and transistor m4 may be a P-type transistor (e.g., a PMOS transistor, as shown in FIG. 2 with the corresponding circuit representation for transistor m4) with its source terminal coupled to a high supply voltage (e.g., 1 V).
As described above, the output clock signal 114 of the first inverter 102 may be used as the basis for the input clock signal to the second inverter 104.

FIG. 2 further shows a boost switch driver circuit 204, which may, for example, be an example of the second inverter 104 described above. As shown in FIG. 2, the circuit 204 may include an input 222, an output 224, and two branches 226 between the input 222 and the output 224. The first branch, indicated schematically with the dash-dotted line 226-1 in FIG. 2, may include transistor m5. The second branch, indicated schematically with the dashed line 226-2 in FIG. 2, may include transistor m6 and level shifter circuit 230. One of the transistors m5 and m6 may be an N-type transistor and the other may be a P-type transistor. Since FIG. 2 shows an embodiment with level shifting done on the side of the P-type transistor, transistor m6 is a P-type transistor (e.g., a PMOS transistor, as shown in FIG. 2 with the corresponding circuit representation for transistor m6) with its source terminal coupled to a high supply voltage (e.g., 1.4 V), while transistor m5 is an N-type transistor (e.g., an NMOS transistor, as shown in FIG. 2 with the corresponding circuit representation for transistor m5). Circuit 204 may be configured to split an input clock signal (e.g., signal 114) between first branch 226-1 and second branch 226-2 such that a portion of input clock signal 114 split into first branch 226-1 is provided to transistor m5 of that branch, and a portion of the input clock signal split into the second branch 226-2 is level shifted by the level shifter circuit 230 to generate the level-shifted signal 214-2, and the level-shifted signal 214-2 is provided to transistor m6. FIG. 2 shows a signal 214-1 provided to transistor m5 (i.e., to the gate terminal of transistor m5), and a signal 214-2 provided to transistor m6 (i.e., to the gate terminal of transistor m6). Signal 214-1 may be substantially the same as signal 114 in terms of its voltage swing (e.g., from about 0 to about 1 V, as shown for the example of FIG. 2), whereas signal 214-2 may be a level-shifted signal with a voltage swing from about 0.5 V to about 1.4 V. Circuit 204 is further configured to combine the output of transistor m5 (e.g., from the drain terminal of transistor m5) with the output of transistor m6 (e.g., from the drain terminal of transistor m6) to generate output clock signal 116. Accordingly, the gate terminal of each of transistors m5 and m6 may be coupled to input 222, except that the gate terminal of transistor m6 is coupled to input 222 via level shifter circuit 230, which is configured to perform level shifting before providing the signal to transistor m6. Similarly, the drain terminal of each of transistors m5 and m6 may be coupled to output 224, which may be coupled to switch 106 to be driven by switch driver circuit 204.

In some embodiments, level shifter circuit 230 may include a voltage controller circuit 232 that may be configured to receive reference signal 234 as input and generate output 236, as shown in FIG. 2.
Additionally, the level shifter circuit 230 may further include a coupling capacitor 238 coupled to the voltage controller circuit 232. For example, a first capacitor electrode of coupling capacitor 238 may be coupled to input 222, and a second capacitor electrode of coupling capacitor 238 may be coupled to each of voltage controller circuit 232 and the gate terminal of transistor m6. In other words, the portion of the input clock signal 114 split into the second branch 226-2 of the circuit 204 may be configured to be applied to the first capacitor electrode of the coupling capacitor 238, and the second capacitor electrode of the coupling capacitor 238 may be coupled to each of the output 236 of the voltage controller circuit 232 and the gate terminal of the transistor m6. The input 234 to the voltage controller circuit 232 may be a reference voltage or any other control signal configured to control the maximum voltage level set by the voltage controller circuit 232. The output 236 from the voltage controller circuit 232 may form the basis of the level-shifted input clock signal 214-2 to be provided to the gate terminal of the transistor m6.

When transistor m6 is a P-type transistor (as shown in the embodiment of FIG. 2), voltage controller circuit 232 may be configured to control the maximum voltage value in the level-shifted input clock signal 214-2 provided to transistor m6, and accordingly in the output signal 116. To this end, the input to NMOS transistor m5 may come directly from the 1 V domain inverter output 114 (swinging 0 V to 1 V) as input 214-1, as shown in FIG. 2, while the input to PMOS transistor m6 may be level shifted via coupling capacitor 238 together with the voltage controller circuit 232 to provide a maximum switching voltage of, e.g., 1.4 V and, ideally, a minimum switching voltage of 1.4 V - 1 V = 0.4 V. Due to capacitor charge redistribution, the capacitively coupled signal swing at the output Nout of the level shifter circuit 230 is attenuated by a ratio related to the size of the coupling capacitor 238 relative to the capacitance on the gate of the PMOS transistor m6 plus the wiring parasitic capacitance. Thus, the signal at the output Nout of the level shifter circuit 230 may swing from 1.4 V down to 1.4 V - 0.9 V = 0.5 V (rather than to the ideal 0.4 V, which would require the 1 V swing at the input Nin of the level shifter circuit 230 to be maintained at its output), as shown with signal 214-2 in FIG. 2. Voltage controller circuit 232, or level shifter circuit 230 as shown in FIG. 2, may be referred to as a "maximum level controller" because it is configured to set the maximum voltage value of the level-shifted input clock signal 214-2. The minimum voltage value of the level-shifted input clock signal 214-2 may then be automatically adjusted based on the ratio between the capacitance of the coupling capacitor 238 and the capacitance of the load of the voltage controller circuit 232.
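The swing arithmetic just described can be restated numerically. The sketch below is illustrative only; the capacitance values are arbitrary units invented so that the attenuation ratio comes out to 0.9, matching the example above:

# Illustrative sketch of the charge-redistribution attenuation described
# above; capacitance values are invented so that the ratio equals 0.9.
C_couple = 9.0  # coupling capacitor 238
C_load = 1.0    # gate capacitance of m6 plus wiring parasitics

v_swing_in = 1.0                        # input clock swings 0 V to 1 V
ratio = C_couple / (C_couple + C_load)  # 0.9 for these values
v_swing_out = v_swing_in * ratio        # 0.9 V instead of the ideal 1 V

v_max = 1.4                   # maximum level set by voltage controller 232
v_min = v_max - v_swing_out   # 1.4 V - 0.9 V = 0.5 V, as in the text above
print(f"Nout swings from {v_min:.1f} V to {v_max:.1f} V")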
In some embodiments of a level shifter circuit coupled to a P-type transistor of the boost switch driver circuit (e.g., as shown in FIG. 2), the high supply voltage coupled to the source terminal of the P-type transistor (i.e., transistor m6 in this case) may be substantially the same as the maximum voltage value controlled by the voltage controller circuit 232 (e.g., both may be 1.4 V), although in other embodiments these voltages may be different. In general, the value of the supply voltage to the source terminal of the P-type transistor (i.e., transistor m6 in this case) in the branch that also includes the maximum level controller may correspond to (e.g., be substantially equal to) the high voltage in the level-shifted input clock signal 214-2. In some embodiments of circuit 204, the low supply voltage to the source terminal of the N-type transistor (i.e., transistor m5 in this case) in the branch that does not include a level shifter circuit may be substantially the same as the minimum voltage value in the output signal 116 (e.g., both may be 0 V), although in other embodiments these voltages may be different (e.g., where a second level shifter circuit is included, as shown in FIG. 6).

In various embodiments, level shifter circuit 230 may be implemented in any manner that allows careful control of the maximum value of the level-shifted input clock signal 214-2 to be provided to P-type transistor m6. An example is shown in FIG. 3; however, in other embodiments of circuit 204, level shifter circuit 230 may be implemented differently. FIG. 3 provides a circuit diagram of an example level shifter circuit 300 that may be used with or in a boost switch driver, e.g., as the level shifter circuit 230 of the boost switch driver 204, to perform level shifting while controlling the maximum/high signal level, according to some embodiments of the present disclosure.

As shown in FIG. 3, level shifter circuit 300 may include a pair of cross-coupled transistors m1 and m2. Transistors m1 and m2 are cross-coupled in that the gate terminal of transistor m1 is coupled to the drain terminal of transistor m2 and the gate terminal of transistor m2 is coupled to the drain terminal of transistor m1. The source terminals of each of transistors m1 and m2 are coupled to reference voltage 234, which is provided to level shifter circuit 300 to control the maximum voltage level set by level shifter circuit 300. When the level shifter circuit 300 is a maximum level controller (the embodiment shown in FIG. 3), the transistors m1 and m2 may be P-type transistors, e.g., the PMOS transistors shown in FIG. 3 and in the subsequent figures involving the maximum level controller. Furthermore, when the level shifter circuit 300 is a maximum level controller, the value of the reference voltage 234 may be configured to correspond to the high voltage in the level-shifted input clock signal 214-2 output by the circuit. In some embodiments, reference voltage 234 may be substantially the same as the supply voltage to which the source terminal of transistor m6 is coupled (e.g., for the examples shown in FIGS. 2 and 3, both may be about 1.4 V), and may come from the same voltage source.

As further shown in FIG. 3, the level shifter circuit 300 may also include a pair of capacitors C1 and C2 and an inverter Inv between the capacitors, e.g., with the input of the inverter Inv coupled to the first capacitor electrode of capacitor C1 and the output of the inverter Inv coupled to the first capacitor electrode of capacitor C2. The second capacitor electrode of capacitor C1 may be coupled to the drain terminal of transistor m1, and the second capacitor electrode of capacitor C2 may be coupled to the drain terminal of transistor m2. The input to level shifter circuit 300, which may be input signal 114, may be applied/provided to node Nin, which is coupled to the first capacitor electrode of capacitor C1 and the input of inverter Inv, as shown in FIG. 3. Level shifter circuit 300 may generate an output signal from an output node Nout coupled to one or more of the second capacitor electrode of capacitor C1, the drain terminal of transistor m1, and the gate terminal of transistor m2. The output signal from output node Nout may be the level-shifted input clock signal 214-2 as described above, and may drive a load (represented in FIG. 3 by capacitor CLoad) coupled to output node Nout.
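For readers following the connectivity in FIG. 3, the couplings just described can be summarized in an ad hoc netlist-style sketch; the notation and the node names "n_c2" and "inv_out" are invented for this example and are not the patent's:

# Illustrative connectivity sketch (an assumed ad hoc notation) of level
# shifter circuit 300 as described above; CLoad is tied to ground here as
# a modeling assumption only.
level_shifter_300 = {
    "m1": {"type": "PMOS", "gate": "n_c2", "drain": "Nout", "source": "Vref_234"},
    "m2": {"type": "PMOS", "gate": "Nout", "drain": "n_c2", "source": "Vref_234"},
    "C1": {"first_electrode": "Nin",     "second_electrode": "Nout"},
    "C2": {"first_electrode": "inv_out", "second_electrode": "n_c2"},
    "Inv": {"input": "Nin", "output": "inv_out"},
    "CLoad": {"first_electrode": "Nout", "second_electrode": "gnd"},
}
# Cross-coupling check: each transistor's gate is the other's drain node.
assert level_shifter_300["m1"]["gate"] == level_shifter_300["m2"]["drain"]
assert level_shifter_300["m2"]["gate"] == level_shifter_300["m1"]["drain"]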
The input to level shifter circuit 300 may be applied/provided to node Nin, which is coupled to the first capacitor electrode of capacitor C1 and the input of inverter Inv, as shown in FIG. 3, and may be input signal 114. Level shifter circuit 300 may generate an output signal at an output node Nout coupled to one or more of the second capacitor electrode of capacitor C1, the drain terminal of transistor m1, and the gate terminal of transistor m2. The output signal from output node Nout may be a level-shifted input clock signal 214-2 as described above, and may drive a load (represented in FIG. 3 by capacitor CLoad) coupled to output node Nout.

Since level shifter circuit 300 is a maximum level controller, reference voltage 234 applied to the source terminals of transistors m1 and m2 can accurately control the maximum/high level of the output voltage of signal 214-2 (eg, ensuring that the level is about 1.4V). The minimum/low level of the output voltage of signal 214-2 may then be established based on the redistribution of capacitor charge between capacitor C1 and load capacitance CLoad, based on the ratio of these capacitances. Accordingly, the voltage swing of the output signal 214-2 may be based on the value of the reference voltage 234 provided to the level shifter circuit 300 and on the redistribution of the capacitor charge between the capacitor C1 and the load capacitance CLoad.

FIG. 4 provides a circuit diagram of an example circuit 400 having a boost switch driver with level shifting on the side of transistor m6, implemented using the level shifter circuit 300 of FIG. 3, in accordance with some embodiments of the present disclosure. Circuit 400 may be seen as an example of circuit 200 in which level shifter circuit 230 is implemented as level shifter circuit 300 and thus (in the schematic representation of FIG. 4) is replaced by level shifter circuit 300. In FIG. 4, the reference numbers used for the circuits of FIGS. 1-3 are used to refer to the same or functionally similar elements as those shown in FIGS. 1-3, so that descriptions of those elements are not repeated for FIG. 4. In addition, various components of the level shifter circuit 300 shown in FIG. 3 and described above, such as transistors m1 and m2, capacitors C1 and C2, and inverter Inv, are also shown in FIG. 4 to clarify how level shifter circuit 300 replaces level shifter circuit 230 shown in FIG. 2 for this embodiment. It should be noted that in the embodiment of FIG. 4, coupling capacitor 238 of level shifter circuit 230 is implemented by capacitor C1 of level shifter circuit 300. In the context of circuit 400, the input node Nin and output node Nout of level shifter circuit 300 are also shown in FIG. 4.

FIG. 5 provides a circuit diagram of an example circuit 500 having a boost switch driver with level shifting in the branch of a P-type transistor (ie, transistor m6 for the example shown) and with an additional transistor, in accordance with some embodiments of the present disclosure; the additional transistor is provided as a cascode transistor to the N-type transistor of the other branch (ie, transistor m5 for the example shown). Circuit 500 may be viewed as a further embodiment of circuit 400, wherein, similar to circuit 400, level shifter circuit 230 is implemented as level shifter circuit 300 and is thus (in the schematic representation of FIG. 5) replaced by level shifter circuit 300. In FIG. 5, the reference numbers used for the circuits of FIGS.
1-4 are used to refer to the same or functionally similar elements as those shown in FIGS. 1-4, so that the description of those elements is not repeated for FIG. 5, and only the differences are described. Circuit 500 differs from circuit 400 in that circuit 500 further includes an additional transistor m7 provided as a cascode transistor to transistor m5 described above. To this end, transistors m5 and m7 may be the same type of transistor (eg, both N-type transistors, as shown in the illustration of FIG. 5). Cascode transistor m7 may be considered part of the first branch 226-1. As shown in FIG. 5, the drain terminal of transistor m5 may be coupled to output 224 by coupling the drain terminal of transistor m5 to the source terminal of cascode transistor m7 and coupling the drain terminal of cascode transistor m7 to output 224. In various embodiments, the gate terminal of cascode transistor m7 may be coupled to a suitable reference voltage 534. For example, for embodiments where cascode transistor m7 is an N-type transistor as shown (ie, for embodiments where the level shifter circuit included in circuit 500 is a maximum level controller), the reference voltage 534 may be approximately 1V. However, in other embodiments, the value of the reference voltage 534 may be different.

Although only one cascode transistor m7 is shown in FIG. 5, in other embodiments of the circuit shown in FIG. 5, more than one cascode transistor m7 may be included in the first branch 226-1. Although FIG. 5 shows cascode transistor m7 in conjunction with level shifter circuit 300 of FIG. 3, in further embodiments of circuit 200 shown in FIG. 2, the first branch 226-1 of circuit 200 may include at least one such cascode transistor m7, where level shifter circuit 230 may, but need not, be implemented as level shifter circuit 300. Furthermore, in any of the embodiments of the boost switch driver with level shifting in the branch of the P-type transistor (eg, any of the embodiments described with reference to FIGS. 1-6), one or more additional transistors m8 (not specifically shown in the figures) may be provided as cascode transistors to transistor m6 (ie, provided as cascode transistors to the P-type transistor of the branch with the level shifter circuit). To this end, transistors m6 and m8 may be the same type of transistor (eg, both P-type transistors), and one or more cascode transistors m8 may be part of the second branch 226-2. For example, the gate terminal of such a cascode transistor m8 may be coupled to ground potential; the source terminal of transistor m8 may be coupled to the drain terminal of transistor m6; and the drain terminal of cascode transistor m8 may be coupled to output 224.

FIG. 6 provides a circuit diagram of an example circuit 600 having a boost switch driver with level shifting in the branch of a P-type transistor and having an additional level shifter circuit, in accordance with some embodiments of the present disclosure; the additional level shifter circuit is configured to control the minimum/low level. Circuit 600 may be viewed as a further embodiment of circuit 500, wherein, similar to circuit 500, level shifter circuit 230 is implemented as level shifter circuit 300 and thus (in the schematic representation of FIG. 6) is replaced by level shifter circuit 300. Also similar to circuit 500, circuit 600 further includes a cascode transistor m7. In FIG. 6, the reference numbers used for the circuits of FIGS.
1-5 are used to refer to elements that are identical or functionally similar to those shown in FIGS. 1-5, so that descriptions of those elements are not repeated for FIG. 6, and only the differences are described. Circuit 600 differs from circuit 500 in that circuit 600 further includes an additional level shifter circuit 630 configured to control the minimum/low level of the output signal from circuit 600. Accordingly, circuit 630 may be referred to as a "minimum level controller."

In some embodiments, additional level shifter circuit 630 may be implemented in a manner similar to level shifter circuit 300, except that the P-type transistors m1 and m2 of level shifter circuit 300 are replaced by N-type transistors in level shifter circuit 630 in order to control the minimum/low level of the output signal from the level shifter circuit 630. A more detailed description of such a circuit is provided with reference to FIG. 9 (ie, additional level shifter circuit 630 may be implemented as level shifter circuit 900 shown in FIG. 9).

As shown in FIG. 6, level shifter circuit 630 may include a pair of cross-coupled transistors m1 and m2. Transistors m1 and m2 are cross-coupled in that the gate terminal of transistor m1 is coupled to the drain terminal of transistor m2 and the gate terminal of transistor m2 is coupled to the drain terminal of transistor m1. The source terminals of each of transistors m1 and m2 are coupled to a reference voltage 634 that is provided to level shifter circuit 630 to control the minimum voltage level set by level shifter circuit 630. When the level shifter circuit 630 is a minimum level controller, the transistors m1 and m2 may be N-type transistors, eg, NMOS transistors as shown in FIG. 6 and in subsequent figures of the minimum level controller. Furthermore, when the level shifter circuit 630 is a minimum level controller, the value of the reference voltage 634 may be configured to correspond to the low voltage in the level-shifted clock signal 616 output by the circuit 630.

As further shown in FIG. 6, level shifter circuit 630 may also include a pair of capacitors C1 and C2 and an inverter Inv coupled between the capacitors, eg, with the input of the inverter Inv coupled to the first capacitor electrode of capacitor C1 and the output of the inverter Inv coupled to the first capacitor electrode of capacitor C2. The second capacitor electrode of capacitor C1 may be coupled to switch 106 to be driven by switch driver 600, while the second capacitor electrode of capacitor C2 may be coupled to the drain terminal of transistor m2 (and correspondingly to the gate terminal of transistor m1, since the drain terminal of transistor m2 is coupled to the gate terminal of transistor m1).

The input to the level shifter circuit 630 may be applied/provided to a node Nin of the circuit 630, which is coupled to the first capacitor electrode of capacitor C1 and, in some embodiments, to the input of inverter Inv, eg, as shown in FIG. 6. In some embodiments, the input to the level shifter circuit 630 may be based on the output signal 116 from the boost switch driver circuit 204, ie, for the example shown in FIG. 6, a signal swinging from 0V to 1.4V. Level shifter circuit 630 may generate an output signal at an output node Nout of circuit 630 coupled to one or more of the second capacitor electrode of capacitor C1, the drain terminal of transistor m1, and the gate terminal of transistor m2.
The output signal 616 from the output node Nout of the level shifter circuit 630 may be a level-shifted version of the input signal (eg, signal 116) provided at the input node Nin of the level shifter circuit 630, where the minimum/low value of the signal is carefully controlled based on the reference signal 634. For example, if the input signal provided at the input node Nin of the level shifter circuit 630 is the signal 116 having minimum and maximum voltage values of 0V and 1.4V, respectively, as described above, then the level shifter circuit 630 may shift these values by approximately 0.5V (ie, the value of reference voltage 634). In particular, the level shifter circuit 630 is configured to carefully control, based on the reference voltage 634, the value by which the minimum voltage value is shifted, ie, for the example shown, the minimum voltage value is shifted from 0V to 0.5V. Ideally, the maximum switching voltage would be 1.4V+0.5V=1.9V. However, similar to the maximum voltage controller, due to capacitor charge redistribution, the capacitively coupled signal swing at the output Nout of the level shifter circuit 630 may be attenuated by a ratio related to the size of the coupling capacitor C1 of the level shifter circuit 630 and the size of the capacitance on the gate of the transistor of switch 106 plus the wiring parasitic capacitance. Thus, the signal at the output Nout of the level shifter circuit 630 may swing from 0.5V up to 0.5V+1.3V=1.8V (rather than the ideal 1.9V, which would maintain a 1.4V swing from the input Nin to the output Nout of the level shifter circuit 630), as shown with signal 616 in FIG. 6. Thus, the level shifter circuit 630 can carefully control the minimum/low value of the level-shifted clock signal 616 based on the reference signal 634, while the maximum/high voltage value of the level-shifted clock signal 616 may then be automatically adjusted based on the ratio between the capacitance of coupling capacitor C1 of the level shifter circuit 630 and the capacitance (including parasitic capacitance) of the load for the level shifter circuit 630. In other words, since the level shifter circuit 630 is a minimum level controller, the reference voltage 634 applied to the source terminals of the transistors m1 and m2 of the level shifter circuit 630 can accurately control the minimum/low level of the output voltage of the signal 616 (eg, ensure that the level is about 0.5V). The maximum/high level of the output voltage of the signal 616 may then be established based on the redistribution of the capacitor charge between the capacitor C1 of the level shifter circuit 630 and the load capacitance CLoad for the level shifter circuit 630, in which the capacitor charge redistribution is based on the ratio of these capacitances. Accordingly, the voltage swing of the output signal 616 may be based on the value of the reference voltage 234 provided to the level shifter circuit 300, the value of the reference voltage 634 provided to the level shifter circuit 630, and the capacitor charge redistribution between the capacitor C1 of the level shifter circuit 630 and the load capacitance CLoad of the level shifter circuit 630.
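To make the arithmetic above concrete, here is a brief illustrative sketch mirroring the example values for the minimum level controller; the attenuation factor is an assumption chosen so the numbers match the text.

```python
# Brief illustrative sketch of the minimum level controller example above
# (not from the disclosure; the attenuation factor is assumed so that the
# 1.4 V input swing decays to the 1.3 V output swing quoted in the text).

def min_controller_levels(v_min_ref, vin_swing, atten):
    """v_min_ref: level fixed by reference 634 (eg, 0.5 V)
    vin_swing: swing at Nin (eg, 1.4 V for signal 116)
    atten:     capacitive-divider attenuation, C1 / (C1 + CLoad)
    """
    return v_min_ref, v_min_ref + vin_swing * atten  # (minimum, maximum)

vmin, vmax = min_controller_levels(0.5, 1.4, atten=1.3 / 1.4)
print(f"{vmin:.1f} V to {vmax:.1f} V")  # 0.5 V to 1.8 V (ideal: 1.9 V)
```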
Although not specifically shown in FIG. 6, in other embodiments of circuit 630, the input to the inverter Inv of level shifter circuit 630 may be based on an inverted version of input signal 114 provided to input 222, instead of on output signal 116 from output 224 as shown in FIG. 6. In such embodiments, the inverter of level shifter circuit 630 may be decoupled from signal 116 and instead coupled to an inverted version of signal 114, which may be advantageous in situations where driving the inverter of the level shifter circuit 630 with signals having signal swings greater than about 1V may be unreliable. In such an embodiment, one of the capacitor electrodes of capacitor C1 of level shifter circuit 630 is still driven by signal 116, so that level shifter circuit 630 can then generate the output signal provided at the output node Nout of level shifter circuit 630 as a level-shifted version of the signal 116 provided at the input node Nin of the level shifter circuit 630, where the minimum/low value of the signal is carefully controlled based on the reference signal 634.

FIG. 6 shows one way in which the level shifter circuit 630 may be implemented to provide control of the minimum/low level of the output voltage of the signal 616 as described above. In other embodiments of circuit 600, level shifter circuit 630 may be implemented in any manner other than that shown in FIG. 6, as long as it provides sufficiently accurate control of the minimum/low level of the output voltage of signal 616, with the maximum/high level adjusted accordingly.

Although FIG. 6 shows additional level shifter circuit 630 in conjunction with level shifter circuit 300 of FIG. 3, in further embodiments of circuit 200 shown in FIG. 2, additional level shifter circuit 630 may be included to receive signal 116 as input and produce output 616, as described with reference to FIG. 6, where level shifter circuit 230 may, but need not, be implemented as level shifter circuit 300. Furthermore, although FIG. 6 shows the additional level shifter circuit 630 in conjunction with the cascode transistor m7 of FIG. 5, in further embodiments of the circuit 200 shown in FIG. 2, the additional level shifter circuit 630 as described with reference to FIG. 6 may be included without the cascode transistor m7. In still further embodiments of the circuit 200 shown in FIG. 2, an additional level shifter circuit 630 as described with reference to FIG. 6 may be included without the cascode transistor m7 and with the level shifter circuit 230 implemented differently from level shifter circuit 300. In any of these embodiments, the level shifter circuit 630 may be implemented in any manner other than that shown in FIG. 6, as long as it provides sufficiently accurate control of the minimum/low level of the output voltage of the signal 616 as described above.

Still further, although FIG. 6 shows a level shifter circuit 630 configured to provide control of the minimum/low level of the output voltage of signal 616 as described above, in other embodiments the level shifter circuit 630 may be replaced with a level shifter circuit 630' configured to provide control of the maximum/high level of the output voltage of signal 616 (not shown in FIG. 6, but reference numerals are used here for ease of illustration). In some such embodiments, to implement the level shifter circuit 630', the level shifter circuit 630 shown in FIG. 6 may be replaced with another instance of the level shifter circuit 300, or, equivalently,
the NMOS transistors of the level shifter circuit 630 shown in FIG. 6 may be replaced with PMOS transistors, and the reference voltage 634 may be replaced with a reference voltage 634' configured to accurately control the maximum/high level of the output voltage of the signal 616 (also not shown in FIG. 6, but reference numerals are used here for ease of illustration).

The input to level shifter circuit 630' may be applied/provided to a node Nin of circuit 630', which is coupled to the first capacitor electrode of capacitor C1 and, in some embodiments, to the input of inverter Inv, as shown in FIG. 6. In some embodiments, the input to the level shifter circuit 630' may be based on the output signal 116 from the boost switch driver circuit 204, that is, for the example shown in FIG. 6, a signal swinging from 0V to 1.4V. Level shifter circuit 630' may generate an output signal at an output node Nout of circuit 630' coupled to one or more of the second capacitor electrode of capacitor C1, the drain terminal of transistor m1, and the gate terminal of transistor m2. The output signal 616 from the output node Nout of the level shifter circuit 630' may be a level-shifted version of the input signal (eg, signal 116) provided at the input node Nin of the level shifter circuit 630', where the maximum/high value of the signal is carefully controlled based on the reference signal 634'. For example, if the input signal provided at the input node Nin of the level shifter circuit 630' is the signal 116 having minimum and maximum voltage values of 0V and 1.4V, respectively, as described above, then the level shifter circuit 630' may shift these values such that the maximum voltage value is 1.8V, which would be the value of the reference voltage 634' for this example. In particular, the level shifter circuit 630' is configured to carefully control, based on the reference voltage 634', the value by which the maximum voltage value is shifted, ie, for the example shown, the maximum voltage value is shifted by 0.4V, from 1.4V to 1.8V. Ideally, the minimum switching voltage would be 0V+0.4V=0.4V. However, as described above for the maximum voltage controller 300, due to capacitor charge redistribution, the capacitively coupled signal swing at the output Nout of the level shifter circuit 630' may be attenuated by a ratio related to the size of the coupling capacitor C1 of the level shifter circuit 630' and the size of the capacitance on the gate of the transistor that may implement switch 106 plus the wiring parasitic capacitance. Thus, the minimum voltage value at the output Nout of the level shifter circuit 630' may be 1.8V-1.3V=0.5V (rather than the ideal 0.4V, which would maintain a 1.4V swing from the input Nin to the output Nout of the level shifter circuit 630'), as shown with signal 616 in FIG. 6. Thus, the level shifter circuit 630' can carefully control the maximum/high value of the level-shifted clock signal 616 based on the reference signal 634', while the minimum/low voltage value of the level-shifted clock signal 616 may then be automatically adjusted based on the ratio between the capacitance of the coupling capacitor C1 of the level shifter circuit 630' and the capacitance (including parasitic capacitance) of the load for the level shifter circuit 630'.
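The 630' variant follows the same arithmetic, with the reference pinning the top rail instead of the bottom. The short check below is illustrative only; the attenuation factor is an assumption chosen so the 1.4V input swing decays to 1.3V, matching the example values above.

```python
# Illustrative check of the 630' (maximum-level-controlled) variant; the
# attenuation factor is assumed so the 1.4 V input swing decays to 1.3 V,
# matching the example values in the text.

v_ref_max = 1.8            # reference 634' pins the maximum level
vin_swing = 1.4            # swing of signal 116 at Nin
atten = 1.3 / 1.4          # assumed C1 / (C1 + CLoad) divider ratio
vout_min = v_ref_max - vin_swing * atten
print(f"{vout_min:.1f} V to {v_ref_max:.1f} V")  # 0.5 V to 1.8 V
```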
In other words, since the level shifter circuit 630' is a maximum level controller, the reference voltage 634' applied to the source terminals of the transistors m1 and m2 of the level shifter circuit 630' can accurately control the maximum/high level of the output voltage of the signal 616 (eg, ensure that the level is about 1.8V). The minimum/low level of the output voltage of signal 616 may then be established based on capacitor charge redistribution between capacitor C1 of level shifter circuit 630' and load capacitance CLoad for level shifter circuit 630', in which the capacitor charge redistribution is based on the ratio of these capacitances. Accordingly, the voltage swing of the output signal 616 may be based on the value of the reference voltage 234 provided to the level shifter circuit 300, the value of the reference voltage 634' provided to the level shifter circuit 630', and the capacitor charge redistribution between the capacitor C1 of the level shifter circuit 630' and the load capacitance CLoad of the level shifter circuit 630'.

Similar to the variant of the level shifter circuit 630 shown in FIG. 6 described above, in other embodiments of the circuit 630', the input to the inverter Inv of the level shifter circuit 630' may be based on an inverted version of input signal 114 provided to input 222, instead of on output signal 116 from output 224 as shown in FIG. 6. In such embodiments, the inverter of level shifter circuit 630' may be decoupled from signal 116 and instead coupled to an inverted version of signal 114, which may be advantageous in situations where driving the inverter of the level shifter circuit 630' with signals having signal swings greater than about 1V may be unreliable. In such an embodiment, one of the capacitor electrodes of capacitor C1 of level shifter circuit 630' is still driven by signal 116, so that level shifter circuit 630' may then generate the output signal provided at the output node Nout of level shifter circuit 630' as a level-shifted version of signal 116 provided at the input node Nin of level shifter circuit 630', where the maximum/high value of the signal is carefully controlled based on reference signal 634'.

In other embodiments of circuit 600, level shifter circuit 630' may be implemented in any manner other than as level shifter circuit 300, as long as it provides sufficiently accurate control of the maximum/high level of the output voltage of signal 616, with the minimum/low level adjusted accordingly.

Example Boost Switch Driver Circuit with Level Shifting in Branches of N-Type Transistors

FIG. 7 provides a circuit diagram of an example circuit 700 in which a boost switch driver with level shifting in the branches of N-type transistors may be used, according to some embodiments of the present disclosure. As shown in FIG. 7, circuit 700 may include a series of a first inverter 702 and a second inverter 704 coupled to switch 706. The first inverter 702 may be substantially similar to the first inverter 102, configured to receive a signal 712 (similar to the signal 112) as an input and produce a signal 714 (similar to the signal 114) as an output.
The descriptions provided above with reference to inverter 102, input signal 112, and output signal 114 apply to inverter 702, input signal 712, and output signal 714, respectively, and are therefore not repeated for the sake of brevity.

The first inverter 702 may be configured to drive the second inverter 704, which may include any of the boost switch drivers described herein with level shifting performed in the branches of the N-type transistors. The second inverter 704 may be a boost switch driver that provides boosting (ie, is configured to increase output swing). This is shown in FIG. 7 with the second inverter 704 receiving a signal with an output swing of 1V (ie, signal 714) and outputting a version of the signal that has been inverted and has a larger output swing; in FIG. 7, signal 716 (schematically shown within the dashed box labeled "716") is shown as still having a high voltage value of 1V but now having a low voltage value of -0.4V. The output of the second inverter 704 may then be used to drive a switch 706, which in some embodiments may be implemented as the transistor shown in FIG. 7.

In various embodiments, inverter 704 may include any of the boost switch driver circuits presented herein with level shifting in the branches of N-type transistors. In various embodiments, the signals 712, 714, 716 may be different, eg, the signals may have different high and low values. Similarly, the low and high supply voltages coupled to each of inverter 702 and inverter 704 may vary in different embodiments that use any of the boost switch driver circuits with level shifting in the branches of N-type transistors presented herein to implement inverter 704. Furthermore, in further embodiments, an inverter 704 including any of the boost switch driver circuits presented herein with level shifting in the branches of N-type transistors may be included in circuits other than circuit 700, and in particular may, but need not, be implemented with an inverter 702 as shown in FIG. 7.

FIG. 8 provides a circuit diagram of an example circuit 800 having a boost switch driver with level shifting on one side of an N-type transistor in accordance with some embodiments of the present disclosure. Circuit 800 may be viewed as an example of circuit 700, wherein the reference numbers used for circuit 700 of FIG. 7 are used to refer to elements of circuit 800 of FIG. 8 that are identical or functionally similar, so that the description of those elements is not repeated, and only the differences are described.

As shown in FIG. 8, in some embodiments, the first inverter 702 may be implemented as a pair of complementary transistors m3, m4 (ie, one of the transistors is N-type and the other is P-type), as in the implementation of the first inverter 102 shown in FIG. 2; the description of the first inverter shown in FIG. 2 applies to the first inverter 702 and is therefore not repeated for the sake of brevity. The output clock signal 714 of the first inverter 702 may be used as the basis for the input clock signal to the second inverter 704.

FIG. 8 further shows a boost switch driver circuit 804, which may be, for example, an example of the second inverter 704 described above. As shown in FIG. 8, circuit 804 may include input 822, output 824, and two branches 826 between input 822 and output 824. The first branch, indicated schematically in FIG. 8 by a dash-dotted line 826-1, may include transistor m5. The second branch, indicated schematically by a dashed line 826-2 in FIG.
8, may include transistor m6 and level shifter circuit 830. Again, one of the transistors m5 and m6 may be an N-type transistor and the other may be a P-type transistor. Since FIG. 8 shows an embodiment with level shifting done on the side of the N-type transistor, transistor m6 is an N-type transistor having its source terminal coupled to a low supply voltage (eg, -0.4V) (eg, an NMOS transistor, as shown in FIG. 8 with the corresponding circuit representation for transistor m6), while transistor m5 is a P-type transistor (eg, a PMOS transistor, as shown in FIG. 8 with the corresponding circuit representation for transistor m5). Circuit 804 may be configured to split an input clock signal (eg, signal 714) between first branch 826-1 and second branch 826-2 such that a portion of input clock signal 714 split into first branch 826-1 is provided to transistor m5 of that branch and a portion of the input clock signal split into second branch 826-2 is level shifted by level shifter circuit 830 to generate level-shifted signal 814-2, which is provided to transistor m6. FIG. 8 shows a signal 814-1 provided to transistor m5 (ie, to the gate terminal of transistor m5) and a signal 814-2 provided to transistor m6 (ie, to the gate terminal of transistor m6). Signal 814-1 and signal 714 may be substantially the same in terms of their voltage swings (eg, from about 0V to about 1V, as shown for the example of FIG. 8), whereas signal 814-2 may be a level-shifted signal with a voltage swing from about -0.4V to about 0.5V. Circuit 804 is further configured to combine the output of transistor m5 (eg, from the drain terminal of transistor m5) with the output of transistor m6 (eg, from the drain terminal of transistor m6) to generate output clock signal 716. Accordingly, the gate terminal of each of transistors m5 and m6 may be coupled to input 822, except that the gate terminal of transistor m6 is coupled to input 822 via a level shifter circuit 830 configured to perform level shifting prior to providing the signal to transistor m6. Similarly, the drain terminal of each of transistors m5 and m6 may be coupled to output 824, which may be coupled to switch 706 to be driven by switch driver circuit 804.

In some embodiments, the level shifter circuit 830 may include a voltage controller circuit 832 that may be configured to receive a reference signal 834 as an input and generate an output 836, as shown in FIG. 8. Additionally, the level shifter circuit 830 may further include a coupling capacitor 838 coupled to the voltage controller circuit 832. For example, a first capacitor electrode of coupling capacitor 838 may be coupled to input 822, while a second capacitor electrode of coupling capacitor 838 may be coupled to each of voltage controller circuit 832 and the gate terminal of transistor m6. In other words, a portion of the input clock signal 714 split into the second branch 826-2 of the circuit 804 may be configured to be applied to the first capacitor electrode of the coupling capacitor 838, and the second capacitor electrode of the coupling capacitor 838 may be coupled to each of the output 836 of the voltage controller circuit 832 and the gate terminal of the transistor m6. Input 834 to voltage controller circuit 832 may be a reference voltage or any other control signal configured to control the minimum voltage level set by voltage controller circuit 832.
The output 836 from the voltage controller circuit 832 may form the basis of the level-shifted input clock signal 814-2 to be provided to the gate terminal of the transistor m6.

When transistor m6 is an N-type transistor, as shown in the embodiment of FIG. 8, voltage controller circuit 832 may be configured to control the level-shifted input clock signal 814-2 provided to transistor m6, and thus the minimum voltage value in the output signal 716. To this end, the input to PMOS transistor m5 may come directly from the 1V domain inverter output 714 (swinging from 0V to 1V) as input 814-1, as shown in FIG. 8, while the input to NMOS transistor m6 may be level shifted via the coupling capacitor 838 and the voltage controller circuit 832 to provide a minimum switching voltage of, eg, -0.4V and, ideally, a maximum switching voltage of -0.4V+1V=0.6V. Due to capacitor charge redistribution, the capacitively coupled signal swing at the output Nout of the level shifter circuit 830 is attenuated by a ratio related to the size of the coupling capacitor 838 and the size of the capacitance on the gate of the NMOS transistor m6 plus the wiring parasitic capacitance. Thus, the signal at the output Nout of the level shifter circuit 830 may swing from -0.4V up to -0.4V+0.9V=0.5V (rather than the ideal 0.6V, which would maintain a 1V swing from the input Nin to the output Nout of the level shifter circuit 830), as shown with signal 814-2 in FIG. 8.

The voltage controller circuit 832 or the level shifter circuit 830 as shown in FIG. 8 may be referred to as a "minimum level controller" because the voltage controller circuit or the level shifter circuit is configured to set the minimum voltage value of the level-shifted input clock signal 814-2. The maximum voltage value of the level-shifted input clock signal 814-2 may then be automatically adjusted based on the ratio between the capacitance of the coupling capacitor 838 and the capacitance of the load for the voltage controller circuit 832.

In some embodiments of a level shifter circuit coupled to an N-type transistor of the boost switch driver circuit (eg, as shown in FIG. 8), the low supply voltage coupled to the source terminal of the N-type transistor (ie, transistor m6 in this case) may be substantially the same as the minimum voltage value controlled by the voltage controller circuit 832 (eg, both may be -0.4V), although in other embodiments these voltages may be different. In general, the value of the supply voltage coupled to the source terminal of the N-type transistor (ie, transistor m6 in this case) of the branch that also includes the minimum level controller may correspond to (eg, be substantially equal to) the low voltage in the level-shifted input clock signal 814-2. In some embodiments of circuit 804, the high supply voltage coupled to the source terminal of the P-type transistor (ie, transistor m5 in this case) of the branch that does not include a level shifter circuit may be substantially the same as the maximum voltage value in the output signal 716 (eg, both may be 1V), although in other embodiments these voltages may be different (eg, where a second level shifter circuit is included, eg, as shown in FIG. 12).

In various embodiments, level shifter circuit 830 may be implemented in any manner that allows careful control of the minimum value of the level-shifted input clock signal 814-2 to be provided to N-type transistor m6. An example is shown in FIG.
9; however, in other embodiments of circuit 804, level shifter circuit 830 may be implemented differently. FIG. 9 provides a circuit diagram of an example level shifter circuit 900 that may be used with or in a boost switch driver in accordance with some embodiments of the present disclosure, eg, as the level shifter circuit 830 of the boost switch driver 804, to perform level shifting while controlling the minimum/low signal level.

As shown in FIG. 9, level shifter circuit 900 may include a pair of cross-coupled transistors m1 and m2, a pair of capacitors C1 and C2, and an inverter Inv. The arrangement of level shifter circuit 900 is substantially the same as that of level shifter circuit 300, except that transistors m1 and m2 are N-type transistors in level shifter circuit 900, since level shifter circuit 900 is a minimum level controller. The descriptions regarding the coupling between the various elements of level shifter circuit 300 apply to level shifter circuit 900 and are therefore not repeated for the sake of brevity.

When the level shifter circuit 900 is a minimum level controller, the value of the reference voltage 834 may be configured to correspond to the low voltage in the level-shifted input clock signal 814-2 output by the circuit. In some embodiments, reference voltage 834 may be substantially the same as the supply voltage to which the source terminal of transistor m6 is coupled (eg, for the examples shown in FIGS. 8 and 9, both may be about -0.4V) and may be from the same voltage source.

Because level shifter circuit 900 is a minimum level controller, reference voltage 834 applied to the source terminals of transistors m1 and m2 can accurately control the minimum/low level of the output voltage of signal 814-2 (eg, ensuring that the level is about -0.4V). The maximum/high level of the output voltage of signal 814-2 may then be established based on the redistribution of capacitor charge between capacitor C1 and load capacitance CLoad, based on the ratio of these capacitances. Accordingly, the voltage swing of the output signal 814-2 may be based on the value of the reference voltage 834 provided to the level shifter circuit 900 and based on the redistribution of the capacitor charge between the capacitor C1 and the load capacitance CLoad.

FIG. 10 provides a circuit diagram of an example circuit 1000 having a boost switch driver with level shifting in the branch of an N-type transistor, implemented using the level shifter circuit 900 of FIG. 9, in accordance with some embodiments of the present disclosure. Circuit 1000 may be viewed as an example of circuit 800 in which level shifter circuit 830 is implemented as level shifter circuit 900 and is thus (in the schematic illustration of FIG. 10) replaced by level shifter circuit 900. In FIG. 10, the reference numbers used for the circuits of FIGS. 7-9 are used to refer to the same or functionally similar elements as those shown in FIGS. 7-9, so that the description of those elements is not repeated for FIG. 10. In addition, various components of the level shifter circuit 900 shown in FIG. 9 and described above, such as transistors m1 and m2, capacitors C1 and C2, and inverter Inv, are also shown in FIG. 10 to clarify how, for this embodiment, level shifter circuit 900 replaces the level shifter circuit 830 shown in FIG. 8. It should be noted that in the embodiment of FIG. 10, coupling capacitor 838 of level shifter circuit 830 is implemented by capacitor C1 of level shifter circuit 900. In the context of circuit 1000, the input node Nin and output node Nout of level shifter circuit 900 are also shown in FIG. 10.
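Circuits 300 and 900 differ only in which rail the reference pins. The following consolidated sketch is my own illustration, not from the disclosure, with an assumed 0.9 divider ratio as in the examples above; it shows both behaviors side by side.

```python
# A consolidated sketch (illustration only, not from the disclosure): the same
# capacitive level shifter pins one rail at the reference and lets the other
# rail follow from charge redistribution. "max" mirrors circuit 300 (PMOS
# pair), "min" mirrors circuit 900 (NMOS pair); the 0.9 divider ratio is an
# assumption matching the examples in the text.

def shifted_levels(kind, vref, vin_swing, atten=0.9):
    vout_swing = vin_swing * atten
    if kind == "max":   # reference fixes the high level (circuit 300)
        return round(vref - vout_swing, 3), vref
    if kind == "min":   # reference fixes the low level (circuit 900)
        return vref, round(vref + vout_swing, 3)
    raise ValueError(kind)

print(shifted_levels("max", 1.4, 1.0))   # (0.5, 1.4)  -- signal 214-2
print(shifted_levels("min", -0.4, 1.0))  # (-0.4, 0.5) -- signal 814-2
```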
FIG. 11 provides a circuit diagram of an example circuit 1100 having a boost switch driver with level shifting in the branch of an N-type transistor (ie, transistor m6 for the example shown) and with an additional transistor, in accordance with some embodiments of the present disclosure; the additional transistor is provided as a cascode transistor to the P-type transistor of the other branch (ie, transistor m5 for the example shown). Circuit 1100 may be viewed as a further embodiment of circuit 1000, wherein, similar to circuit 1000, level shifter circuit 830 is implemented as level shifter circuit 900 and is thus (in the schematic representation of FIG. 11) replaced by level shifter circuit 900. In FIG. 11, the reference numbers used for the circuits of FIGS. 7-10 are used to refer to the same or functionally similar elements as those shown in FIGS. 7-10, so that the description of those elements is not repeated for FIG. 11, and only the differences are described. Circuit 1100 differs from circuit 1000 in that circuit 1100 further includes an additional transistor m7 provided as a cascode transistor to transistor m5 as described above. To this end, transistors m5 and m7 may be the same type of transistor (ie, both P-type transistors, as shown in the illustration of FIG. 11). Cascode transistor m7 may be considered part of first branch 826-1. As shown in FIG. 11, the drain terminal of transistor m5 may be coupled to output 824 by coupling the drain terminal of transistor m5 to the source terminal of cascode transistor m7 and coupling the drain terminal of cascode transistor m7 to output 824. In various embodiments, the gate terminal of cascode transistor m7 may be coupled to a suitable reference voltage 1134. For example, for embodiments where cascode transistor m7 is a P-type transistor as shown (ie, for embodiments where the level shifter circuit included in circuit 1100 is a minimum level controller), the reference voltage 1134 may be substantially 0V. However, in other embodiments, the value of the reference voltage 1134 may be different.

Although only one cascode transistor m7 is shown in FIG. 11, in other embodiments of the circuit shown in FIG. 11, more than one cascode transistor m7 may be included in the first branch 826-1. Although FIG. 11 shows cascode transistor m7 in conjunction with level shifter circuit 900 of FIG. 9, in further embodiments of circuit 800 shown in FIG. 8, the first branch 826-1 of circuit 800 may include at least one such cascode transistor m7, where level shifter circuit 830 may, but need not, be implemented as level shifter circuit 900. Furthermore, in any of the embodiments of the boost switch driver with level shifting in the branch of the N-type transistor (eg, any of the embodiments described with reference to FIGS. 7-12), one or more additional transistors m8 (not specifically shown in the figures) may be provided as cascode transistors to transistor m6 (ie, provided as cascode transistors to the N-type transistor of the branch with the level shifter circuit). To this end, transistors m6 and m8 may be the same type of transistor (eg, both N-type transistors), and one or more cascode transistors m8 may be considered part of the second branch 826-2.
For example, the source terminal of such a cascode transistor m8 may be coupled to the drain terminal of transistor m6, and the drain terminal of cascode transistor m8 may be coupled to output 824. The gate terminal of this cascode transistor m8 may be coupled to a suitable reference voltage, eg, approximately 1V for embodiments in which cascode transistor m8 is an N-type transistor (since, for the embodiments of FIGS. 7-12, transistor m6 is an N-type transistor; ie, for embodiments in which the level shifter circuit included in circuit 1100 is a minimum level controller). However, in other embodiments, the value of the reference voltage coupled to the gate terminal of the additional cascode transistor m8 may be different.

FIG. 12 provides a circuit diagram of an example circuit 1200 having a boost switch driver with level shifting in the branch of an N-type transistor and having an additional level shifter circuit, in accordance with some embodiments of the present disclosure; the additional level shifter circuit is configured to control the minimum/low level of the final output signal 1216. Circuit 1200 may be viewed as a further embodiment of circuit 1100 in which, similar to circuit 1100, the level shifter circuit 830 coupled to the input of transistor m6 is implemented as level shifter circuit 900 and is thus (in the schematic representation of FIG. 12) replaced by level shifter circuit 900. Also similar to circuit 1100, circuit 1200 further includes a cascode transistor m7. In FIG. 12, the reference numbers used for the circuits of FIGS. 7-11 are used to refer to the same or functionally similar elements as those shown in FIGS. 7-11, so that the description of those elements is not repeated for FIG. 12, and only the differences are described. Circuit 1200 differs from circuit 1100 in that circuit 1200 further includes another instance of the level shifter circuit 900 of FIG. 9, shown as level shifter circuit 1230, configured to control the minimum/low level of the output signal. Since the operation of level shifter circuit 900 has been described in detail above, this description is not repeated here with reference to circuit 1230. Reference signal 1234 is similar to reference signal 834 described above, except that its value may be different for level shifter circuit 1230. For example, if the input signal provided at the input node Nin of the level shifter circuit 1230 is the signal 716 having minimum and maximum voltage values of -0.4V and 1V, respectively, as described above, then the level shifter circuit 1230 may shift these values by about 0.9V (ie, the value of the reference voltage 1234 may be 0.5V, as shown in FIG. 12). In particular, the level shifter circuit 1230 may be configured to carefully control, based on the reference voltage 1234, the value by which the minimum voltage value is shifted, ie, for the example shown, the minimum voltage value is shifted from -0.4V to 0.5V. Ideally, the maximum switching voltage would be 1V+0.9V=1.9V. However, as described above for circuit 900, due to capacitor charge redistribution, the capacitively coupled signal swing at the output Nout of level shifter circuit 1230 may be attenuated by a ratio related to the size of the coupling capacitor C1 of level shifter circuit 1230 and the size of the capacitance on the gate of the transistor that may implement switch 706 plus the wiring parasitic capacitance.
Thus, the signal at the output Nout of the level shifter circuit 1230 may swing from 0.5V up to 0.5V+1.3V=1.8V (rather than the ideal 1.9V, which would maintain a 1.4V swing from the input Nin to the output Nout of the level shifter circuit 1230), as shown with signal 1216 in FIG. 12. Thus, the level shifter circuit 1230 can carefully control the minimum/low value of the level-shifted clock signal 1216 based on the reference signal 1234, while the maximum/high voltage value of the level-shifted clock signal 1216 may then be automatically adjusted based on the ratio between the capacitance of the coupling capacitor C1 of the level shifter circuit 1230 and the capacitance of the load (including parasitic capacitance) for the level shifter circuit 1230. In other words, since the level shifter circuit 1230 is a minimum level controller, the reference voltage 1234 applied to the source terminals of the transistors m1 and m2 of the level shifter circuit 1230 can accurately control the minimum/low level of the output voltage of the signal 1216 (eg, ensure that the level is about 0.5V). The maximum/high level of the output voltage of the signal 1216 may then be established based on the redistribution of the capacitor charge between the capacitor C1 of the level shifter circuit 1230 and the load capacitance CLoad for the level shifter circuit 1230, in which the capacitor charge redistribution is based on the ratio of these capacitances. Accordingly, the voltage swing of the output signal 1216 may be based on the value of the reference voltage 834 provided to the level shifter circuit 900, the value of the reference voltage 1234 provided to the level shifter circuit 1230, and the capacitor charge redistribution between the capacitor C1 of the level shifter circuit 1230 and the load capacitance CLoad of the level shifter circuit 1230.

Although not specifically shown in FIG. 12, in other embodiments of circuit 1230, the input to inverter Inv of level shifter circuit 1230 may be based on an inverted version of input signal 714 provided to input 822, rather than on output signal 716 from output 824 as shown in FIG. 12. In such embodiments, the inverter of level shifter circuit 1230 may be decoupled from signal 716 and instead coupled to an inverted version of signal 714, which may be advantageous in situations where driving the inverter of the level shifter circuit 1230 with a signal having a signal swing greater than about 1V may be unreliable. In such embodiments, one of the capacitor electrodes of capacitor C1 of level shifter circuit 1230 is still driven by signal 716, so that level shifter circuit 1230 can then generate the output signal provided at the output node Nout of level shifter circuit 1230 as a level-shifted version of the signal 716 provided at the input node Nin of the level shifter circuit 1230, where the minimum/low value of the signal is carefully controlled based on the reference signal 1234.

FIG. 12 shows one way in which the level shifter circuit 1230 may be implemented to provide control of the minimum/low level of the output voltage of the signal 1216 as described above. In other embodiments of circuit 1200, level shifter circuit 1230 may be replaced by any circuit configured to provide sufficiently accurate control of the minimum/low level of the output voltage of signal 1216, with the maximum/high level adjusted accordingly.

Although FIG. 12 shows the additional level shifter circuit 1230 in conjunction with level shifter circuit 900 of FIG.
9, in further embodiments of circuit 800 shown in FIG. 8, the additional level shifter circuit 1230 may be included to receive signal 716 as input and produce output 1216, as described with reference to FIG. 12, where level shifter circuit 830 may, but need not, be implemented as level shifter circuit 900. Furthermore, although FIG. 12 shows the additional level shifter circuit 1230 in conjunction with the cascode transistor m7 of FIG. 11, in further embodiments of the circuit 800 shown in FIG. 8, the additional level shifter circuit 1230 as described with reference to FIG. 12 may be included without the cascode transistor m7. In still further embodiments of the circuit 800 shown in FIG. 8, an additional level shifter circuit 1230 as described with reference to FIG. 12 may be included without the cascode transistor m7 and with the level shifter circuit 830 implemented differently from level shifter circuit 900. In any of these embodiments, the level shifter circuit 1230 may be implemented in any manner other than that shown in FIG. 12, as long as it provides sufficiently accurate control of the minimum/low level of the output voltage of the signal 1216 as described above.

Still further, although FIG. 12 shows a level shifter circuit 1230 configured to provide control of the minimum/low level of the output voltage of the signal 1216 as described above, in other embodiments the level shifter circuit 1230 may be replaced with a level shifter circuit 1230' (not shown in FIG. 12, but reference numerals are used here for ease of illustration) configured to provide control of the maximum/high level of the output voltage of signal 1216. In some such embodiments, to implement level shifter circuit 1230', the level shifter circuit 1230 shown in FIG. 12 may be replaced with an instance of level shifter circuit 300, or, equivalently, the NMOS transistors of the level shifter circuit 1230 shown in FIG. 12 may be replaced with PMOS transistors, and the reference voltage 1234 may be replaced with a reference voltage 1234' configured to accurately control the maximum/high level of the output voltage of the signal 1216 (also not shown in FIG. 12, but reference numerals are used here for ease of illustration).

The input to level shifter circuit 1230' may be applied/provided to a node Nin of circuit 1230', which is coupled to the first capacitor electrode of capacitor C1 and, in some embodiments, to the input of inverter Inv, as shown in FIG. 12. In some embodiments, the input to the level shifter circuit 1230' may be based on the output signal 716 from the boost switch driver circuit 804, that is, for the example shown in FIG. 12, a signal swinging from -0.4V to 1V. Level shifter circuit 1230' may generate an output signal at an output node Nout of circuit 1230' coupled to one or more of the second capacitor electrode of capacitor C1, the drain terminal of transistor m1, and the gate terminal of transistor m2. The output signal 1216 from the output node Nout of the level shifter circuit 1230' may be a level-shifted version of the input signal (eg, signal 716) provided at the input node Nin of the level shifter circuit 1230', where the maximum/high value of the signal is carefully controlled based on the reference signal 1234'.
For example, if the input signal provided at the input node Nin of the level shifter circuit 1230' is the signal 716 having minimum and maximum voltage values of -0.4V and 1V, respectively, as described above, then the level shifter circuit 1230' may shift these values such that the maximum voltage value is 1.8V, which would be the value of reference voltage 1234' for this example. In particular, the level shifter circuit 1230' is configured to carefully control, based on the reference voltage 1234', the value by which the maximum voltage value is shifted, ie, for the example shown, the maximum voltage value is shifted by 0.8V, from 1V to 1.8V. Ideally, the minimum switching voltage would be -0.4V+0.8V=0.4V. However, as described above for the maximum voltage controller 300, due to capacitor charge redistribution, the capacitively coupled signal swing at the output Nout of the level shifter circuit 1230' may be attenuated by a ratio related to the size of the coupling capacitor C1 of the level shifter circuit 1230' and the size of the capacitance on the gate of the transistor that may implement switch 706 plus the wiring parasitic capacitance. Thus, the minimum voltage value at the output Nout of the level shifter circuit 1230' may be 1.8V-1.3V=0.5V (rather than the ideal 0.4V, which would maintain a 1.4V swing from the input Nin to the output Nout of the level shifter circuit 1230'), as shown with signal 1216 in FIG. 12. Thus, the level shifter circuit 1230' can carefully control the maximum/high value of the level-shifted clock signal 1216 based on the reference signal 1234', while the minimum/low voltage value of the level-shifted clock signal 1216 may then be automatically adjusted based on the ratio between the capacitance of the coupling capacitor C1 of the level shifter circuit 1230' and the capacitance (including parasitic capacitance) of the load for the level shifter circuit 1230'. In other words, since the level shifter circuit 1230' is a maximum level controller, the reference voltage 1234' applied to the source terminals of the transistors m1 and m2 of the level shifter circuit 1230' can accurately control the maximum/high level of the output voltage of the signal 1216 (eg, ensure that the level is about 1.8V). The minimum/low level of the output voltage of signal 1216 may then be established based on the redistribution of capacitor charge between capacitor C1 of level shifter circuit 1230' and load capacitance CLoad for level shifter circuit 1230', in which the capacitor charge redistribution is based on the ratio of these capacitances. Accordingly, the voltage swing of the output signal 1216 may be based on the value of the reference voltage 834 provided to the level shifter circuit 900, the value of the reference voltage 1234' provided to the level shifter circuit 1230', and the capacitor charge redistribution between the capacitor C1 of the level shifter circuit 1230' and the load capacitance CLoad of the level shifter circuit 1230'.
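As before, a short illustrative check of these values; the attenuation factor is an assumption chosen so the 1.4V input swing decays to the 1.3V output swing quoted above.

```python
# Illustrative check of the 1230' (maximum-level-controlled) example above;
# the attenuation factor is an assumption chosen so the 1.4 V input swing
# decays to the 1.3 V output swing quoted in the text.

v_ref_max = 1.8           # reference 1234' pins the maximum level
vin_swing = 1.0 - (-0.4)  # swing of signal 716 at Nin (1.4 V)
atten = 1.3 / 1.4         # assumed C1 / (C1 + CLoad) divider ratio
vout_min = v_ref_max - vin_swing * atten
print(f"{vout_min:.1f} V to {v_ref_max:.1f} V")  # 0.5 V to 1.8 V
```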
Similar to the variant of the level shifter circuit 1230 shown in FIG. 12 described above, in other embodiments of the circuit 1230', the input to the inverter Inv of the level shifter circuit 1230' may be based on an inverted version of input signal 714 provided to input 822, instead of on output signal 716 from output 824 as shown in FIG. 12. In such embodiments, the inverter of level shifter circuit 1230' may be decoupled from signal 716 and instead coupled to an inverted version of signal 714, which may be advantageous in situations where driving the inverter of the level shifter circuit 1230' with signals having signal swings greater than about 1V may be unreliable. In such embodiments, one of the capacitor electrodes of capacitor C1 of level shifter circuit 1230' is still driven by signal 716, so that level shifter circuit 1230' may then generate the output signal provided at the output node Nout as a level-shifted version of signal 716 provided at the input node Nin of level shifter circuit 1230', where the maximum/high value of the signal is carefully controlled based on reference signal 1234'.

In other embodiments of circuit 1200, level shifter circuit 1230' may be implemented in any manner other than as level shifter circuit 300, as long as it provides sufficiently accurate control of the maximum/high level of the output voltage of signal 1216, with the minimum/low level adjusted accordingly.

Example Systems and Devices

Boost switch driver circuits or portions thereof (eg, only portions of inverter circuits 104 and/or 704 as described herein) may be included in any suitable system, device, or apparatus. For example, in some embodiments, any of the boost switch drivers, or portions thereof, may be included in an ADC as shown in FIG. 13. In other embodiments, any of the boost switch drivers, or portions thereof, may be included in a larger system or device configured to perform analog-to-digital conversion. Some examples of such systems and devices are shown in FIGS. 14 and 15. Other examples of systems and apparatuses incorporating one or more of the boost switch drivers as described herein are possible and within the scope of the present disclosure.

FIG. 13 provides a schematic illustration of an example component 1300 (eg, an ADC) in which one or more boost switch drivers 1310 may be implemented, according to some embodiments of the present disclosure. The one or more boost switch drivers 1310 may include any of the boost switch driver circuits described above, eg, any of the embodiments of the boost switch drivers described with reference to FIGS. 1-12. One or more boost switch drivers 1310 may be configured to drive one or more switches 1320. In some embodiments, there may be a one-to-one correspondence between the one or more boost switch drivers 1310 and the one or more switches 1320 (ie, each boost switch driver 1310 may be configured to drive only one of the switches 1320, and each of the switches 1320 may be configured to be driven by only one of the boost switch drivers 1310). In other embodiments, a single boost switch driver 1310 may drive more than one of the switches 1320 and/or a single switch of the switches 1320 may be driven by more than one of the boost switch drivers 1310.

FIG. 14 is a block diagram of an example system 2100 (eg, a computing device) that may include one or more boost switch drivers according to any of the embodiments disclosed herein. For example, any suitable ones of the components of system 2100 may include one or more of the boost switch drivers disclosed herein. Several components are shown in FIG. 14 as being included in system 2100, but any one or more of these components may be omitted or duplicated depending on the application. In some embodiments, some or all of the components included in system 2100 may be attached to one or more motherboards.
In some embodiments, some or all of the components are fabricated on a single system-on-chip (SoC) die. Additionally, in various embodiments, the system 2100 may not include one or more of the components shown in FIG. 14, but the system 2100 may include interface circuitry for coupling to the one or more components. For example, the system 2100 may not include a display device 2106, but may include display device interface circuitry (e.g., a connector and driver circuitry) to which a display device 2106 may be coupled. In another set of examples, the system 2100 may not include an audio input device 2118 or an audio output device 2108, but may include audio input or output device interface circuitry (e.g., connectors and supporting circuitry) to which an audio input device 2118 or an audio output device 2108 may be coupled.

The system 2100 may include a processing device 2102 (e.g., one or more processing devices). As used herein, the term "processing device" or "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory. The processing device 2102 may include one or more digital signal processors (DSPs), application specific integrated circuits (ASICs), central processing units (CPUs), graphics processing units (GPUs), cryptographic processors (specialized processors that execute cryptographic algorithms in hardware), server processors, or any other suitable processing devices. The system 2100 may include a memory 2104, which may itself include one or more memory devices, such as volatile memory (e.g., dynamic RAM (DRAM)), non-volatile memory (e.g., read only memory (ROM)), flash memory, solid state memory, and/or a hard disk drive. In some embodiments, the memory 2104 may include memory that shares a die with the processing device 2102. This memory may be used as cache memory and may include embedded DRAM (eDRAM) or spin transfer torque magnetic RAM (STT-MRAM).

In some embodiments, the system 2100 may include a communication chip 2112 (e.g., one or more communication chips). For example, the communication chip 2112 may be configured to manage wireless communications for the transfer of data to and from the system 2100. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communication channels, etc. that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not.

The communication chip 2112 may implement any of a number of wireless standards or protocols, including but not limited to Institute of Electrical and Electronics Engineers (IEEE) standards including Wi-Fi (the IEEE 802.11 family), the IEEE 802.16 standards (e.g., the IEEE 802.16-2005 Amendment), and the Long Term Evolution (LTE) project, along with any amendments, updates, and/or revisions (e.g., the LTE-Advanced project, the Ultra Mobile Broadband (UMB) project (also referred to as "3GPP2"), etc.). IEEE 802.16 compatible Broadband Wireless Access (BWA) networks are generally referred to as WiMAX networks, an acronym that stands for Worldwide Interoperability for Microwave Access, which is a certification mark for products that pass conformity and interoperability tests for the IEEE 802.16 standards.
The communication chip 2112 may operate in accordance with a Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Evolved HSPA (E-HSPA), or LTE network. The communication chip 2112 may operate in accordance with Enhanced Data for GSM Evolution (EDGE), GSM EDGE Radio Access Network (GERAN), Universal Terrestrial Radio Access Network (UTRAN), or Evolved UTRAN (E-UTRAN). The communication chip 2112 may operate in accordance with any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. In other embodiments, the communication chip 2112 may operate in accordance with other wireless protocols. The system 2100 may include an antenna 2122 to facilitate wireless communications and/or to receive other wireless communications, such as AM or FM radio transmissions.

In some embodiments, the communication chip 2112 may manage wired communications, such as electrical, optical, or any other suitable communication protocols (e.g., Ethernet). As noted above, the communication chip 2112 may include multiple communication chips. For instance, a first communication chip 2112 may be dedicated to shorter-range wireless communications such as Wi-Fi or Bluetooth, and a second communication chip 2112 may be dedicated to longer-range wireless communications such as Global Positioning System (GPS), EDGE, GPRS, CDMA, WiMAX, LTE, EV-DO, or others. In some embodiments, a first communication chip 2112 may be dedicated to wireless communications, and a second communication chip 2112 may be dedicated to wired communications.

The system 2100 may include battery/power circuitry 2114. The battery/power circuitry 2114 may include one or more energy storage devices (e.g., batteries or capacitors) and/or circuitry for coupling components of the system 2100 to an energy source separate from the system 2100 (e.g., AC line power).

The system 2100 may include a display device 2106 (or corresponding interface circuitry, as discussed above). The display device 2106 may include any visual indicator, such as a heads-up display, a computer monitor, a projector, a touch screen display, a liquid crystal display (LCD), a light emitting diode display, or a flat panel display.

The system 2100 may include an audio output device 2108 (or corresponding interface circuitry, as discussed above). The audio output device 2108 may include any device that produces an audible indicator, such as speakers, headsets, or earbuds.

The system 2100 may include an audio input device 2118 (or corresponding interface circuitry, as discussed above). The audio input device 2118 may include any device that produces a signal representative of a sound, such as a microphone, a microphone array, or a digital instrument (e.g., an instrument having a musical instrument digital interface (MIDI) output).

The system 2100 may include a GPS device 2116 (or corresponding interface circuitry, as discussed above). The GPS device 2116 may be in communication with a satellite-based system and may receive a location of the system 2100, as known in the art.

The system 2100 may include another output device 2110 (or corresponding interface circuitry, as discussed above). Examples of the other output device 2110 may include an audio codec, a video codec, a printer, a wired or wireless transmitter for providing information to other devices, or an additional storage device.

The system 2100 may include another input device 2120 (or corresponding interface circuitry, as discussed above).
Examples of the other input device 2120 may include an accelerometer, a gyroscope, a compass, an image capture device, a keyboard, a cursor control device such as a mouse, a stylus, a touchpad, a bar code reader, a Quick Response (QR) code reader, any sensor, or a radio frequency identification (RFID) reader.

The system 2100 may have any desired form factor, such as a handheld or mobile electrical device (e.g., a cell phone, a smartphone, a mobile internet device, a music player, a tablet computer, a laptop computer, a netbook computer, an ultrabook computer, a personal digital assistant (PDA), an ultra-mobile personal computer, etc.), a desktop electrical device, a server device or other networked computing component, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a vehicle control unit, a digital camera, a digital video recorder, or a wearable electrical device. In some embodiments, the system 2100 may be any other electronic device that processes data.

FIG. 15 is a block diagram of an example RF device 2200 (e.g., an RF transceiver) that may include one or more components with one or more boost switch drivers according to any of the embodiments disclosed herein. For example, any suitable ones of the components of the RF device 2200 may include a die that includes at least one boost switch driver, or a portion thereof, configured to drive one or more switches according to any of the embodiments disclosed herein. In some embodiments, the RF device 2200 may be included within, or may be coupled to, any of the components of the system 2100 as described with reference to FIG. 14, e.g., the memory 2104 and/or the processing device 2102. In still other embodiments, the RF device 2200 may further include any of the components described with reference to FIG. 14, such as, but not limited to, the battery/power circuitry 2114, the memory 2104, and the various input and output devices shown in FIG. 14.

In general, the RF device 2200 may be any device or system that can support wireless transmission and/or reception of signals in the form of electromagnetic waves in the RF range of approximately 3 kilohertz (kHz) to approximately 300 gigahertz (GHz). In some embodiments, the RF device 2200 may be used for wireless communications, e.g., in a base station (BS) or a user equipment (UE) device of any suitable cellular wireless communications technology, such as GSM, WCDMA, or LTE. In a further example, the RF device 2200 may be used as, or in, a BS or UE device of millimeter-wave wireless technologies such as fifth generation (5G) wireless (i.e., high-frequency/short-wavelength spectrum, e.g., with frequencies in the range between about 20 GHz and 60 GHz, corresponding to wavelengths in the range between about 5 millimeters and 15 millimeters). In yet another example, the RF device 2200 may be used for wireless communications using Wi-Fi technology (e.g., the 2.4 GHz frequency band, corresponding to a wavelength of about 12 cm, or the 5.8 GHz frequency band, corresponding to a wavelength of about 5 cm), e.g., in a Wi-Fi-enabled device such as a desktop computer, a laptop computer, a video game console, a smartphone, a tablet, a smart TV, a digital audio player, a car, a printer, etc. In some implementations, a Wi-Fi-enabled device may, e.g., be a node in a smart system configured to communicate data with other nodes (e.g., smart sensors). In still another example, the RF device 2200 may be used for wireless communications using Bluetooth technology (e.g., a frequency band from about 2.4 GHz to about 2.485 GHz, corresponding to a wavelength of about 12 cm).
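As a quick arithmetic check of the frequency-to-wavelength correspondences quoted above (a free-space approximation using lambda = c/f, not a statement about any particular device):

```python
# Free-space wavelength check for the bands quoted above, using lambda = c / f.
C = 299_792_458.0  # speed of light in vacuum, m/s

bands_hz = {
    "Wi-Fi 2.4 GHz / Bluetooth": 2.4e9,   # ~12 cm, as quoted above
    "Wi-Fi 5.8 GHz": 5.8e9,               # ~5 cm
    "mmWave 20 GHz": 20e9,                # ~15 mm
    "mmWave 60 GHz": 60e9,                # ~5 mm
}

for label, f_hz in bands_hz.items():
    wavelength_cm = C / f_hz * 100.0
    print(f"{label}: ~{wavelength_cm:.1f} cm")
```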
In other embodiments, the RF device 2200 may be used to transmit and/or receive RF signals for purposes other than communication, e.g., in automotive radar systems, or in medical applications such as magnetic resonance imaging (MRI).

In various embodiments, the RF device 2200 may be included in frequency-division duplex (FDD) or time-domain duplex (TDD) variants of frequency allocations that may be used in a cellular network. In an FDD system, the uplink (i.e., the transmission of RF signals from a UE device to a BS) and the downlink (i.e., the transmission of RF signals from a BS to a UE device) may use separate frequency bands at the same time. In a TDD system, the uplink and the downlink may use the same frequencies but at different times.

Several components are shown in FIG. 15 as being included in the RF device 2200, but any one or more of these components may be omitted or duplicated, as suitable for the application. For example, in some embodiments, the RF device 2200 may be an RF device supporting both wireless transmission and reception of RF signals (e.g., an RF transceiver), in which case it may include both the components of what is referred to herein as a transmit (TX) path and the components of what is referred to herein as a receive (RX) path. However, in other embodiments, the RF device 2200 may be an RF device supporting only wireless reception (e.g., an RF receiver), in which case it may include the components of the RX path but not the components of the TX path, or the RF device 2200 may be an RF device supporting only wireless transmission (e.g., an RF transmitter), in which case it may include the components of the TX path but not the components of the RX path.

In some embodiments, some or all of the components included in the RF device 2200 may be attached to one or more motherboards. In some embodiments, some or all of the components are fabricated on a single die, e.g., on a single SoC die.

Additionally, in various embodiments, the RF device 2200 may not include one or more of the components shown in FIG. 15, but the RF device 2200 may include interface circuitry for coupling to the one or more components. For example, the RF device 2200 may not include an antenna 2202, but may include antenna interface circuitry (e.g., a matching circuit, a connector, and driver circuitry) to which an antenna 2202 may be coupled. In another set of examples, the RF device 2200 may not include a digital processing unit 2208 or a local oscillator 2206, but may include device interface circuitry (e.g., connectors and supporting circuitry) to which a digital processing unit 2208 or a local oscillator 2206 may be coupled.

As shown in FIG. 15, the RF device 2200 may include an antenna 2202, a duplexer 2204, a local oscillator 2206, and a digital processing unit 2208. As also shown in FIG. 15, the RF device 2200 may include an RX path, which may include an RX path amplifier 2212, an RX path premix filter 2214, an RX path mixer 2216, an RX path postmix filter 2218, and an ADC 2220. As further shown in FIG. 15, the RF device 2200 may include a TX path, which may include a TX path amplifier 2222, a TX path postmix filter 2224, a TX path mixer 2226, a TX path premix filter 2228, and a DAC 2230. Still further, the RF device 2200 may include an impedance tuner 2232, an RF switch 2234, and control logic 2236. In various embodiments, the RF device 2200 may include multiple instances of any of the components shown in FIG. 15. In some embodiments, the RX path amplifier 2212, the TX path amplifier 2222, the duplexer 2204, and the RF switch 2234 may be considered to form, or be a part of, an RF front end (FE) of the RF device 2200.
In some embodiments, the RX path mixer 2216 and the TX path mixer 2226 (possibly with their associated premix and postmix filters shown in FIG. 15) may be considered to form, or be a part of, an RF transceiver of the RF device 2200 (or an RF receiver or an RF transmitter, respectively, if only RX path components or only TX path components are included in the RF device 2200). In some embodiments, the RF device 2200 may further include one or more control logic elements/circuits, shown in FIG. 15 as control logic 2236, e.g., an RF FE control interface. In some embodiments, the control logic 2236 may be configured to control at least portions of the operation of any of the boost switch drivers within any of the components of the RF device 2200 as described herein. In some embodiments, the control logic 2236 may be used to perform control of other functions within the RF device 2200, e.g., enhancing control of complex RF system environments, supporting implementation of envelope tracking techniques, reducing dissipated power, and the like.

The antenna 2202 may be configured to wirelessly transmit and/or receive RF signals in accordance with any wireless standards or protocols (e.g., Wi-Fi, LTE, or GSM), as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. If the RF device 2200 is an FDD transceiver, the antenna 2202 may be configured for concurrent reception and transmission of communication signals in separate (i.e., non-overlapping and non-contiguous) frequency bands, e.g., in frequency bands separated from each other by, e.g., 20 MHz. If the RF device 2200 is a TDD transceiver, the antenna 2202 may be configured for sequential reception and transmission of communication signals in frequency bands that may be the same, or overlapping, for the TX and RX paths. In some embodiments, the RF device 2200 may be a multi-band RF device, in which case the antenna 2202 may be configured for concurrent reception of signals having multiple RF components in separate frequency bands and/or for concurrent transmission of signals having multiple RF components in separate frequency bands. In such embodiments, the antenna 2202 may be a single wide-band antenna or a plurality of band-specific antennas (i.e., a plurality of antennas each configured to receive and/or transmit signals in a specific band of frequencies). In various embodiments, the antenna 2202 may include a plurality of antenna elements, e.g., a plurality of antenna elements forming a phased antenna array (i.e., a communication system or an antenna array that may use a plurality of antenna elements and phase shifting to transmit and receive RF signals). Compared to a single-antenna system, a phased antenna array may offer advantages such as increased gain, the ability to perform directional steering, and simultaneous communication. In some embodiments, the RF device 2200 may include more than one antenna 2202 to implement antenna diversity. In some such embodiments, the RF switch 2234 may be deployed to switch between the different antennas.

The output of the antenna 2202 may be coupled to the input of the duplexer 2204. The duplexer 2204 may be any suitable component configured to filter multiple signals to allow for bidirectional communication over a single path between the duplexer 2204 and the antenna 2202.
The duplexer 2204 may be configured to provide RX signals to the RX path of the RF device 2200 and to receive TX signals from the TX path of the RF device 2200.

The RF device 2200 may include one or more local oscillators 2206, configured to provide local oscillator signals that may be used for downconversion of the RF signals received by the antenna 2202 and/or for upconversion of the signals to be transmitted by the antenna 2202.

The RF device 2200 may include a digital processing unit 2208, which may include one or more processing devices. In some embodiments, the digital processing unit 2208 may be implemented as the processing device 2102 shown in FIG. 14, a description of which is provided above (when used as the digital processing unit 2208, the processing device 2102 may, but does not have to, implement any of the boost switch drivers as described herein). The digital processing unit 2208 may be configured to perform various functions related to digital processing of the RX and/or TX signals. Examples of such functions include, but are not limited to, decimation/downsampling, error correction, digital downconversion or upconversion, DC offset cancellation, and automatic gain control. Although not shown in FIG. 15, in some embodiments, the RF device 2200 may further include a memory device, e.g., the memory 2104 as described with reference to FIG. 14, configured to cooperate with the digital processing unit 2208. When used within, or coupled to, the RF device 2200, the memory 2104 may, but does not have to, implement any of the boost switch drivers as described herein.

Turning to the details of the RX path that may be included in the RF device 2200, the RX path amplifier 2212 may include a low noise amplifier (LNA). The input of the RX path amplifier 2212 may be coupled to an antenna port (not shown) of the antenna 2202, e.g., via the duplexer 2204. The RX path amplifier 2212 may amplify the RF signals received by the antenna 2202.

The output of the RX path amplifier 2212 may be coupled to the input of the RX path premix filter 2214, which may be a harmonic or band-pass (e.g., low-pass) filter.

The output of the RX path premix filter 2214 may be coupled to the input of the RX path mixer 2216 (also referred to as a downconverter). The RX path mixer 2216 may include two inputs and one output. The first input may be configured to receive the RX signal, which may be a current signal, indicative of the signal received by the antenna 2202 (e.g., the first input may receive the output of the RX path premix filter 2214). The second input may be configured to receive a local oscillator signal from one of the local oscillators 2206. The RX path mixer 2216 may then mix the signals received at its two inputs to generate a downconverted RX signal, provided at the output of the RX path mixer 2216. As used herein, downconversion refers to a process of mixing a received RF signal with a local oscillator signal to generate a signal of a lower frequency. In particular, the RX path mixer (e.g., downconverter) 2216 may be configured to generate the sum and/or the difference frequencies at its output port when two input frequencies are provided at its two input ports. In some embodiments, the RF device 2200 may implement a direct-conversion receiver (DCR), also known as a homodyne, synchrodyne, or zero-IF receiver, in which case the RX path mixer 2216 may be configured to demodulate the incoming radio signal using a local oscillator signal whose frequency is identical to, or very close to, the carrier frequency of the radio signal. In other embodiments, the RF device 2200 may make use of downconversion to an intermediate frequency (IF).
The IF may be used in superheterodyne radio receivers, in which the received RF signal is shifted to an IF before the final detection of the information in the received signal is done. Conversion to an IF may be useful for several reasons. For example, when several stages of filters are used, they can all be set to a fixed frequency, which makes them easier to build and to tune. In some embodiments, the RX path mixer 2216 may include several such stages of IF conversion.

Although a single RX path mixer 2216 is shown in the RX path of FIG. 15, in some embodiments the RX path mixer 2216 may be implemented as a quadrature downconverter, in which case it would include a first RX path mixer and a second RX path mixer. The first RX path mixer may be configured for performing downconversion to generate an in-phase (I) downconverted RX signal by mixing the RX signal received by the antenna 2202 and an in-phase component of the local oscillator signal provided by the local oscillator 2206. The second RX path mixer may be configured for performing downconversion to generate a quadrature (Q) downconverted RX signal by mixing the RX signal received by the antenna 2202 and a quadrature component of the local oscillator signal provided by the local oscillator 2206 (the quadrature component is a component that is offset in phase from the in-phase component of the local oscillator signal by 90 degrees). The output of the first RX path mixer may be provided to an I-signal path, and the output of the second RX path mixer may be provided to a Q-signal path, which may be substantially 90 degrees out of phase with the I-signal path (a brief behavioral sketch of this I/Q mixing is given below).

The output of the RX path mixer 2216 may, optionally, be coupled to an RX path postmix filter 2218, which may be a low-pass filter. In case the RX path mixer 2216 is a quadrature mixer implementing the first and second mixers as described above, the in-phase and quadrature components provided at the outputs of the first and second mixers, respectively, may be coupled to respective individual first and second RX path postmix filters included in the filter 2218.

The ADC 2220 may be configured to convert the mixed RX signals from the RX path mixer 2216 from the analog domain to the digital domain. The ADC 2220 may be a quadrature ADC that, similar to the RX path quadrature mixer 2216, may include two ADCs configured to digitize the downconverted RX path signals separated into in-phase and quadrature components. The output of the ADC 2220 may be provided to the digital processing unit 2208, configured to perform various functions related to digital processing of the RX signals so that the information encoded in the RX signals may be extracted. One or more boost switch drivers according to any of the embodiments described herein may be included within the ADC 2220.

Turning to the details of the TX path that may be included in the RF device 2200, a digital signal to be later transmitted by the antenna 2202 (a TX signal) may be provided, from the digital processing unit 2208, to the DAC 2230. Similar to the ADC 2220, the DAC 2230 may include two DACs, configured to convert, respectively, the digital I-path and Q-path TX signal components to analog form.

Optionally, the output of the DAC 2230 may be coupled to a TX path premix filter 2228, which may be a band-pass (e.g., low-pass) filter (or a pair of band-pass, e.g., low-pass, filters, in the case of quadrature processing) configured to filter out, from the analog TX signals output by the DAC 2230, signal components outside of a desired frequency band.
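The quadrature mixing described above for the RX path (and mirrored in the TX path that follows) can be illustrated with a brief behavioral sketch. This is a numerical model only, with made-up frequencies, not a description of the mixer circuits of FIG. 15: an RF tone is multiplied by the in-phase and 90-degree-shifted components of a local oscillator, and a crude moving-average low-pass filter keeps the difference-frequency terms, yielding the I and Q baseband components.

```python
# Behavioral sketch of quadrature downconversion (all values hypothetical).
import numpy as np

fs = 1_000_000.0                     # sample rate, Hz
f_rf, f_lo = 101_000.0, 100_000.0    # RF carrier and local oscillator, Hz
t = np.arange(0.0, 0.01, 1.0 / fs)

rx = np.cos(2.0 * np.pi * f_rf * t + 0.3)   # received RF tone, 0.3 rad phase

lo_i = np.cos(2.0 * np.pi * f_lo * t)       # in-phase LO component
lo_q = -np.sin(2.0 * np.pi * f_lo * t)      # quadrature LO component (90 deg offset)

def lowpass(x, taps=101):
    """Crude moving-average low-pass filter; keeps the 1 kHz difference term."""
    return np.convolve(x, np.ones(taps) / taps, mode="same")

# Mixing creates sum (f_rf + f_lo) and difference (f_rf - f_lo) frequencies;
# low-pass filtering leaves only the difference term on each channel.
i_bb = lowpass(rx * lo_i)   # ~0.5 * cos(2*pi*1kHz*t + 0.3)
q_bb = lowpass(rx * lo_q)   # ~0.5 * sin(2*pi*1kHz*t + 0.3)

phase = np.angle(i_bb + 1j * q_bb)  # instantaneous phase recovered from I/Q (wrapped)
```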
The analog TX signal may then be provided to the TX path mixer 2226, which may also be referred to as an upconverter. Similar to the RX path mixer 2216, the TX path mixer 2226 may include a pair of TX path mixers for in-phase and quadrature component mixing. Similar to the first and second RX path mixers that may be included in the RX path, each of the TX path mixers of the TX path mixer 2226 may include two inputs and one output. The first input may receive the TX signal components, converted to analog form by the respective DAC 2230, that are to be upconverted to generate the RF signals to be transmitted. The first TX path mixer may generate an in-phase (I) upconverted signal by mixing the TX signal component converted to analog form by the DAC 2230 with an in-phase component of the TX path local oscillator signal provided by the local oscillator 2206 (in various embodiments, the local oscillator 2206 may include a plurality of different local oscillators, or be configured to provide different local oscillator frequencies to the mixer 2216 in the RX path and the mixer 2226 in the TX path). The second TX path mixer may generate a quadrature phase (Q) upconverted signal by mixing the TX signal component converted to analog form by the DAC 2230 with a quadrature component of the TX path local oscillator signal. The output of the second TX path mixer may be added to the output of the first TX path mixer to create a real RF signal. The second input of each of the TX path mixers may be coupled to the local oscillator 2206.

Optionally, the RF device 2200 may include a TX path postmix filter 2224, configured to filter the output of the TX path mixer 2226.

The TX path amplifier 2222 may be a power amplifier (PA), configured to amplify the upconverted RF signal before providing it to the antenna 2202 for transmission.

In various embodiments, any of the RX path premix filter 2214, the RX path postmix filter 2218, the TX postmix filter 2224, and the TX premix filter 2228 may be implemented as RF filters. In some embodiments, an RF filter may be implemented as a plurality of RF filters, or a filter bank. A filter bank may include a plurality of RF filters that may be coupled to a switch (e.g., the RF switch 2234) configured to selectively switch any one of the plurality of RF filters on and off (e.g., to activate any one of the plurality of RF filters), in order to achieve desired filtering characteristics of the filter bank (i.e., in order to program the filter bank). For example, such a filter bank may be used to switch between different RF frequency ranges when the RF device 2200 is, or is included in, a BS or a UE device. In another example, such a filter bank may be programmable to suppress TX leakage over different duplex distances.

The impedance tuner 2232 may include any suitable circuitry configured to match the input and output impedances of the different RF circuitries in order to minimize signal losses in the RF device 2200. For example, the impedance tuner 2232 may include an antenna impedance tuner. Being able to tune the impedance of the antenna 2202 may be particularly advantageous because the antenna's impedance is a function of the environment in which the RF device 2200 is located, e.g., whether the antenna is held in a hand, placed on a car's roof, etc.

As described above, the RF switch 2234 may be a device configured to route high-frequency signals through transmission paths, e.g., in order to selectively switch between a plurality of instances of any of the components shown in FIG. 15, e.g., to achieve desired behavior and characteristics of the RF device 2200. For example, in some embodiments, an RF switch may be used to switch between different antennas 2202.
In other embodiments, an RF switch may be used to switch between the multiple RF filters of the RF device 2200 (e.g., by selectively switching the RF filters on and off). Typically, an RF system would include a plurality of such RF switches.

The RF device 2200 as shown in FIG. 15 is a simplified version and, in other embodiments, may include other components not specifically shown in FIG. 15. For example, the RX path of the RF device 2200 may include a current-to-voltage amplifier between the RX path mixer 2216 and the ADC 2220, which may be configured to amplify the downconverted signals and convert them to voltage signals. In another example, the RX path of the RF device 2200 may include a balun for generating balanced signals. In yet another example, the RF device 2200 may further include a clock generator, which may, e.g., include a suitable phase-locked loop (PLL), configured to receive a reference clock signal and use it to generate different clock signals that may then be used for timing the operation of the ADC 2220 and the DAC 2230, and/or that may also be used by the local oscillator 2206 to generate the local oscillator signals to be used in the RX path or the TX path.

Example Data Processing System

FIG. 16 provides a block diagram illustrating an example data processing system 2300 that may be configured to control operation of one or more boost switch drivers as described herein, according to some embodiments of the present disclosure. For example, the data processing system 2300 may be configured to implement or control portions of the boost switch drivers 204, 804, or of any other embodiment of the boost switch drivers as described herein. In another example, the data processing system 2300 may be configured to implement at least portions of the control logic 2236 shown in FIG. 15.

As shown in FIG. 16, the data processing system 2300 may include at least one processor 2302 (e.g., a hardware processor 2302), coupled to memory elements 2304 through a system bus 2306. As such, the data processing system may store program code within the memory elements 2304. Furthermore, the processor 2302 may execute the program code accessed from the memory elements 2304 via the system bus 2306. In one aspect, the data processing system may be implemented as a computer that is suitable for storing and/or executing program code. It should be appreciated, however, that the data processing system 2300 may be implemented in the form of any system including a processor and a memory that is capable of performing the functions described within this disclosure.

In some embodiments, the processor 2302 may execute software or an algorithm to perform the activities discussed in the present disclosure, in particular activities related to operating the boost switch drivers as described herein. The processor 2302 may include any combination of hardware, software, or firmware providing programmable logic, including by way of non-limiting example a microprocessor, a digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic array (PLA), an application specific integrated circuit (ASIC), or a virtual machine processor. The processor 2302 may be communicatively coupled to the memory elements 2304, for example in a direct-memory access (DMA) configuration, so that the processor 2302 may read from or write to the memory elements 2304.

In general, the memory elements 2304 may include any suitable volatile or non-volatile memory technology, including double data rate (DDR) random access memory (RAM), synchronous RAM (SRAM), dynamic RAM (DRAM), flash, read-only memory (ROM), optical media, virtual memory regions, magnetic or tape memory, or any other suitable technology.
Unless specified otherwise, any of the memory elements discussed herein should be construed as being encompassed within the broad term "memory." The information being measured, processed, tracked, or sent to or from any of the components of the data processing system 2300 could be provided in any database, register, control list, cache, or storage structure, all of which may be referenced at any suitable timeframe. Any such storage options may be included within the broad term "memory" as used herein. Similarly, any of the potential processing elements, modules, and machines described herein should be construed as being encompassed within the broad term "processor." Each of the elements shown in the figures of the present disclosure, e.g., any of the elements illustrating the boost switch drivers as shown in FIGS. 1-12, may also include suitable interfaces for receiving, transmitting, and/or otherwise communicating data or information in a network environment so that they can communicate with, e.g., the data processing system 2300.

In certain example implementations, mechanisms for implementing one or more boost switch drivers as outlined herein may be implemented by logic encoded in one or more tangible media, which may include non-transitory media, e.g., embedded logic provided in an ASIC, DSP instructions, software (potentially including object code and source code) to be executed by a processor or other similar machine, etc. In some of these instances, memory elements, such as, e.g., the memory elements 2304 shown in FIG. 16, may store data or information used for the operations described herein. This includes the memory elements being able to store software, logic, code, or processor instructions that are executed to carry out the activities described herein. A processor may execute any type of instructions associated with the data or information to achieve the operations detailed herein. In one example, a processor, such as, e.g., the processor 2302 shown in FIG. 16, could transform an element or an article (e.g., data) from one state or thing to another state or thing. In another example, the activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software/computer instructions executed by a processor), and the elements identified herein could be some type of a programmable processor, programmable digital logic (e.g., an FPGA, a DSP, an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM)), or an ASIC that includes digital logic, software, code, electronic instructions, or any suitable combination thereof.

The memory elements 2304 may include one or more physical memory devices such as, e.g., a local memory 2308 and one or more mass storage devices 2310. The local memory may refer to RAM or other non-persistent memory device(s) generally used during actual execution of the program code. A mass storage device may be implemented as a hard drive or other persistent data storage device. The processing system 2300 may also include one or more cache memories (not shown) that provide temporary storage of at least some program code in order to reduce the number of times the program code must be retrieved from the mass storage device 2310 during execution.

As shown in FIG. 16, the memory elements 2304 may store an application 2318. In various embodiments, the application 2318 may be stored in the local memory 2308, the one or more mass storage devices 2310, or apart from the local memory and the mass storage devices.
It should be appreciated that the data processing system 2300 may further execute an operating system (not shown in FIG. 16) that can facilitate execution of the application 2318. The application 2318, being implemented in the form of executable program code, can be executed by the data processing system 2300, e.g., by the processor 2302. Responsive to executing the application, the data processing system 2300 may be configured to perform one or more operations or method steps described herein.

Input/output (I/O) devices, depicted as an input device 2312 and an output device 2314, optionally, may be coupled to the data processing system. Examples of input devices may include, but are not limited to, a keyboard, a pointing device such as a mouse, etc. Examples of output devices may include, but are not limited to, a monitor or a display, speakers, etc. In some embodiments, the output device 2314 may be any type of screen display, such as a plasma display, a liquid crystal display (LCD), an organic light emitting diode (OLED) display, an electroluminescent (EL) display, or any other indicator, such as a dial, a barometer, or an LED. In some implementations, the system may include a driver (not shown) for the output device 2314. Input and/or output devices 2312, 2314 may be coupled to the data processing system either directly or through intervening I/O controllers.

In an embodiment, the input and the output devices may be implemented as a combined input/output device (shown in phantom in FIG. 16, surrounding the input device 2312 and the output device 2314). An example of such a combined device is a touch-sensitive display, also sometimes referred to as a "touch screen display" or simply a "touch screen." In such an embodiment, input to the device may be provided by a movement of a physical object, such as, e.g., a stylus or a finger of a user, on or near the touch screen display.

A network adapter 2316 may also, optionally, be coupled to the data processing system to enable it to become coupled to other systems, computer systems, remote network devices, and/or remote storage devices through intervening private or public networks. The network adapter may comprise a data receiver for receiving data that is transmitted by said systems, devices, and/or networks to the data processing system 2300, and a data transmitter for transmitting data from the data processing system 2300 to said systems, devices, and/or networks. Modems, cable modems, and Ethernet cards are examples of different types of network adapters that may be used with the data processing system 2300.

Select Examples

The following paragraphs provide various examples of the embodiments disclosed herein.

Example 1 provides a switch driver circuit that includes an input configured to receive an input clock signal; an output configured to provide an output clock signal; a first transistor (e.g., transistor m5 shown in the figures of the present disclosure); a second transistor (e.g., transistor m6 shown in the figures of the present disclosure), wherein each of the first transistor and the second transistor includes a first terminal (e.g., a gate terminal) and a second terminal (e.g., a drain terminal), and wherein one of the first transistor and the second transistor is a P-type transistor (e.g., a PMOS transistor) and the other one is an N-type transistor (e.g., an NMOS transistor); and a level shifter circuit configured to level shift the input clock signal to generate a level shifted input clock signal.
The first terminal of the first transistor is configured to receive a signal indicative of the input clock signal (e.g., the input clock signal itself, or a level shifted version of the input clock signal that has been shifted in a manner complementary to the signal to be provided to the second transistor), the first terminal of the second transistor is configured to receive a signal indicative of the level shifted input clock signal, and each of the second terminal of the first transistor and the second terminal of the second transistor is coupled to the output (i.e., the second terminal of the first transistor is coupled to the second terminal of the second transistor, and both are coupled to the output).

Example 2 provides the switch driver circuit according to example 1, wherein the input clock signal has a low voltage value and a high voltage value, and level shifting the input clock signal includes the level shifter circuit changing each of the low voltage value and the high voltage value of the input clock signal to generate the level shifted input clock signal.

Example 3 provides the switch driver circuit according to examples 1 or 2, wherein the level shifter circuit includes a coupling capacitor and a voltage controller circuit, and the first terminal of the second transistor receives the signal indicative of the level shifted input clock signal by having a first capacitor electrode of the coupling capacitor coupled to the input and a second capacitor electrode of the coupling capacitor coupled to each of the voltage controller circuit and the first terminal of the second transistor.

Example 4 provides the switch driver circuit according to example 3, wherein the voltage controller circuit is configured to control a high voltage value in the level shifted input clock signal. Such a voltage controller circuit may be referred to as a "maximum level controller" because it sets the maximum voltage value of the level shifted input clock signal.
The minimum voltage value is then adjusted automatically, based on the ratio between the capacitance of the coupling capacitor and the capacitance of the load for the voltage controller circuit.

Example 5 provides the switch driver circuit according to example 4, wherein a third terminal of the second transistor is coupled to a supply voltage, and the supply voltage has a value corresponding to (e.g., being substantially equal to) the high voltage value in the level shifted input clock signal.

Example 6 provides the switch driver circuit according to examples 4 or 5, wherein the voltage controller circuit includes a pair of cross-coupled transistors, each including a first terminal (e.g., a gate terminal), a second terminal (e.g., a drain terminal), and a third terminal (e.g., a source terminal), the first terminal of a first transistor of the pair of cross-coupled transistors is coupled to the second terminal of a second transistor of the pair of cross-coupled transistors, the first terminal of the second transistor of the pair of cross-coupled transistors is coupled to the second terminal of the first transistor of the pair of cross-coupled transistors, the third terminal of each of the first transistor and the second transistor of the pair of cross-coupled transistors is coupled to a reference voltage, and a value of the reference voltage corresponds to (e.g., is substantially equal to) the high voltage value in the level shifted input clock signal.

Example 7 provides the switch driver circuit according to any one of examples 4-6, wherein the first transistor is an N-type transistor and the second transistor is a P-type transistor.

Example 8 provides the switch driver circuit according to any one of examples 4-6, wherein the level shifter circuit is a first level shifter circuit, the switch driver circuit further includes a second level shifter circuit, and the second level shifter circuit is configured to control a low voltage level in the output clock signal.

Example 9 provides the switch driver circuit according to example 3, wherein the voltage controller circuit is configured to control a low voltage value in the level shifted input clock signal. Such a voltage controller circuit may be referred to as a "minimum level controller" because it sets the minimum voltage value of the level shifted input clock signal. The maximum voltage value is then adjusted automatically, based on the ratio between the capacitance of the coupling capacitor and the capacitance of the load for the voltage controller circuit.

Example 10 provides the switch driver circuit according to example 9, wherein a third terminal of the second transistor is coupled to a supply voltage, and the supply voltage has a value corresponding to (e.g., being substantially equal to) the low voltage value in the level shifted input clock signal.

Example 11 provides the switch driver circuit according to examples 9 or 10, wherein the voltage controller circuit includes a pair of cross-coupled transistors, each including a first terminal (e.g., a gate terminal), a second terminal (e.g., a drain terminal), and a third terminal (e.g., a source terminal).
The first terminal of a first transistor of the pair of cross-coupled transistors is coupled to the second terminal of a second transistor of the pair of cross-coupled transistors, the first terminal of the second transistor of the pair of cross-coupled transistors is coupled to the second terminal of the first transistor of the pair of cross-coupled transistors, the third terminal of each of the first transistor and the second transistor of the pair of cross-coupled transistors is coupled to a supply voltage, and the supply voltage has a value corresponding to (e.g., being substantially equal to) the low voltage value in the level shifted input clock signal.

Example 12 provides the switch driver circuit according to any one of examples 9-11, wherein the first transistor is a P-type transistor and the second transistor is an N-type transistor.

Example 13 provides the switch driver circuit according to any one of examples 9-12, wherein the level shifter circuit is a first level shifter circuit, the switch driver circuit further includes a second level shifter circuit, and the second level shifter circuit is configured to control a high voltage level in the output clock signal.

Example 14 provides the switch driver circuit according to any one of the preceding examples, further including a third transistor coupled to the first transistor in a cascode arrangement, wherein the second terminal of the first transistor is coupled to the output by having the second terminal of the first transistor coupled to a third terminal (e.g., a source terminal) of the third transistor and having a second terminal (e.g., a drain terminal) of the third transistor coupled to the output. In various embodiments, a first terminal (e.g., a gate terminal) of the third transistor may be coupled to a suitable reference voltage. For example, the reference voltage may be about 1V for embodiments where the third transistor is an N-type transistor, or the reference voltage may be about 0V (ground) for embodiments where the third transistor is a P-type transistor.

Example 15 provides the switch driver circuit according to any one of the preceding examples, wherein each of the first transistor and the second transistor is a field-effect transistor, and wherein the first terminal is a gate terminal, the second terminal is a drain terminal, and the third terminal is a source terminal.

Example 16 provides a switch driver circuit that includes a first branch including a first transistor (e.g., transistor m5 shown in the figures of the present disclosure); and a second branch including a second transistor (e.g., transistor m6 shown in the figures of the present disclosure) and a level shifter circuit. An input clock signal is to be split between the first branch and the second branch so that a signal indicative of a portion of the input clock signal split into the first branch is provided to the first transistor, and a portion of the input clock signal split into the second branch is level shifted by the level shifter circuit to generate a level shifted input clock signal, with a signal indicative of the level shifted input clock signal provided to the second transistor. One of the first transistor and the second transistor is an N-type transistor and the other one is a P-type transistor.
The output of the first transistor is combined with the output of the second transistor to generate an output clock signal.

Example 17 provides the switch driver circuit according to example 16, wherein each of the first transistor and the second transistor includes a first terminal (e.g., a gate terminal), a second terminal (e.g., a drain terminal), and a third terminal (e.g., a source terminal), the signal indicative of the portion of the input clock signal split into the first branch is provided to the first terminal of the first transistor, and the signal indicative of the level shifted input clock signal is provided to the first terminal of the second transistor.

Example 18 provides the switch driver circuit according to example 17, wherein the level shifter circuit includes a capacitor and a voltage controller circuit, the portion of the input clock signal split into the second branch is configured to be applied to a first capacitor electrode of the capacitor, and a second capacitor electrode of the capacitor is coupled to each of the output of the voltage controller circuit and the first terminal of the second transistor.

Example 19 provides the switch driver circuit according to examples 17 or 18, wherein the second terminal of the first transistor is coupled to the second terminal of the second transistor, the third terminal of the first transistor is coupled to a first supply voltage, and the third terminal of the second transistor is coupled to a second supply voltage.

Example 20 provides a method of making a switch driver circuit, the method including providing an input configured to receive an input clock signal; providing an output configured to provide an output clock signal; providing a first transistor (e.g., transistor m5 shown in the figures of the present disclosure); providing a second transistor (e.g., transistor m6 shown in the figures of the present disclosure), wherein each of the first transistor and the second transistor includes a first terminal (e.g., a gate terminal) and a second terminal (e.g., a drain terminal), and wherein one of the first transistor and the second transistor is a P-type transistor (e.g., a PMOS transistor) and the other one is an N-type transistor (e.g., an NMOS transistor); and providing a level shifter circuit configured to level shift the input clock signal to generate a level shifted input clock signal, wherein the first terminal of the first transistor is configured to receive a signal indicative of the input clock signal, the first terminal of the second transistor is configured to receive a signal indicative of the level shifted input clock signal, and each of the second terminal of the first transistor and the second terminal of the second transistor is coupled to the output (i.e., the second terminal of the first transistor is coupled to the second terminal of the second transistor, and both are coupled to the output).

Example 21 provides the method according to example 20, wherein the switch driver circuit is the switch driver circuit according to any one of examples 1-19.
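The split-branch topology of examples 16-19 can be summarized with a brief behavioral sketch. This is an idealized, hypothetical model, not a netlist of the circuits disclosed herein: the supply, threshold, and clock levels are illustrative assumptions (chosen to match the -0.4V/1V and 0.5V/1.8V example levels discussed earlier), and real devices conduct gradually rather than as ideal switches.

```python
# Idealized behavioral sketch (hypothetical values) of the split-branch
# driver of examples 16-19: an N-type device driven by the input clock and
# a P-type device driven by the level shifted clock share a common output.

V_SUPPLY_N = 0.0   # third terminal of the N-type branch (first supply voltage)
V_SUPPLY_P = 1.8   # third terminal of the P-type branch (second supply voltage)
VTH = 0.5          # hypothetical |threshold| for both devices

def driver_output(v_clk_in, v_clk_shifted):
    """Each device conducts when its gate-source voltage exceeds the
    threshold; in this idealized model exactly one branch conducts at a time."""
    nmos_on = (v_clk_in - V_SUPPLY_N) > VTH       # N-type pulls the output low
    pmos_on = (V_SUPPLY_P - v_clk_shifted) > VTH  # P-type pulls the output high
    if nmos_on and not pmos_on:
        return V_SUPPLY_N
    if pmos_on and not nmos_on:
        return V_SUPPLY_P
    return None  # contention or high impedance in this idealized model

# Clock low: input at -0.4 V, shifted copy at 0.5 V -> P-type on, output 1.8 V.
# Clock high: input at 1.0 V, shifted copy at 1.8 V -> N-type on, output 0.0 V.
print(driver_output(-0.4, 0.5))  # 1.8
print(driver_output(1.0, 1.8))   # 0.0
```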
Variations and Implementations

Although embodiments of the present disclosure were described above with reference to the example implementations shown in FIGS. 1-16, those skilled in the art will recognize that the various teachings described above are applicable to a large variety of other implementations.

In the discussions of the embodiments above, components of a system, such as, e.g., inverters, resistors, transistors, and/or other components, can readily be replaced, substituted, or otherwise modified in order to accommodate particular circuitry needs. Moreover, it should be noted that the use of complementary electronic devices, hardware, software, etc. offers an equally viable option for implementing the teachings of the present disclosure related to implementing one or more boost switch drivers.

Parts of various systems for implementing one or more boost switch drivers as proposed herein can include electronic circuitry to perform the functions described herein. In some cases, one or more parts of the system can be provided by a processor specially configured for carrying out the functions described herein. For instance, the processor may include one or more application-specific components, or may include programmable logic gates that are configured to carry out the functions described herein. The circuitry can operate in the analog domain, the digital domain, or in a mixed-signal domain. In some instances, the processor may be configured to carry out the functions described herein by executing one or more instructions stored on a non-transitory computer-readable storage medium.

In some embodiments, any number of the electrical circuits of the present figures may be implemented on a board of an associated electronic device. The board can be a general circuit board that can hold various components of the internal electronic system of the electronic device and, further, provide connectors for other peripherals. More specifically, the board can provide the electrical connections by which the other components of the system can communicate electrically. Any suitable processors (inclusive of DSPs, microprocessors, supporting chipsets, etc.) and computer-readable non-transitory memory elements can be suitably coupled to the board based on particular configuration needs, processing demands, computer designs, etc. Other components, such as external storage, additional sensors, controllers for audio/video display, and peripheral devices, may be attached to the board as plug-in cards, via cables, or integrated into the board itself. In various embodiments, the functionalities described herein may be implemented in emulation form as software or firmware running within one or more configurable (e.g., programmable) elements arranged in a structure that supports these functions. The software or firmware providing the emulation may be provided on a non-transitory computer-readable storage medium comprising instructions to allow a processor to carry out those functionalities.

In some embodiments, the electrical circuits of the present figures may be implemented as stand-alone modules (e.g., a device with associated components and circuitry configured to perform a specific application or function) or implemented as plug-in modules into application-specific hardware of electronic devices. Note that embodiments of the present disclosure may be readily included in a system-on-chip (SOC) package, either in part or in whole. An SOC represents an IC that integrates components of a computer or other electronic system into a single chip. It may contain digital, analog, mixed-signal, and often RF functions, all of which may be provided on a single chip substrate.
Other embodiments may include a multi-chip module (MCM), with a plurality of separate ICs located within a single electronic package and configured to interact closely with each other through the electronic package.

Note that all of the specifications, dimensions, and relationships outlined herein (e.g., the number of components of the boost switch drivers, or portions thereof, shown in the figures, etc.) have been offered only for purposes of example and teaching. Such information may be varied considerably without departing from the spirit of the present disclosure or the scope of the appended claims. The specifications apply only to one non-limiting example and, accordingly, should be construed as such. In the foregoing description, example embodiments have been described with reference to particular processor and/or component arrangements. Various modifications and changes may be made to such embodiments without departing from the scope of the appended claims. The description and drawings are, accordingly, to be regarded in an illustrative rather than in a restrictive sense.

Note that with the numerous examples provided herein, interaction may be described in terms of two, three, four, or more electrical components. However, this has been done for purposes of clarity and example only. It should be appreciated that the system can be consolidated in any suitable manner. Along similar design alternatives, any of the illustrated components, modules, and elements of the present figures may be combined in various possible configurations, all of which are clearly within the broad scope of the present disclosure. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by only referencing a limited number of electrical elements. It should be appreciated that the electrical circuits of the present figures and their teachings are readily scalable and can accommodate a large number of components, as well as more complicated or sophisticated arrangements and configurations. Accordingly, the examples provided should not limit the scope or inhibit the broad teachings of the electrical circuits as potentially applied to a myriad of other architectures.

Furthermore, the functions related to implementing one or more boost switch drivers as proposed herein illustrate only some of the possible functions that may be executed by, or within, the systems illustrated in the figures. Some of these operations may be deleted or removed where appropriate, or these operations may be modified or changed considerably without departing from the scope of the present disclosure. In addition, the timing of these operations may be altered considerably. The preceding operational flows have been offered for purposes of example and discussion. Substantial flexibility is provided by the embodiments described herein in that any suitable arrangements, chronologies, configurations, and timing mechanisms may be provided without departing from the teachings of the present disclosure.

Note that all optional features of the apparatus described above may also be implemented with respect to the method or process described herein, and specifics in the examples may be used anywhere in one or more embodiments.

Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims. |
In the fabrication of integrated circuits, one specific technique for making surfaces flat is chemical-mechanical planarization. However, this technique is quite time consuming and expensive, particularly as applied to the numerous intermetal dielectric layers - the insulative layers sandwiched between layers of metal wiring - in integrated circuits. Accordingly, the inventor devised several methods for making nearly planar intermetal dielectric layers without the use of chemical-mechanical planarization and methods of modifying metal layout patterns to facilitate formation of dielectric layers with more uniform thickness. These methods of modifying metal layouts and making dielectric layers can be used in sequence to yield nearly planar intermetal dielectric layers with more uniform thickness. |
Claims 1. A method of forming a nearly planar dielectric film on a metal layer, comprising: forming a metal layer having a predetermined maximum feature spacing; forming an oxide layer on the metal layer using a TEOS-based procedure; facet etching the oxide layer; and reflowing at least a portion of the oxide layer.2. The method of claim 1, wherein forming the metal layer comprises forming a metal layer with a maximum feature spacing of 0.3 microns.3. The method of claim 1, wherein forming the metal layer comprises forming metal runners and wherein forming the oxide layer forms oxide on one or more sidewalls of the metal runners.4. The method of claim 1, wherein forming the oxide layer comprises forming a portion of the oxide layer using a TEOS-based procedure at a first deposition rate and forming a portion of the oxide layer using a TEOS-based procedure at a second deposition rate which is less than the first deposition rate.5. The method of claim 1, wherein forming the oxide layer comprises forming a portion of the oxide layer using a TEOS-based procedure at a first deposition rate having a tendency to form voids and forming a portion of the oxide layer using a TEOS-based procedure at a second deposition rate having a tendency to form substantially no voids or fewer voids than the first deposition rate.6. The method of claim 1, further comprising facet etching the oxide layer to reduce the severity of any trenches in the oxide layer overlying gaps between metal features in the metal layer.7. A method of forming a nearly planar dielectric film on a metal layer, comprising: forming a metal layer having a predetermined maximum feature spacing of 0.3 microns; forming a first oxide layer using a TEOS-based procedure at a first deposition rate on the metal layer; forming a second oxide layer on the first oxide layer using a TEOS-based procedure at a second deposition rate which is less than the first deposition rate; and facet etching the second oxide layer.8. A method of forming a nearly planar dielectric film on a metal layer, comprising: forming a metal layer; depositing a film resistant to lateral etching on the metal layer; etching the metal layer to form a metal pattern having a predetermined maximum feature spacing of 0.3 microns; forming a first oxide layer using a TEOS-based procedure at a first deposition rate on the metal pattern; forming a second oxide layer on the first oxide layer using a TEOS-based procedure at a second deposition rate which is less than the first deposition rate; and facet etching the second oxide layer.9. A method of forming a nearly planar dielectric film on a metal layer, comprising: forming a metal layer; depositing a film resistant to lateral etching on the metal layer; etching the metal layer to form a metal pattern having a predetermined maximum feature spacing; forming two or more oxide spacers on two or more metal features of the metal layer to provide an effective space less than about the predetermined maximum feature spacing between the two or more oxide spacers; forming a first oxide layer using a TEOS-based procedure at a first deposition rate on the metal pattern; forming a second oxide layer on the first oxide layer using a TEOS-based procedure at a second deposition rate which is less than the first deposition rate; and facet etching the second oxide layer.10. 
A method of forming a nearly planar dielectric film on a metal layer, comprising: forming a metal layer including two or more metal features spaced with a maximum feature spacing greater than 0.3 microns; forming an oxide spacer on the two or more metal features to provide oxide features spaced by less than the maximum feature spacing; and forming an oxide layer on the metal layer having a thickness less than about 6000 angstroms using a TEOS-based procedure.11. The method of claim 10, wherein forming the oxide layer comprises forming a portion of the oxide layer using a TEOS-based procedure at a first deposition rate and forming a portion of the oxide layer using a TEOS-based procedure at a second deposition rate which is less than the first deposition rate.12. The method of claim 10, wherein forming the oxide layer comprises forming a portion of the oxide layer using a TEOS-based procedure at a first deposition rate having a tendency to form voids and forming a portion of the oxide layer using a TEOS-based procedure at a second deposition rate having a tendency to form fewer voids than the first deposition rate.13. The method of claim 10, further comprising facet etching the oxide layer to reduce the severity of any trenches in the oxide layer overlying gaps between metal features in the metal layer. 14. The method of claim 10, wherein forming an oxide layer on the metal layer having a thickness less than about 6000 angstroms using a TEOS-based procedure comprises forming an oxide layer having a thickness less than about 4000 angstroms.15. A method of making nearly planar dielectric films on a metal layer, where maximum feature spacing cannot be reduced because of lateral electrical coupling concerns, the method comprising: forming a metal pattern with maximum feature spacing of about five microns; forming an oxide spacer on one or more metal features of the metal layer to provide an effective space less than about five microns; and executing a FLOW-FILL procedure to form a substantially void-free oxide layer on the metal layer and the oxide spacer.16. The method of claim 15, wherein forming the metal pattern includes forming a metal layer; depositing a film resistant to lateral etching on the metal layer; and etching the metal layer to form the metal pattern.17. The method of claim 16, wherein the film resistant to lateral etching is a TEOS oxide-nitride film.18. The method of claim 15, wherein forming the metal pattern includes forming a pattern including extensive serif features to avoid large open areas.19. A method of making a dielectric layer, comprising: depositing a dielectric material at a first deposition rate having a tendency to form voids; and depositing a dielectric material on the deposited dielectric material at a second deposition rate having a tendency to form substantially no voids or fewer voids than the first deposition rate. 20. A method of making a dielectric layer, comprising: depositing a dielectric material at a first deposition rate; and depositing a dielectric material on the deposited dielectric material at a second deposition rate less than the first deposition rate.21. A method of making a dielectric layer, comprising: depositing a dielectric material using a deposition process having a first conformal tendency; and depositing a dielectric material on the deposited dielectric material using a deposition process having a second conformal tendency greater than the first conformal tendency.22. 
The method of claim 21: wherein depositing a dielectric material using a deposition process having a first conformal tendency comprises depositing a dielectric material using a TEOS-based procedure at a first deposition rate; and wherein depositing a dielectric material using a deposition process having a second conformal tendency comprises depositing a dielectric material using a TEOS-based procedure at a second deposition rate lower than the first deposition rate.23. A method of making an integrated circuit, comprising: providing a first metal layout having a first pattern fill density; generating a metal layout pattern based on the first metal layout and having a second pattern fill density greater than the first pattern fill density; forming a metal pattern on a layer; depositing a dielectric material on the metal pattern using a deposition process having a first conformal tendency; and depositing a dielectric material on the deposited dielectric material using a deposition process having a second conformal tendency greater than the first conformal tendency. 24. A method of making an integrated circuit, comprising: providing a first metal layout having a first pattern fill density; generating a metal layout pattern based on the first metal layout and having a second pattern fill density greater than the first pattern fill density; forming a metal pattern on a layer; depositing a dielectric material on the metal pattern at a first deposition rate; and depositing a dielectric material on the deposited dielectric material at a second deposition rate less than the first deposition rate.25. A method of making an integrated circuit, comprising: providing a first metal layout having a first pattern fill density; generating a metal layout pattern based on the first metal layout and having a second pattern fill density greater than the first pattern fill density; forming a metal pattern on a layer; forming a first oxide layer on the metal pattern using a TEOS-based procedure at a first deposition rate; and forming a second oxide layer on the first oxide layer using a TEOS-based procedure at a second deposition rate lower than the first deposition rate.26. A method of increasing pattern-fill density of a metal layout, comprising: identifying and filling in open areas of the metal layout with floating metal; identifying and filling in notches of the metal layout; and identifying and filling in corners of the metal layout.27. A method of increasing pattern-fill density of a metal layout, comprising: identifying and filling in one or more open areas of the metal layout with floating metal; identifying and filling in one or more notches of the metal layout after identifying and filling in one or more open areas; and identifying and filling in one or more corners of the metal layout after filling in one or more notches.28. A method of increasing pattern-fill density of a metal layout, comprising: identifying and filling in one or more notches of the metal layout; identifying and filling in one or more corners of the metal layout; and identifying and filling in between opposing edges of live metal regions of the metal layout.29. 
A method of increasing pattern-fill density of a hierarchical metal layout pattern definition, comprising: identifying and filling in one or more notches, inner lines, and corners of the metal layout to define a first derivative metal layout pattern definition; determining whether the first derivative metal layout pattern definition has a predetermined pattern fill density; identifying and filling in one or more notches and corners of the metal layout to define a second derivative metal layout pattern definition; determining whether the second derivative metal layout pattern definition has the predetermined pattern fill density; and redefining one or more edges of the second derivative metal layout in response to determining that the second derivative metal layout does not have the predetermined pattern fill density.30. A computer-readable medium comprising: instructions for identifying and filling in one or more notches, inner lines, and corners of the metal layout to define a first derivative metal layout pattern definition; and instructions for determining whether the first derivative metal layout pattern definition has a predetermined pattern fill density.31. A computer-readable medium comprising: instructions for identifying and filling in one or more predetermined forms of non-metallic regions of a metal layout pattern definition to define a derivative metal layout pattern definition; and instructions for determining whether the derivative metal layout pattern definition has a predetermined pattern fill density.32. A system comprising: at least one processor; and a memory coupled to the processor and comprising: instructions for identifying and filling in one or more notches and corners of the metal layout to define a first derivative metal layout pattern definition; and instructions for determining whether the first derivative metal layout pattern definition has a predetermined pattern fill density.33. An integrated circuit comprising: one or more conductors; a first insulative layer which is substantially free of voids and which contacts the one or more conductors; and a second insulative layer which lies on the first insulative layer and which includes a substantial number of voids.34. The integrated circuit of claim 33, wherein the first and second insulative layers consist essentially of silicon oxide and have substantially different dielectric constants.35. An integrated memory circuit comprising: one or more memory cells; one or more conductors coupled to the one or more memory cells; a first insulative layer which is substantially free of voids and which contacts the one or more conductors; and a second insulative layer which lies on the first insulative layer and which includes a substantial number of voids.36. The integrated memory circuit of claim 35, wherein the first and second insulative layers consist essentially of silicon oxide and have substantially different dielectric constants.37. A system comprising: a processor; and at least one integrated memory circuit comprising: one or more memory cells; one or more conductors coupled to the one or more memory cells; a first insulative layer which is substantially free of voids and which contacts the one or more conductors; and a second insulative layer which lies on the first insulative layer and which includes a substantial number of voids.38. The system of claim 37, wherein the first and second insulative layers consist essentially of silicon oxide and have substantially different dielectric constants.39. 
The system of claim 1, wherein the processor is a digital signal processor. |
METHODS FOR MAKING NEARLY PLANAR DIELECTRIC FILMS IN INTEGRATED CIRCUITS Related Application This application is a continuation of U.S. Provisional Application 60/187,658, which was filed on March 7, 2000, and which is incorporated herein by reference. Technical Field The present invention concerns methods of making integrated circuits, particularly methods of making metal masks and dielectric, or insulative, films. Background of the Invention Integrated circuits, the key components in thousands of electronic and computer products, are interconnected networks of electrical components fabricated on a common foundation, or substrate. Fabricators typically build the circuits layer by layer, using techniques such as doping, masking, and etching to form thousands and even millions of microscopic resistors, transistors, and other electrical components on a silicon substrate, known as a wafer. The components are then wired, or interconnected, together to define a specific electric circuit, such as a computer memory. One important concern during fabrication is flatness, or planarity, of various layers of the integrated circuit. For example, planarity significantly affects the accuracy of a photo-imaging process, known as photomasking or photolithography, which entails focusing light on light-sensitive materials to define specific patterns or structures in a layer of an integrated circuit. In this process, the presence of hills and valleys in a layer forces various regions of the layer out of focus, causing photo-imaged features to be smaller or larger than intended. Moreover, hills and valleys can reflect light undesirably onto other regions of a layer and add undesirable features, such as notches, to desired features. These problems can be largely avoided if the layer is sufficiently planar. One process for making surfaces flat or planar is known as chemical-mechanical planarization or polishing. Chemical-mechanical planarization typically entails applying a fluid containing abrasive particles to a surface of an integrated circuit, and polishing the surface with a rotating polishing head. The process is used frequently to planarize the insulative, or dielectric, layers that lie between layers of metal wiring in integrated circuits. These insulative layers, which typically consist of silicon dioxide, are sometimes called intermetal dielectric layers. In conventional integrated-circuit fabrication, planarization of these layers is necessary because each insulative layer tends to follow the hills and valleys of the underlying metal wiring, similar to the way a bed sheet follows the contours of whatever it covers. Thus, fabricators generally deposit an insulative layer much thicker than necessary to cover the metal wiring and then planarize the insulative layer to remove the hills and valleys. Unfortunately, conventional methods of forming these intermetal dielectric layers suffer from at least two problems. First, the process of chemical-mechanical planarization is not only relatively costly but also quite time-consuming. And second, the thickness of these layers generally varies considerably from point to point because of underlying wiring. Occasionally, the thickness variation leaves metal wiring under a layer too close to metal wiring on the layer, encouraging shorting or crosstalking. Crosstalk, a phenomenon that also occurs in telephone systems, occurs when signals from one wire are undesirably transferred or communicated to another nearby wire. 
Accordingly, the art needs fabrication methods that reduce the need to planarize intermetal dielectric layers, that reduce thickness variation in these layers, and that improve their electrical properties generally. Summary of the Invention To address these and other needs, the inventor devised various methods of making dielectric layers on metal layers which reduce the need for chemical-mechanical planarization procedures. Specifically, a first exemplary method of the invention forms a metal layer with a predetermined maximum feature spacing and then uses a TEOS-based (tetraethyl-orthosilicate-based) oxide deposition procedure to form an oxide film having nearly planar or quasi-planar characteristics. The exemplary method executes a CVD (chemical vapor deposition) TEOS oxide procedure to form an oxide layer on a metal layer having a maximum feature spacing of 0.2-0.5 microns. A second exemplary method includes voids within the oxide, or more generally insulative, film to lower its effective dielectric constant and thus improve its ability to prevent shorting and crosstalk between metal wiring. Specifically, the exemplary method uses a TEOS process at a non-conformal rate sufficient to encourage the formation of voids, and then uses the TEOS process at a conformal rate of deposition to seal the voids. More generally, however, the invention uses a non-conformal deposition procedure to encourage formation of voids and then a more conformal deposition to seal the voids. A third exemplary method increases the metal-fill density of metal patterns to facilitate formation of intermetal dielectric layers having more uniform thicknesses. The third exemplary method adds floating metal to open areas in a metal layout and then extends non-floating metal dimensions according to an iterative procedure that entails filling in notches and corners and moving selected edges of the layout. 
Brief Description of the Drawings Figure 1 is a cross-sectional view of a partial integrated-circuit assembly 10 including a substrate 12 and metal wires 14a, 14b, and 14c; Figure 2 is a cross-sectional view of the Figure 1 integrated-circuit assembly after formation of a substantially planar insulative layer 16, including a portion 16a with voids and a portion 16b without voids; Figure 3 is a cross-sectional view of the Figure 2 assembly after a facet etch to improve the planarity of layer 16; Figure 4 is a cross-sectional view of the Figure 3 assembly after formation of metal wires 18a and 18b, and substantially planar insulative layer 20, including a portion 20a with voids and a portion 20b without voids; Figure 5 is a cross-sectional view of a partial integrated-circuit assembly 21 including a substrate 22 and metal wires 24a, 24b, and 24c; Figure 6 is a cross-sectional view of the Figure 5 assembly after formation of an oxide spacer 26 and a substantially planar insulative layer 28, including a portion 28a with voids and a portion 28b without voids; Figure 7 is a cross-sectional view of the Figure 6 assembly after a facet etch to improve the planarity of layer 28; Figure 8 is a cross-sectional view of the Figure 7 assembly after formation of metal wires 30a and 30b, and substantially planar insulative layer 34, including a portion 34a with voids and a portion 34b without voids; Figure 9 is a cross-sectional view of a partial integrated-circuit assembly 35 including a substrate 36 and metal wires 38a, 38b, and 38c; Figure 10 is a cross-sectional view of the Figure 9 assembly after formation of an oxide spacer 40 and a substantially planar insulative layer 42; Figure 11 is a flow chart illustrating an exemplary method of modifying a metal layout to facilitate fabrication of intermetal dielectric layers with more uniform thickness; Figure 12 is a partial top view of a metal layout showing how the exemplary method of Figure 11 adds metal to open areas in a metal layout; Figure 13 is a partial top view of a metal layout showing how the exemplary method of Figure 11 fills notches in a metal layout; Figure 14 is a partial top view of a metal layout showing how the exemplary method of Figure 11 fills corners in a metal layout; Figure 15 is a partial view of a metal layout showing how the exemplary method of Figure 11 fills in between opposing edges of live metal regions in a metal layout; Figure 16 is a partial view of a metal layout showing how the exemplary method of Figure 11 moves edges; Figure 17 is a block diagram of an exemplary computer system 42 for hosting and executing a software implementation of the exemplary pattern-filling method of Figure 11; and Figure 18 is a simplified schematic diagram of an exemplary integrated memory circuit 50 that incorporates one or more nearly planar intermetal dielectric layers and/or metal layers made in accord with exemplary methods of the invention. Description of the Preferred Embodiments The following detailed description, which references and incorporates the above-identified Figures, describes and illustrates specific embodiments of the invention. These embodiments, offered not to limit but only to exemplify and teach the invention, are shown and described in sufficient detail to enable those skilled in the art to implement or practice the invention. 
Thus, where appropriate to avoid obscuring the invention, the description may omit certain information known to those of skill in the art. First Exemplary Method of Forming Nearly Planar Dielectric Films Figures 1-4 show a number of exemplary integrated-circuit assemblies which, taken collectively and sequentially, illustrate an exemplary method of making nearly planar or quasi-planar dielectric films, or layers, within the scope of the present invention. As used herein, a quasi-planar film is globally planar with local nonplanarities having slopes less than or equal to 45 degrees and depths less than the thickness of the next metal layer to be deposited. The local nonplanarities typically occur over the gaps between underlying metal features. The method, as shown in Figure 1, a cross-sectional view, begins with formation of an integrated-circuit assembly or structure 10, which can exist within any integrated circuit, for example, an integrated memory circuit. Assembly 10 includes a substrate 12. The term "substrate," as used herein, encompasses a semiconductor wafer as well as structures having one or more insulative, semi-insulative, conductive, or semiconductive layers and materials. Thus, for example, the term embraces silicon-on-insulator, silicon-on-sapphire, and other advanced structures. Substrate 12 includes three representative wires or conductive structures 14a, 14b, and 14c, with a maximum (or average) feature spacing 14s. In the exemplary embodiment, wires 14a-14c are approximately 3000-6000 angstroms thick and comprise metals, such as aluminum, gold, or silver, and nonmetals, such as heavily doped polysilicon. Spacing 14s, in the exemplary embodiment, is 0.3 microns. Wires 14a-14c can be formed using any number of methods, for example, photolithography and dry etching. To avoid increasing feature spacing during dry etching, the exemplary embodiment forms a lateral-etch-resistant layer, that is, a layer resistant to lateral etching, on a metal layer before etching. Examples of suitable layers include a TEOS oxide-nitride layer. Alternatively, one can add extensive serif features to the metal mask layout to avoid large open areas, especially to reduce the diagonal distance between features. Figure 2 shows that the exemplary method next entails forming an insulative layer 16 over substrate 12 and wires 14a-14c. Layer 16 has a thickness 16t of, for example, 6000 angstroms, and includes two layers or sublayers 16a and 16b. Sublayer 16a includes a number of voids, particularly voids 17 between wires 14a and 14b, and between wires 14b and 14c, to lower its effective dielectric constant. Sublayer 16b is either substantially voidless or includes substantially fewer voids than sublayer 16a. The presence of voids in sublayer 16a reduces lateral electrical coupling between adjacent metal features, for example, between wires 14a and 14b and between wires 14a-14c and any overlying conductive structures. The exemplary method forms layer 16 using a combination of non-conformal and conformal oxide depositions. In particular, it uses a CVD TEOS (chemical vapor deposition tetraethyl-orthosilicate) or PECVD TEOS (plasma-enhanced CVD TEOS) oxide deposition process at a non-conformal deposition rate to form void-filled sublayer 16a and then lowers the TEOS deposition rate to a conformal rate to form substantially voidless sublayer 16b. 
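For concreteness, the two-stage deposition just described can be expressed as a simple recipe structure. This is only an illustrative sketch in C: the structure, its field names, and the even 3000/3000-angstrom split of the 6000-angstrom layer 16 are assumptions for illustration, not process settings taken from the patent, and real deposition parameters are tool- and process-specific.

```c
/* Illustrative model of the two-stage TEOS deposition described above.
 * All names and the thickness split are assumptions; only the two-rate
 * idea and the 6000-angstrom total come from the exemplary embodiment. */
#include <stdio.h>

struct teos_stage {
    const char *forms;      /* which sublayer this stage produces            */
    int conformal;          /* 0 = non-conformal (void-forming) rate,
                               1 = conformal (void-sealing) rate             */
    double thickness_ang;   /* assumed thickness contribution, in angstroms  */
};

int main(void) {
    /* Stage 1: a high, non-conformal rate pinches off the gaps between
     * wires 14a-14c, trapping voids 17 (void-filled sublayer 16a). */
    struct teos_stage stage1 = { "sublayer 16a (void-filled)", 0, 3000.0 };
    /* Stage 2: a lowered, conformal rate seals the voids and builds the
     * substantially voidless sublayer 16b. */
    struct teos_stage stage2 = { "sublayer 16b (voidless)",    1, 3000.0 };

    printf("layer 16 total: %.0f angstroms\n",
           stage1.thickness_ang + stage2.thickness_ang);  /* 6000 */
    return 0;
}
```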
Figure 3 shows that after forming sublayer 16b, which includes some level of nonplanarity, the exemplary method facet etches the sublayer at an angle of about 45 degrees to improve its global planarity. (That layer 16b has undergone further processing is highlighted by its new reference numeral 16b'.) The facet etch reduces or smooths any sharp trenches in regions overlying gaps between metal features, such as wires 14a-14c. As used herein, the term "facet etch" refers to any etch process that etches substantially faster in the horizontal direction than in the vertical direction. Thus, for example, the term includes an angled sputter etch or reactive-ion etch. To optimize the slopes of any vias, one can perform the facet etch before via printing. More specifically, one can facet etch after etching any necessary vias and stripping photoresist to produce vias having greater slope and smoothness. Figure 4 shows the results of forming a second metallization level according to the procedure outlined in Figures 1-3. In brief, this entails forming conductive structures 18a and 18b on insulative sublayer 16b' and forming an insulative layer 20 on sublayer 16b' and conductive structures 18a and 18b. Insulative layer 20, like insulative layer 16, includes void-filled sublayer 20a and substantially void-free sublayer 20b'. Sublayer 20a includes one or more voids 19 between conductive structures 18a and 18b. Sublayer 20b' was facet etched to improve its planarity. Layer 20 has a thickness 20t of, for example, 3000-6000 angstroms. Second Exemplary Method of Forming Nearly Planar Dielectric Films Figures 5-8 show a number of exemplary integrated-circuit assemblies which, taken collectively and sequentially, illustrate a second exemplary method of making nearly planar or quasi-planar dielectric layers within the scope of the present invention. The second method is particularly applicable to maximum metal feature spacing greater than about 0.3 microns or oxide thickness less than 6000 angstroms to allow for shallow via formation, that is, via depths less than about 4000 angstroms. More particularly, Figure 5 shows that the method begins with formation of an integrated-circuit assembly or structure 21, which, like assembly 10 in Figure 1, can exist within any integrated circuit. Assembly 21 includes a substrate 22 which supports three representative wires or conductive structures 24a, 24b, and 24c, with a desired feature spacing 24s. In the exemplary embodiment, spacing 24s is greater than 0.3 microns. Some embodiments set a minimum spacing of 0.17 microns. However, the present invention is not limited to any particular spacing. Figure 6 shows that the exemplary method next entails forming an insulative spacer 26 and an insulative layer 28. Insulative spacer 26, which consists of silicon dioxide, for example, lies over portions of substrate 22 adjacent to wires 24a-24c to reduce the effective separation of wires 24a-24c. The exemplary method uses a TEOS oxide deposition and subsequent etching to form spacers 26. Insulative layer 28 has a thickness 28t of, for example, 4000 angstroms, and includes two sublayers 28a and 28b, analogous to sublayers 16a and 16b in the first embodiment. Specifically, sublayer 28a includes a number of voids 27 between the wires to lower its effective dielectric constant, and sublayer 28b is either substantially voidless or includes substantially fewer voids than sublayer 28a. A two-stage TEOS oxide deposition process, similar to that used in the first embodiment, is used to form layer 28. 
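The effect of sidewall spacers such as spacer 26 on effective feature spacing reduces to simple arithmetic: a spacer formed on each of two facing sidewalls narrows the gap by twice the spacer width. The small C sketch below makes that explicit; the function name and the sample values (a 5000-angstrom gap with 1000-angstrom spacers, matching the figures given for the third exemplary method described later) are illustrative assumptions, not values mandated by the patent.

```c
/* Effective gap remaining after a sidewall spacer is formed on each of
 * the two facing metal edges.  Names and values are illustrative only. */
#include <stdio.h>

static double effective_spacing_ang(double feature_spacing_ang,
                                    double spacer_width_ang)
{
    /* One spacer per facing sidewall, so the gap shrinks by 2x the width. */
    return feature_spacing_ang - 2.0 * spacer_width_ang;
}

int main(void)
{
    /* A 0.5-micron (5000-angstrom) spacing with 1000-angstrom spacers
     * leaves a 3000-angstrom effective gap. */
    printf("%.0f angstroms\n", effective_spacing_ang(5000.0, 1000.0));
    return 0;
}
```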
Figure 7 shows that after forming sublayer 28b, which includes some level of nonplanarity, the exemplary method facet etches the sublayer at an angle of about 45 degrees to improve its global planarity. Figure 8 shows the results of forming a second metallization level according to the procedure outlined in Figures 5-7. This entails forming conductive structures 30a and 30b on insulative sublayer 28b' and forming an insulative spacer 32 and an insulative layer 34, which, like insulative layer 28, includes void-filled sublayer 34a and substantially void-free sublayer 34b'. Sublayer 34a includes voids 31 between conductive structures 30a and 30b, and sublayer 34b' is facet etched to improve its planarity. Third Exemplary Method of Forming Nearly Planar Dielectric Films Figures 9 and 10 show a number of exemplary integrated-circuit assemblies which, taken collectively and sequentially, illustrate a third exemplary method of making nearly planar or quasi-planar dielectric layers within the scope of the present invention. In contrast to the first and second embodiments, the third exemplary embodiment is intended for forming insulative films on metal layers with maximum feature spacing up to about 0.5 microns. Figure 9 shows that the method begins with formation of an integrated-circuit assembly or structure 35, which, like assembly 10 in Figure 1 and assembly 21 in Figure 5, can exist within any integrated circuit. Assembly 35 includes a substrate 36 which supports three representative wires or conductive structures 38a, 38b, and 38c, with a desired feature spacing 38s of about 0.5 microns. Figure 10 shows the results of forming oxide spacers 40 and an insulative layer 42. The exemplary embodiment forms one or more oxide spacers 40, each of which is about 1000 angstroms wide, and thus reduces the effective spacing between conductors 38a-38c by 2000 angstroms. Forming insulative layer 42 entails executing a flow-fill procedure, such as TRIKON-200 by Trikon Technologies, Inc. To obtain global and local planarity, one can reduce the maximum feature space by using an oxide/TEOS spacer as taught in the second exemplary method, or by enlarging the metal feature, or by adding floating metal between the metal features. Exemplary Method of Promoting Uniform Thickness of Intermetal Dielectric Layers To facilitate the formation of more uniformly thick inter-metal dielectric layers, such as those described above, the inventor developed specific methods (and related computer software) for increasing the pattern density of metal layouts. The methods and associated software take a given metal layout and modify, or fill, open areas of the layout to increase pattern density and thus promote uniform thickness or reduce thickness variation across dielectric layers formed on metal layers based on the layouts. These methods and software can thus be used, for example, to facilitate formation of the conductive structures shown in Figures 1, 5, and 9. The exemplary method generally entails iteratively measuring a given layout, adding floating metal to fill large open areas in the layout, and extending or filling out existing metal areas to meet maximum feature spacing, or gap, criteria. Figure 11 shows a flow chart of the exemplary method, which is suitable for implementation as a computer-executable program. Specifically, the flow chart includes a number of process or decision blocks 110, 120, 130, and 140. The exemplary method begins at process block 110 which entails measuring a given layout. 
This entails determining open (unmetallized or nonconductive) areas large enough to be filled with floating metal and identifying live metal areas that require additional metal to obtain desired spacing. Floating metal is metal that is not coupled to a signal path or component, whereas live metal is metal that is coupled to a signal path or component. After executing block 110, the exemplary method proceeds to block 120 which entails adding floating metal to any large areas identified in block 110. To illustrate, Figure 12 shows a hypothetical layout having a live metal region 200 with open area 210. In general, if dimension A is greater than the sum of dimension S1, dimension S2, and L (the maximum feature spacing criterion), the exemplary method adds floating metal, such as floating metal region 220. After adding floating metal, the exemplary method adds live metal as indicated in block 130 of Figure 11. Figure 12 is again instructive of the exemplary method. If dimension B is less than the sum of dimension S1, dimension S2, and L, the exemplary method adds metal as indicated by added active metal region 230. More particularly, the exemplary method follows an iterative process for adding live (or non-floating) metal, as indicated by blocks 130a-130g. Block 130a entails filling notches in the current live metal. Figure 13 shows a live metal region 300 of a hypothetical metal layout having a notch 310. Included within notch 310 are a series of iteratively added live metal regions 320-325. The amount of metal added at each iteration can be selected using a minimum surface-area criterion or computed dynamically each iteration. The exemplary embodiment repeatedly adds metal to the notch until it is filled, before advancing to block 130b. However, other embodiments can advance to block 130b before the notch is filled, relying on subsequent trips or iterations through the first loop in the flowchart to complete filling of the notch. Block 130b entails filling in corners in the current live metal, meaning the live metal after filling notches. Figure 14 illustrates a live metal region 400 having a corner 410 and added L-shaped live metal regions 420-423 and a rectangular live metal region 424. (Other embodiments add other shapes of live metal regions.) The amount of metal added at each iteration can be selected using a minimum surface-area or single-dimensional criterion or computed dynamically each iteration. The exemplary embodiment repeatedly adds metal to the corner until it is filled, before advancing to block 130c. However, other embodiments can advance to block 130c before the corner is filled, relying on subsequent trips through the inner loop to complete filling of the corner. Block 130c entails filling in between opposing edges of adjacent live metal regions to achieve a desired spacing, such as a maximum desired spacing L. Figure 15 shows live metal regions 510 and 520, which have respective opposing edges 510a and 520a. The exemplary method entails adding live metal regions, such as live metal regions 521-523, to one edge, such as edge 520a, to achieve the maximum desired spacing L. However, other embodiments add live metal to both of the opposing edges to achieve the desired spacing. Still other embodiments look at the lengths of the opposing edges and use one or both of the lengths to determine one or more dimensions of the added live metal regions. 
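The fill decisions of blocks 110-120 reduce to a simple span test: an open span wider than S1 + S2 + L can host an isolated floating-metal region, while a narrower span that still violates L is closed by extending live metal. The C sketch below makes that test explicit; the enum, the simplified one-dimensional geometry, and the sample clearance values are assumptions for illustration only, not the patent's actual software.

```c
/* One-dimensional sketch of the open-area fill test described above.
 * s1 and s2 are the clearances to the neighboring live metal; l_max is
 * the maximum feature spacing criterion L.  All values are hypothetical. */
#include <stdio.h>

enum fill_action { FILL_FLOATING, EXTEND_LIVE, NO_FILL };

static enum fill_action classify_span(double span, double s1, double s2,
                                      double l_max)
{
    if (span > s1 + s2 + l_max)
        return FILL_FLOATING;  /* room for an isolated floating-metal region */
    if (span > l_max)
        return EXTEND_LIVE;    /* too tight for floating metal; grow live metal */
    return NO_FILL;            /* span already satisfies the spacing criterion */
}

int main(void)
{
    /* Hypothetical numbers: clearances S1 = S2 = 0.5, maximum spacing L = 2.0. */
    printf("%d\n", classify_span(4.0, 0.5, 0.5, 2.0));  /* FILL_FLOATING (0) */
    printf("%d\n", classify_span(2.5, 0.5, 0.5, 2.0));  /* EXTEND_LIVE   (1) */
    printf("%d\n", classify_span(1.5, 0.5, 0.5, 2.0));  /* NO_FILL       (2) */
    return 0;
}
```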
After filling in between opposing edges of existing live metal regions, the exemplary method advances to decision block 130d in Figure 11. This block entails determining whether more live metal can be added. More precisely, this entails measuring the layout as modified by the live metal already added and determining whether there are any adjacent regions that violate the desired maximum spacing criteria. (Note that some exemplary embodiments include more than one maximum spacing criterion to account for areas where capacitive effects or crosstalk issues are of greater importance than others.) If the determination indicates that more metal can be added, execution proceeds back to block 130a to fill in remaining notches, and so forth. If the determination indicates that no more live metal can be added to satisfy the maximum spacing criteria, execution proceeds to block 130e in Figure 11. Block 130e entails moving (or redefining) one or more edges (or portions of edges) of live metal regions in the modified layout specification. To illustrate, Figure 16 shows live metal regions 610 and 620, which have respective edges 610a and 620a. It also shows the addition of live metal region 630 to edge 610a, which effectively extends the edge. Similarly, edge 620a has been extended with the iterative addition of live metal regions 631 and 632. The additions can be made iteratively using a dynamic or static step size, or all at once by computing the size of an optimal addition to each edge. Exemplary execution then proceeds to decision block 130f. In decision block 130f, the exemplary method decides again whether more metal can be added to the layout. If more metal can be added, the exemplary method repeats execution of process blocks 130a-130e. However, if no metal can be added, the method proceeds to process block 140 to output the modified layout for use in a fabrication process. Although not shown explicitly in the exemplary flow chart in Figure 11, the exemplary method performs data compaction to minimize or reduce the amount of layout data carried forward from iteration to iteration. Data compaction reduces the number of cells which define the circuit associated with the metal layout and the computing power necessary to create the metal layout. The exemplary compaction scheme flattens all array placements into single-instance placements. For example, a single array placement of a cell incorporating a 3x4 matrix flattens to 12 instances of a single cell. It also flattens specific cells, such as array core cells, vias, or contacts, based on layout or user settings. Additionally, it flattens cells which contain less than a predetermined number of shapes regardless of any other effects. For example, one can flatten cells having fewer than 10, 20, or 40 shapes. Lastly, the exemplary compaction scheme attempts to merge shapes to minimize overlapping shapes and redundant data. The appropriate or optimum degree of flattening depends largely on the processing power and memory capabilities of the computer executing the exemplary method. Faster computers with more core memory and swap space can handle larger numbers of shapes per cell and thus have less need for flattening than slower computers with less core memory and swap space. In the extreme, a complete circuit layout can be flattened into one cell. 
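The array-flattening step of the compaction scheme is straightforward to sketch: one array placement of a cell in an R-by-C matrix becomes R*C single-instance placements, as in the 3x4-to-12 example above. The data structures and pitch parameters below are hypothetical stand-ins for real layout-database records, offered only to illustrate the transformation.

```c
/* Sketch of array flattening: expand one array placement into
 * rows * cols single-instance placements.  Structures are assumed. */
#include <stdio.h>

struct placement { int cell_id; double x, y; };

static int flatten_array(struct placement *out, int cell_id,
                         double x0, double y0, int rows, int cols,
                         double pitch_x, double pitch_y)
{
    int n = 0;
    for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++) {
            out[n].cell_id = cell_id;
            out[n].x = x0 + c * pitch_x;  /* column offset */
            out[n].y = y0 + r * pitch_y;  /* row offset    */
            n++;
        }
    return n;  /* rows * cols single-instance placements */
}

int main(void)
{
    struct placement flat[12];
    int n = flatten_array(flat, 7, 0.0, 0.0, 3, 4, 10.0, 20.0);
    printf("flattened to %d instances\n", n);  /* 3 x 4 -> 12 */
    return 0;
}
```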
If a given layout design is not a single flat list of shapes but includes two or more cells placed into each other as instances, additional precautions should be taken to reduce the risk of introducing unintended shorts into the layout during the pattern-fill process. In the exemplary embodiment, this entails managing the hierarchy of cells. The exemplary embodiment implements a hierarchy management process which recognizes that each cell has an associated fill area that will not change throughout the metal-fill process. The exemplary management process entails executing the following steps from the bottom up until all cell dependencies are resolved. For each instance in each cell, the process creates a temporary unique copy of the cell associated with a given instance. After this, the process copies metal from other cells into the cell being examined if it falls into the fill area. The process then copies metal from other cells into the cell if the metal falls into a ring around the fill area. Next, the process identifies, extracts, and marks conflict areas. This exemplary pattern-filling method and other simpler or more complex methods embodying one or more filling techniques of the exemplary embodiment can be used in combination with the methods of making nearly planar intermetal dielectric layers described using Figures 1-10. More precisely, one can use a pattern-filling method according to the invention to define a layout for a particular metal layer, form a metal layer based on the layout, and then form a nearly planar intermetal dielectric layer according to the invention on the metal layer. The combination of these methods promises to yield not only a nearly planar dielectric layer that reduces or avoids the need for chemical-mechanical planarization, but also a dielectric layer with less thickness deviation because of the adjusted pattern fill density of the underlying metal layer. Exemplary Computer System Incorporating Pattern-Filling Method Figure 17 shows an exemplary computer system or workstation 42 for hosting and executing a software implementation of the exemplary pattern-filling method. The most pertinent features of system 42 include a processor 44, a local memory 45, and a data-storage device 46. Additionally, system 42 includes display devices 47 and user-interface devices 48. Some embodiments use distributed processors or parallel processors, and other embodiments use one or more of the following data-storage devices: a read-only memory (ROM), a random-access memory (RAM), an electrically erasable and programmable read-only memory (EEPROM), an optical disk, or a floppy disk. Exemplary display devices include a color monitor, and exemplary user-interface devices include a keyboard, mouse, joystick, or microphone. Thus, the invention is not limited to any genus or species of computerized platforms. Data-storage device 46 includes layout-development software 46a, pattern-filling software 46b, an exemplary input metal layout 46c, and an exemplary output metal layout 46d. (Software 46a and 46b can be installed on system 42 separately or in combination through a network download or through a computer-readable medium, such as an optical or magnetic disc, or through other software transfer methods.) Exemplary storage devices include hard disk drives, optical disk drives, or floppy disk drives. In the exemplary embodiment, software 46b is an add-on tool to layout-development software 46a and layout 46c was developed using software 46a. 
However, in other embodiments, software 46b operates as a separate application program and layout 46c was developed by non-resident layout-development software. General examples of suitable layout-development software are available from Cadence and Mentor Graphics. Thus, the invention is not limited to any particular genus or species of layout-development software. Exemplary Integrated Memory Circuit Figure 18 shows an exemplary integrated memory circuit 50 that incorporates one or more nearly planar intermetal dielectric layers and/or metal layers within the scope of the present invention. One or more memory circuits resembling circuit 50 can be used in a variety of computer or computerized systems, such as system 42 of Figure 17. Memory circuit 50, which operates according to well-known and understood principles, is generally coupled to a processor (not shown) to form a computer system. More particularly, circuit 50 includes a memory array 52, which comprises a number of memory cells 53a, 53b, 53c, and 53d; a column address decoder 54 and a row address decoder 55; bit lines 56a and 56b; word lines 57a and 57b; and voltage-sense-amplifier circuit 58 coupled in conventional fashion to bit lines 56a and 56b. (For clarity, Figure 18 omits many conventional elements of a memory circuit.) Conclusion In furtherance of the art, the inventor has presented several methods for making nearly planar intermetal dielectric layers without the use of chemical-mechanical planarization. Additionally, the inventor has presented a method of modifying metal layouts to facilitate formation of dielectric films with more uniform thickness. These methods of modifying metal layouts and making dielectric layers can be used in sequence to yield nearly planar intermetal dielectric layers with more uniform thickness. The embodiments described above are intended only to illustrate and teach one or more ways of practicing or implementing the present invention, not to restrict its breadth or scope. The actual scope of the invention, which embraces all ways of practicing or implementing the invention, is defined only by the following claims and their equivalents. |
Aspects of the embodiments are directed to systems, methods, and computer program products that facilitate downstream port operation in a separate reference clock (SRIS) mode with independent spread spectrum clocking (SSC). The system may determine that the downstream port supports one or more SRIS selection mechanisms; determine a system clock configuration from the downstream port to a corresponding upstream port, the corresponding upstream port being connected to the downstream port through a PCIe-compliant link; set an SRIS mode in the downstream port; and transmit data from the downstream port across the link using the determined system clock configuration. |
1. A device comprising: a port for coupling the device to another device via a link; and control logic for: determining a timing architecture supported by the other device; and setting, in a Peripheral Component Interconnect Express (PCIe) link control register, the timing architecture to be used on the link, wherein the device is to send information via the link to the other device using the set timing architecture.2. The device of claim 1, wherein the timing architecture comprises a separate reference clock (SRIS) architecture with independent spread spectrum clocking or a non-SRIS architecture.3. The device of claim 1 or 2, wherein the control logic is configured to: set the timing architecture to a separate reference clock (SRIS) architecture with independent spread spectrum clocking by setting a bit in the PCIe link control register to 1; or set the timing architecture to a non-SRIS architecture by setting the bit in the PCIe link control register to 0.4. The device of claim 3, wherein the control logic is to set the timing architecture by setting bit 12 in the PCIe link control register.5. The device of claim 3, wherein the control logic is to set the timing architecture by setting another bit in the PCIe link control register.6. The device of any one of claims 1 to 5, wherein the control logic is to use a training set to set the timing architecture.7. The device of any one of claims 1 to 6, wherein the control logic is configured to determine the timing architecture supported by the other device based on a PCIe Link Capabilities 2 register of the other device.8. The device of claim 7, wherein the control logic is to determine the timing architecture supported by the other device based on bits 9-22 in the PCIe Link Capabilities 2 register.9. The device of any one of claims 1 to 8, wherein the control logic is further configured to determine a data rate supported by the other device based on a PCIe Link Capabilities 2 register.10. The device of any one of claims 1 to 9, wherein the control logic is operable to determine the data rates supported by the other device based on bits 9-15 in the PCIe Link Capabilities 2 register.11. A method comprising: determining, by a first device, a timing architecture supported by a second device; setting, in a Peripheral Component Interconnect Express (PCIe) link control register, a timing architecture to be used on the link between the first device and the second device; and sending information from the first device to the second device via the link using the set timing architecture.12. The method of claim 11, wherein the timing architecture comprises a separate reference clock (SRIS) architecture with independent spread spectrum clocking or a non-SRIS architecture.13. The method of claim 11 or 12, wherein setting the timing architecture comprises setting the timing architecture to a separate reference clock (SRIS) architecture with independent spread spectrum clocking by setting a bit in the PCIe link control register to 1.14. The method of claim 13, wherein the control logic is to set the timing architecture by setting bit 12 in the PCIe link control register.15. The method of claim 13, wherein the control logic is to set the timing architecture by setting another bit in the PCIe link control register.16. The method according to any one of claims 11 to 15, wherein setting the timing architecture comprises setting the timing architecture to a non-SRIS architecture by setting the bit in the PCIe link control register to 0.17. 
The method of claim 11, wherein the control logic is to use a training set to set the timing architecture.18. A system comprising: a host device comprising a first port for supporting a link; and an endpoint device comprising a second port for supporting the link, wherein the host device is coupled to the endpoint device via the link; wherein the host device includes control logic for: determining a timing architecture supported by the endpoint device; and setting, in a Peripheral Component Interconnect Express (PCIe) link control register, a timing architecture to be used on the link, wherein the host device is configured to use the set timing architecture to send information via the link to the endpoint device.19. The system of claim 18, wherein the timing architecture comprises one of a separate reference clock (SRIS) architecture with independent spread spectrum clocking or a non-SRIS architecture.20. The system of claim 18, wherein the control logic is to: set the timing architecture to a separate reference clock (SRIS) architecture with independent spread spectrum clocking by setting a bit in the PCIe link control register to 1; or set the timing architecture to a non-SRIS architecture by setting the bit in the PCIe link control register to 0.21. The system of any one of claims 18 to 20, wherein the control logic is to set the timing architecture by setting bit 12 in the PCIe link control register.22. The system of any one of claims 18 to 20, wherein the control logic is to set the timing architecture by setting another bit in the PCIe link control register.23. The system of any one of claims 18 to 20, wherein the control logic is to use a training set to set the timing architecture.24. A method for operating a downstream port of an upstream component connected to a downstream component via a link, the method comprising: determining a system clock configuration from the downstream port to an upstream port of the downstream component; and sending data from the downstream port to the upstream port across the link using the determined system clock configuration; wherein determining the system clock configuration includes using an out-of-band management interface to determine the system clock configuration, the out-of-band management interface comprising a system management bus.25. The method of claim 24, further comprising setting a separate reference clock (SRIS) mode with independent spread spectrum clocking in the downstream port.26. A system comprising: an upstream component comprising a downstream port; and a downstream component coupled to the upstream component via a link, the downstream component comprising an upstream port; wherein the downstream port includes logic for: determining a system clock configuration from the downstream port to the upstream port; and sending data from the downstream port to the upstream port across the link using the determined system clock configuration; wherein determining the system clock configuration includes using an out-of-band management interface to determine the system clock configuration, the out-of-band management interface comprising a system management bus.27. The system of claim 26, wherein the logic is further to set a separate reference clock (SRIS) mode with independent spread spectrum clocking in the downstream port.28. An apparatus comprising means for performing the steps of the method according to claim 24 or 25.29. 
One or more non-transitory computer-readable storage media comprising instructions which, when executed by a processor, cause the processor to perform the method of claim 24 or 25. |
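A minimal sketch of the claimed selection flow follows, in C. The bit position (bit 12 of the PCIe link control register selecting SRIS versus non-SRIS clocking) is taken from the claims above; the register offsets, the capability bit used for the support check, and the toy configuration-space accessors are all assumptions made for illustration - actual offsets and bit definitions should be taken from the PCIe Base Specification, not from this sketch.

```c
/* Illustrative SRIS selection per the claims above.  Offsets and the
 * support bit are assumptions; only bit 12 of Link Control comes from
 * the claims.  The accessors model a single function's config space. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define LINK_CAP2_OFF  0x2Cu            /* assumed Link Capabilities 2 offset */
#define LINK_CTRL_OFF  0x10u            /* assumed Link Control offset        */
#define SRIS_MODE_BIT  (1u << 12)       /* bit 12 per the claims              */
#define SRIS_SUPPORT   (1u << 9)        /* assumed support bit in Link Cap 2  */

static uint32_t cfg[64];                /* toy stand-in for config space      */
static uint32_t cfg_read32(uint32_t off)              { return cfg[off / 4]; }
static void     cfg_write32(uint32_t off, uint32_t v) { cfg[off / 4] = v; }

static void select_clocking_architecture(void)
{
    /* Determine whether the link partner advertises SRIS support. */
    bool partner_supports_sris = cfg_read32(LINK_CAP2_OFF) & SRIS_SUPPORT;

    uint32_t ctrl = cfg_read32(LINK_CTRL_OFF);
    if (partner_supports_sris)
        ctrl |= SRIS_MODE_BIT;          /* 1 = SRIS clocking architecture */
    else
        ctrl &= ~SRIS_MODE_BIT;         /* 0 = non-SRIS architecture      */
    cfg_write32(LINK_CTRL_OFF, ctrl);
}

int main(void)
{
    cfg[LINK_CAP2_OFF / 4] = SRIS_SUPPORT;   /* pretend SRIS is advertised */
    select_clocking_architecture();
    printf("link control = 0x%08x\n", cfg_read32(LINK_CTRL_OFF)); /* bit 12 set */
    return 0;
}
```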
System, method and device for SRIS mode selection for PCIe This application is a divisional application of the patent application of the same name with the application number 201811030425.1, filed on September 5, 2018. Background An interconnect can be used to provide communication between different devices within a system. One typical communication protocol used for interconnection between devices in a computer system is the Peripheral Component Interconnect Express (PCI Express™ (PCIe™)) protocol. This communication protocol is an example of a load/store input/output (I/O) interconnect system. Communication between devices is typically performed serially at very high speeds according to this protocol. Devices may be connected across various numbers of data links, each data link comprising multiple data lanes. Upstream and downstream devices undergo link training upon initialization to optimize data transmission across the various links and lanes. Description of Drawings Figure 1 illustrates an embodiment of a block diagram of a computing system including a multi-core processor. Figure 2 is a schematic diagram of an example Peripheral Component Interconnect Express (PCIe) link architecture according to an embodiment of the disclosure. Figure 3 is a schematic illustration of a link capability register including support for SRIS mode selection mechanism bits according to an embodiment of the disclosure. Figure 4 is a schematic illustration of a link control register including bits to support an SRIS mode selection mechanism, according to an embodiment of the disclosure. Figure 5 is a process flow diagram of a PCIe-compliant port functioning based on the SRIS mode selection mechanism according to an embodiment of the disclosure. Figure 6 illustrates an embodiment of a computing system including an interconnect fabric. Figure 7 illustrates an embodiment of an interconnect architecture including a layered stack. Figure 8 illustrates an embodiment of a request or packet to be generated or received within the interconnection fabric. Figure 9 shows an embodiment of a transmitter and receiver pair of an interconnection architecture. Figure 10 shows another embodiment of a block diagram of a computing system including a processor. Figure 11 illustrates an embodiment of a block diagram of a computing system including multiple processor sockets. Figure 12 illustrates another embodiment of a block diagram of a computing system. Detailed Description Numerous specific details are set forth in the following description, for example, specific types of processors and system configurations, specific hardware structures, specific architectural and microarchitectural details, specific register configurations, specific instruction types, specific system components, specific measurements/heights, specific examples of processor pipeline stages and operations, etc., in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that these specific details need not be employed to practice the present invention. 
In other instances, well-known components or methods have not been described in detail so as not to unnecessarily obscure the invention: for example, specific and alternative processor architectures, specific logic circuits/code for described algorithms, specific firmware code, specific interconnect operations, specific logic configurations, specific fabrication techniques and materials, specific compiler implementations, specific algorithmic expressions in code form, specific power-down and gating techniques/logic, and other specific operational details of computer systems. While the following embodiments may be described with reference to energy conservation and energy efficiency in particular integrated circuits (e.g., in computing platforms or microprocessors), other embodiments are applicable to other types of integrated circuits and logic devices. Similar techniques and teachings of the embodiments described herein may be applied to other types of circuits or semiconductor devices that may also benefit from better energy efficiency and energy savings. For example, the disclosed embodiments are not limited to desktop computer systems or Ultrabooks™, and may also be used in other devices such as handheld devices, tablet computers, other thin notebook computers, system-on-chip (SOC) devices, and embedded applications. Some examples of handheld devices include cellular telephones, Internet Protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications typically include microcontrollers, digital signal processors (DSPs), systems-on-chip, network computers (NetPCs), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform the functions and operations taught below. Furthermore, the apparatus, methods, and systems described herein are not limited to physical computing devices, but may also involve software optimizations for energy conservation and efficiency. As will become apparent in the description that follows, embodiments of the methods, apparatus, and systems described herein (whether in reference to hardware, firmware, software, or a combination thereof) are critical to a "green technology" future balanced with performance considerations. As computing systems evolve, the components within them become more complex. Consequently, the complexity of interconnect architectures used for coupling and communication between components is also increasing to meet the bandwidth requirements for optimal component operation. Additionally, different market segments require different aspects of interconnect architectures to meet market demands. For example, servers demand higher performance, while mobile ecosystems are sometimes able to sacrifice overall performance to save power. However, the single purpose of most structures is to provide the highest possible performance and maximum power savings. A number of interconnects are discussed below that would potentially benefit from aspects of the invention described herein. Referring to Figure 1, an embodiment of a block diagram of a computing system including a multi-core processor is depicted. Processor 100 includes any processor or processing device, such as a microprocessor, embedded processor, digital signal processor (DSP), network processor, handheld processor, application processor, coprocessor, system on chip (SOC), or other devices used to execute code. 
In one embodiment, processor 100 includes at least two cores, cores 101 and 102, which may include asymmetric cores or symmetric cores (the illustrated embodiment). However, processor 100 may include any number of processing elements, which may be symmetric or asymmetric.

In one embodiment, a processing element refers to hardware or logic that supports a software thread. Examples of hardware processing elements include: a thread unit, a thread slot, a thread, a process unit, a context, a context unit, a logical processor, a hardware thread, a core, and/or any other element capable of holding a state for a processor, such as an execution state or an architectural state. In other words, in one embodiment, a processing element refers to any hardware capable of being independently associated with code, such as a software thread, an operating system, an application, or other code. A physical processor (or processor socket) typically refers to an integrated circuit, which potentially includes any number of other processing elements, such as cores or hardware threads.

A core often refers to logic located on an integrated circuit capable of maintaining an independent architectural state, wherein each independently maintained architectural state is associated with at least some dedicated execution resources. In contrast to cores, a hardware thread typically refers to any logic located on an integrated circuit capable of maintaining an independent architectural state, wherein the independently maintained architectural states share access to execution resources. As can be seen, when certain resources are shared and others are dedicated to an architectural state, the line between the nomenclature of a hardware thread and a core overlaps. Often, however, a core and a hardware thread are viewed by an operating system as individual logical processors, where the operating system is able to individually schedule operations on each logical processor.

As shown in FIG. 1, physical processor 100 includes two cores, cores 101 and 102. Here, cores 101 and 102 are considered symmetric cores, i.e., cores with the same configurations, functional units, and/or logic. In another embodiment, core 101 includes an out-of-order processor core, while core 102 includes an in-order processor core. However, cores 101 and 102 may be individually selected from any type of core, such as a native core, a software managed core, a core adapted to execute a native instruction set architecture (ISA), a core adapted to execute a translated instruction set architecture (ISA), a co-designed core, or other known core. In a heterogeneous core environment (i.e., asymmetric cores), some form of translation (e.g., binary translation) may be utilized to schedule or execute code on one or both cores. Yet to further the discussion, the functional units illustrated in core 101 are described in further detail below, as the units in core 102 operate in a similar manner in the depicted embodiment.

As depicted, core 101 includes two hardware threads 101a and 101b, which may also be referred to as hardware thread slots 101a and 101b. Therefore, in one embodiment, a software entity, such as an operating system, potentially views processor 100 as four separate processors, i.e., four logical processors or processing elements capable of executing four software threads concurrently.
As alluded to above, a first thread is associated with architectural state registers 101a, a second thread is associated with architectural state registers 101b, a third thread may be associated with architectural state registers 102a, and a fourth thread may be associated with architectural state registers 102b. Here, each of the architectural state registers (101a, 101b, 102a, and 102b) may be referred to as a processing element, a thread slot, or a thread unit, as described above. As illustrated, architectural state registers 101a are replicated in architectural state registers 101b, so individual architectural states/contexts are capable of being stored for logical processor 101a and logical processor 101b. In core 101, other smaller resources, such as instruction pointers and renaming logic in allocator and renamer block 130, may also be replicated for threads 101a and 101b. Some resources, such as reorder buffers in reorder/retirement unit 135, the instruction translation buffer (I-TLB) 120, load/store buffers, and queues, may be shared through partitioning. Other resources, such as general purpose internal registers, page-table base register(s), the low-level data cache and data-TLB 115, execution unit(s) 140, and portions of out-of-order unit 135, are potentially fully shared.

Processor 100 often includes other resources, which may be fully shared, shared through partitioning, or dedicated by/to processing elements. In FIG. 1, an embodiment of a purely exemplary processor with illustrative logical units/resources of a processor is illustrated. Note that a processor may include, or omit, any of these functional units, as well as include any other known functional units, logic, or firmware not depicted. As illustrated, core 101 includes a simplified, representative out-of-order (OOO) processor core. However, an in-order processor may be utilized in different embodiments. The OOO core includes a branch target buffer 120 to predict branches to be executed/taken and an instruction translation buffer (I-TLB) 120 to store address translation entries for instructions.

Core 101 further includes decode module 125 coupled to fetch unit 120 to decode fetched elements. Fetch logic, in one embodiment, includes individual sequencers associated with thread slots 101a, 101b, respectively. Usually, core 101 is associated with a first ISA, which defines/specifies instructions executable on processor 100. Often, machine code instructions that are part of the first ISA include a portion of the instruction (referred to as an opcode), which references/specifies an instruction or operation to be performed. Decode logic 125 includes circuitry that recognizes these instructions from their opcodes and passes the decoded instructions on in the pipeline for processing as defined by the first ISA. For example, as discussed in more detail below, decoder 125, in one embodiment, includes logic designed or adapted to recognize specific instructions, such as transactional instructions. As a result of the recognition by decoder 125, the architecture or core 101 takes specific, predefined actions to perform tasks associated with the appropriate instruction. It is important to note that any of the tasks, blocks, operations, and methods described herein may be performed in response to a single or multiple instructions, some of which may be new instructions or legacy instructions. Note that in one embodiment, decoder 126 recognizes the same ISA (or a subset thereof).
Alternatively, in a heterogeneous core environment, decoder 126 recognizes a second ISA (either a subset of the first ISA or a distinct ISA).

In one example, allocator and renamer block 130 includes an allocator to reserve resources, such as register files to store instruction processing results. However, threads 101a and 101b are potentially capable of out-of-order execution, in which case allocator and renamer block 130 also reserves other resources, such as reorder buffers to track instruction results. Unit 130 may also include a register renamer to rename program/instruction reference registers to other registers internal to processor 100. Reorder/retirement unit 135 includes components, such as the reorder buffers mentioned above, load buffers, and store buffers, to support out-of-order execution and later in-order retirement of instructions executed out of order.

Scheduler and execution unit(s) block 140, in one embodiment, includes a scheduler unit to schedule instructions/operations on execution units. For example, a floating point instruction is scheduled on a port of an execution unit that has an available floating point execution unit. Register files associated with the execution units are also included to store instruction processing results. Exemplary execution units include a floating point execution unit, an integer execution unit, a jump execution unit, a load execution unit, a store execution unit, and other known execution units.

A lower-level data cache and data translation buffer (D-TLB) 150 are coupled to execution unit(s) 140. The data cache is to store recently used/operated-on elements, such as data operands, which are potentially held in memory coherency states. The D-TLB is to store recent virtual/linear to physical address translations. As a specific example, a processor may include a page table structure to break physical memory into a plurality of virtual pages.

Here, cores 101 and 102 share access to a higher-level or further-out cache, such as a second level cache associated with on-chip interface 110. Note that higher-level or further-out refers to cache levels increasing or getting further away from the execution unit(s). In one embodiment, the higher-level cache is a last-level data cache, i.e., the last cache in the memory hierarchy on processor 100, such as a second or third level data cache. However, the higher-level cache is not so limited, as it may be associated with or include an instruction cache. A trace cache, a type of instruction cache, may instead be coupled after decoder 125 to store recently decoded traces. Here, an instruction potentially refers to a macro-instruction (i.e., a general instruction recognized by the decoders), which may decode into a number of micro-instructions (micro-operations).

In the depicted configuration, processor 100 also includes on-chip interface module 110. Historically, a memory controller, which is described in more detail below, has been included in a computing system external to processor 100. In this scenario, on-chip interface 110 is to communicate with devices external to processor 100, such as system memory 175, a chipset (often including a memory controller hub to connect to memory 175 and an I/O controller hub to connect to peripheral devices), a memory controller hub, a north bridge, or other integrated circuit.
In this scenario, bus 105 may include any known interconnect, such as a multi-drop bus, a point-to-point interconnect, a serial interconnect, a parallel bus, a coherent (e.g., cache coherent) bus, a layered protocol architecture, a differential bus, and a GTL bus.

Memory 175 may be dedicated to processor 100 or shared with other devices in a system. Common examples of types of memory 175 include DRAM, SRAM, non-volatile memory (NV memory), and other known storage devices. Note that device 180 may include a graphics accelerator, processor, or card coupled to a memory controller hub, a data storage device coupled to an I/O controller hub, a wireless transceiver, a flash device, an audio controller, a network controller, or other known device.

Recently, however, as more logic and devices are being integrated on a single die (e.g., SOC), each of these devices may be incorporated on processor 100. For example, in one embodiment, a memory controller hub is on the same package and/or die as processor 100. Here, a portion of the core (an on-core portion) 110 includes one or more controllers for interfacing with other devices such as memory 175 or graphics device 180. The configuration including an interconnect and controllers for interfacing with such devices is often referred to as an on-core (or un-core) configuration. As an example, on-chip interface 110 includes a ring interconnect for on-chip communication and a high-speed serial point-to-point link 105 for off-chip communication. Yet, in a SOC environment, even more devices (e.g., a network interface, coprocessors, memory 175, graphics processor 180, and any other known computer devices/interfaces) may be integrated on a single die or integrated circuit to provide a small form factor with high functionality and low power consumption.

In one embodiment, processor 100 is capable of executing compiler, optimization, and/or translator code 177 to compile, translate, and/or optimize application code 176 to support the apparatus and methods described herein or to interface therewith. A compiler often includes a program or set of programs to translate source text/code into target text/code. Usually, compilation of program/application code with a compiler is done in multiple phases and passes to transform high-level programming language code into low-level machine or assembly language code. Yet, single-pass compilers may still be utilized for simple compilation. A compiler may utilize any known compilation technique and perform any known compiler operations, such as lexical analysis, preprocessing, parsing, semantic analysis, code generation, code transformation, and code optimization.

Larger compilers often include multiple phases, but most often these phases are included within two general phases: (1) a front end, i.e., generally where syntactic processing, semantic processing, and some transformation/optimization may take place, and (2) a back end, i.e., generally where analysis, transformations, optimizations, and code generation take place. Some compilers refer to a middle end, which illustrates the blurring of the delineation between a compiler's front end and back end. As a result, reference to insertion, association, generation, or other operation of a compiler may take place in any of the aforementioned phases or passes, as well as any other known phases or passes of a compiler. As an illustrative example, a compiler potentially inserts operations, calls, functions, etc.
in one or more phases of compilation, such as insertion of calls/operations in a front-end phase of compilation and then transformation of the calls/operations into lower-level code during a transformation phase. Note that during dynamic compilation, compiler code or dynamic optimization code may insert such operations/calls, as well as optimize the code for execution during runtime. As a specific illustrative example, binary code (already compiled code) may be dynamically optimized during runtime. Here, the program code may include the dynamic optimization code, the binary code, or a combination thereof.

Similar to a compiler, a translator, such as a binary translator, translates code either statically or dynamically to optimize and/or translate code. Therefore, reference to execution of code, application code, program code, or other software environment may refer to: (1) execution of a compiler program(s), optimization code optimizer, or translator, either dynamically or statically, to compile program code, maintain software structures, perform other operations, optimize code, or translate code; (2) execution of main program code including operations/calls, such as application code that has been optimized/compiled; (3) execution of other program code, such as libraries, associated with the main program code to maintain software structures, perform other software-related operations, or optimize code; or (4) a combination thereof.

PCI Express (PCIe) supports multiple clocking architectures. The essential difference between these clocking architectures is whether the same reference clock is provided to both components on a link ("common" clocking), in which case it is usually irrelevant whether the clock is "spread"; or whether there is no shared reference, in which case it matters greatly whether the clock is spread. The latter mode is called Separate Reference clock with Independent Spread Spectrum Clocking (SSC), or SRIS. When SRIS was originally defined, the choice of SRIS or non-SRIS mode operation was implementation specific. However, this has proven to be a poor fit for the way platform and silicon vendors want to implement SRIS. This disclosure provides techniques for enabling system software to change the mode of operation of a downstream port.

Silicon from different vendors implements different approaches, and thus it is difficult for platform vendors to build systems where, for example, some PCIe connectors connect directly to the root complex and others connect through switches and potentially retimers.

This disclosure defines a register interface for system software to determine and control the operation of the PCIe link and to enable changing the clocking mode.

The advantages of the present disclosure will be readily apparent to those skilled in the art. Among the advantages are mechanisms for reconfiguring hardware. The techniques described herein can be integrated into the PCIe specification.

FIG. 2 is a schematic diagram of an example Peripheral Component Interconnect Express (PCIe) link architecture 200 according to an embodiment of the disclosure. The PCIe link architecture 200 includes a first component 202, which may be an upstream component, a root complex, or a PCIe protocol compliant switch. The first component 202 can include a downstream port 210 that facilitates communication across a link 222 (e.g., a PCIe protocol compliant link) with downstream components. The first component 202 can be coupled to a second component 208, which can be a downstream component, an endpoint, or a switch that complies with the PCIe protocol.
In some embodiments, the first component may be linked to one or more intermediate components, e.g., first retimer 204 and second retimer 206.

In an embodiment, the first component 202 may include a downstream port 210 to facilitate downstream communication (e.g., toward the second component 208) with the second component 208 (if directly connected) or with the upstream (pseudo) port 212 of the retimer 204. The second component 208 may include an upstream port 220 to facilitate upstream communication (e.g., toward the first component 202) with the first component 202 (if directly connected) or with the downstream (pseudo) port 218 of the retimer 206.

In the example shown in FIG. 2, first component 202 may be linked to first retimer 204 via first link segment 224. Likewise, first retimer 204 may be linked to second retimer 206 via link segment 226. The second retimer 206 can be linked to the second component 208 by link segment 228. Link segments 224, 226, and 228 may constitute all or a portion of link 222.

Link 222 can facilitate upstream and downstream communications between first component 202 and second component 208. In an embodiment, upstream communication refers to data and control information sent from the second component 208 toward the first component 202, and downstream communication refers to data and control information sent from the first component 202 toward the second component 208. As mentioned above, one or more retimers (e.g., retimers 204 and 206) may be used to extend the range of link 222 between first component 202 and second component 208.

A link 222 including one or more retimers (e.g., retimers 204, 206) can be divided into two or more separate electrical sub-links. For example, if link 222 includes a single retimer, link 222 forms a link with two separate sub-links, each operating at 8.0 GT/s or higher. As shown in FIG. 2, multiple retimers 204, 206 may be utilized to extend link 222. Three link segments 224, 226, and 228 may be defined by the two retimers 204, 206, wherein a first sub-link 224 connects the first component 202 to the first retimer 204, a second sub-link 226 connects the first retimer 204 to the second retimer 206, and a third sub-link 228 connects the second retimer 206 to the second component 208.

As shown in the example of FIG. 2, in some implementations a retimer may include two ports (or pseudo ports), and the ports can determine their respective downstream/upstream orientation dynamically. In an embodiment, retimer 204 may include an upstream port 212 and a downstream port 214. Likewise, retimer 206 may include an upstream port 216 and a downstream port 218. Each retimer 204, 206 may have an upstream path and a downstream path. Additionally, retimers 204, 206 may support operating modes including a forwarding mode and an execution mode. In some instances, a retimer 204, 206 may decode data received on a sub-link and re-encode the data that it is to forward downstream on its other sub-link. As such, a retimer may capture the received bit stream prior to regenerating the bit stream and retransmitting it to another device or even another retimer (or redriver or repeater). In some cases, the retimer may modify some values in the data it receives, such as when processing and forwarding ordered set data. Additionally, a retimer can potentially support any width option as its maximum width, such as a set of width options defined by a specification such as PCIe.

As the data rates of serial interconnects (e.g., PCIe, UPI, USB, etc.)
increase, retimers are increasingly used to extend channel reach. Multiple retimers can be cascaded for even longer channel reach. As signal speeds increase, channel reach is generally expected to decrease. Accordingly, as interconnect technologies accelerate, the use of retimers may become more common. As an example, as PCIe Gen-4 (16 GT/s) is adopted in favor of PCIe Gen-3 (8 GT/s), the use of retimers in PCIe interconnects may increase, as may their use in other interconnects as speeds increase.

System software may access the downstream port 210 (e.g., in first component 202, which may be an upstream component such as a root complex or a switch) before the link is established or when link 222 is not working properly. In an embodiment, a register (e.g., a link capability register) may be set to perform clock mode selection in the downstream port 210. The system firmware/software can configure the downstream port 210 to the desired mode, and if a change is needed, this would be done by the system firmware/software rather than by the hardware.

As noted above, there are basically two types of clocking architectures for PCIe. In the first scenario, there is no clock reference shared between the components. In this first scenario, the clock is spread for electromagnetic interference (EMI) mitigation; this first mode is called Separate Reference clock with Independent SSC (SRIS). In the second scenario, the same reference clock is provided to every component on link 222 (sometimes called common clocking), in which case it is generally irrelevant whether the clock is "spread" (non-SRIS).

The PCI-SIG has tentatively determined that components supporting "Gen 5" (also known as 5.0, i.e., 32 GT/s) must support both the SRIS and non-SRIS modes of operation. The systems, methods, and functionality described herein are proposed for inclusion in the PCIe 5.0 base specification to support both SRIS and non-SRIS clocking. Specific elements include:

Downstream ports supporting 32G mode operation, and optionally other downstream ports, may indicate support for the "SRIS Mode Selection Mechanism".

Downstream ports indicating such support can:

support both the SRIS and SRNS ("non-SRIS") modes of operation, and do so symmetrically, such that both the Rx and the Tx of a port are always in the same mode;

implement a configuration mechanism (defined below) to select the mode of operation of the downstream port;

support changing the operating mode of the downstream port while the link (e.g., link 222) is disabled (and not at other times); and

indicate which mode to use to the ports at the retimers (e.g., pseudo ports) and the upstream port, based on training sequences (TS) or ordered sets (OS) sent by the downstream port (e.g., downstream port 210).

FIG. 3 is a schematic illustration of a link capability register 300 including bits to support an SRIS mode selection mechanism, according to an embodiment of the disclosure. Link capability register 300 identifies PCI Express link specific capabilities. The allocation of register fields in link capability register 300 is shown in FIG. 3. Table 1 provides the corresponding bit definitions.

In link capability register 300, a number of bits may be included for various capability mechanisms. Among the bits in the link capability register 300 is a reserved bit (e.g., bit 23), which can be used as a set bit indicating that the SRIS mode selection mechanism is supported. The following can be added to the bit 23 capability definition:

Table 1. Link Capability Register Bit 23 Definitions
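As a concrete illustration of discovering this capability, the following is a minimal C sketch of testing such a bit. The bit position (bit 23) follows the present disclosure, and the register value is taken as a plain 32-bit word, since the configuration-space access method (ECAM mapping, OS services, etc.) is platform specific and not defined here.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Assumed bit assignment, per Table 1 of this disclosure. */
#define LNKCAP_SRIS_MODE_SEL_SUPPORTED (1u << 23)

/* Returns true if the downstream port advertises the SRIS mode
 * selection mechanism in its link capability register value. */
static bool sris_mode_selection_supported(uint32_t link_cap)
{
    return (link_cap & LNKCAP_SRIS_MODE_SEL_SUPPORTED) != 0;
}

int main(void)
{
    uint32_t link_cap = 0x00800000u; /* example value with bit 23 set */

    printf("SRIS mode selection mechanism: %s\n",
           sris_mode_selection_supported(link_cap) ? "supported"
                                                   : "not supported");
    return 0;
}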
FIG. 4 is a schematic illustration of a link control register 400 including bits to support an SRIS mode selection mechanism, according to an embodiment of the disclosure. In link control register 400, a number of bits may be included for various capability mechanisms. Among the bits in the link control register 400 is a reserved bit (e.g., bit 12), which can be used as a set bit indicating that the SRIS mode selection mechanism is supported. The following may be added to the bit 12 capability definition:

Table 2. Link Control Register Bit 12 Definitions

In an embodiment, the SRIS mode selection bits may be a multi-bit field. For example, a multi-bit field may be used to select from a menu of PPM/SKP policies (e.g., 1000 ppm) beyond those defined by the current SRIS/non-SRIS modes. In an embodiment, an SRIS mode selection bit may also be used to allow SRIS mode selection to be implemented in the upstream port as well.

The Link Capabilities 2 register field can be redefined as follows:

The Link Control 3 register can be redefined as follows:

L1 PM Substates

The L1 power management (PM) substates establish a link power management regime that creates lower power substates of the L1 link state, along with associated mechanisms for using these substates.

Ports that support the L1 PM substates do not require a reference clock when in L1 PM substates other than L1.0.

A port that supports the L1 PM substates and also supports SRIS mode is required to support the L1 PM substates when operating in SRIS mode. In this case, the CLKREQ# signal is used by the L1 PM substates protocol but has no defined relationship to any local clocks used by any port on the link, and the management of these local clocks is implementation specific.

Form Factor Requirements for the RefClk (Reference Clock) Architecture

Each form factor specification must include the following table, which provides a clear overview of the clocking architecture requirements for devices supporting that form factor specification. For each clocking architecture, the table indicates whether the architecture is required, optional, or not allowed for that form factor. Note that this refers to the operation of the device, not to the underlying silicon capabilities. The SRIS mode selection mechanism described above is used to discover and control the underlying silicon capabilities.

FIG. 5 is a process flow diagram 500 for a PCIe compliant port functioning based on the SRIS mode selection mechanism, according to an embodiment of the disclosure. First, software or firmware controlling a downstream port of an upstream component (e.g., a root complex or switch) may determine whether the downstream port supports the SRIS mode selection mechanism (502). The software/firmware may make this determination based on bits set in one or more registers (e.g., the link capability register and/or the link control register). If the downstream port does not support the SRIS mode selection mechanism, the software/firmware can forego the other steps pertaining to SRIS mode selection. Bits in the link capability register and/or the link control register may be set on boot, reboot, warm boot, etc., or when a new device is attached to an existing host or root controller.

The software/firmware may determine the system clock configuration based at least in part on an asserted bit in one or both of the link capability register or the link control register (or another register) (504). The software/firmware may determine the system clock configuration, for example, using an out-of-band management interface (e.g., a system management (SM) bus for querying a device/switch) and/or using system-level elements such as expansion cards or backplanes.

The software/firmware may set the SRIS mode selection in the downstream port to the appropriate mode based on the determination of the system clock configuration (506). In some embodiments, the software/firmware may communicate the SRIS mode to one or more upstream ports, including pseudo ports of connected retimer(s), across a PCIe compliant link. The downstream port may then communicate downstream data and control information across the link using the selected SRIS mode (510). That is, when in SRIS mode, the upstream and downstream components can each use independent, spread spectrum clocks for data and control transfers.
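The flow of FIG. 5 can be summarized in the following hedged C sketch. The capability bit (bit 23) and mode-select bit (bit 12) are the assumed assignments from Tables 1 and 2 above, the register offsets are those of the standard Link Capabilities and Link Control registers within the PCIe capability structure, and configuration space is simulated with a flat array in place of a platform-specific access mechanism. The sketch changes the mode only while the link is disabled, mirroring the requirement stated earlier.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define LNK_CAP_OFF 0x0C                           /* Link Capabilities offset */
#define LNK_CTL_OFF 0x10                           /* Link Control offset      */
#define LNKCAP_SRIS_MODE_SEL_SUPPORTED (1u << 23)  /* assumed, per Table 1     */
#define LNKCTL_LINK_DISABLE            (1u << 4)   /* standard Link Disable    */
#define LNKCTL_SRIS_MODE_SELECT        (1u << 12)  /* assumed, per Table 2     */

/* Simulated configuration space of one downstream port. */
static uint32_t regs[0x40 / 4];

static uint32_t cfg_read32(unsigned off)          { return regs[off / 4]; }
static void cfg_write32(unsigned off, uint32_t v) { regs[off / 4] = v; }

/* Steps 502-506: check support, then select SRIS or non-SRIS (SRNS)
 * while the link is disabled, since the mode may only change then. */
static bool select_sris_mode(bool want_sris)
{
    if (!(cfg_read32(LNK_CAP_OFF) & LNKCAP_SRIS_MODE_SEL_SUPPORTED))
        return false;                              /* 502: not supported */

    uint32_t ctl = cfg_read32(LNK_CTL_OFF) | LNKCTL_LINK_DISABLE;
    cfg_write32(LNK_CTL_OFF, ctl);                 /* disable the link   */

    if (want_sris)                                 /* 504/506: mode from */
        ctl |= LNKCTL_SRIS_MODE_SELECT;            /* the system clock   */
    else                                           /* configuration      */
        ctl &= ~LNKCTL_SRIS_MODE_SELECT;
    cfg_write32(LNK_CTL_OFF, ctl);

    cfg_write32(LNK_CTL_OFF, ctl & ~LNKCTL_LINK_DISABLE); /* re-enable */
    return true;
}

int main(void)
{
    regs[LNK_CAP_OFF / 4] = LNKCAP_SRIS_MODE_SEL_SUPPORTED;
    if (select_sris_mode(true))
        printf("SRIS selected, LNKCTL = 0x%08x\n", cfg_read32(LNK_CTL_OFF));
    return 0;
}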
One interconnect fabric architecture includes the Peripheral Component Interconnect Express (PCIe) architecture. A primary goal of PCIe is to enable components and devices from different vendors to interoperate in an open architecture, spanning multiple market segments: clients (desktops and mobile), servers (standard and enterprise), and embedded and communication devices. PCI Express is a high performance, general purpose I/O interconnect defined for a wide variety of future computing and communication platforms. Some PCI attributes, such as its usage model, load-store architecture, and software interfaces, have been maintained through its revisions, whereas previous parallel bus implementations have been replaced by a highly scalable, fully serial interface. The more recent versions of PCI Express take advantage of advances in point-to-point interconnects, switch-based technology, and packetized protocol to deliver new levels of performance and features. Power management, quality of service (QoS), hot-plug/hot-swap support, data integrity, and error handling are among some of the advanced features supported by PCI Express.

Referring to FIG. 6, an embodiment of a fabric composed of point-to-point links that interconnect a set of components is illustrated. System 600 includes processor 605 and system memory 610 coupled to controller hub 615. Processor 605 includes any processing element, such as a microprocessor, a host processor, an embedded processor, a coprocessor, or other processor. Processor 605 is coupled to controller hub 615 through front-side bus (FSB) 606. In one embodiment, FSB 606 is a serial point-to-point interconnect as described below. In another embodiment, link 606 includes a serial, differential interconnect architecture that is compliant with a different interconnect standard.

System memory 610 includes any memory device, such as random access memory (RAM), non-volatile (NV) memory, or other memory accessible by devices in system 600. System memory 610 is coupled to controller hub 615 through memory interface 616. Examples of a memory interface include a double data rate (DDR) memory interface, a dual-channel DDR memory interface, and a dynamic RAM (DRAM) memory interface.

In one embodiment, controller hub 615 is a root hub, root complex, or root controller in a Peripheral Component Interconnect Express (PCIe or PCIE) interconnection hierarchy. Examples of controller hub 615 include a chipset, a memory controller hub (MCH), a north bridge, an interconnect controller hub (ICH), a south bridge, and a root port controller/hub.
Often, the term chipset refers to two physically separate controller hubs, i.e., a memory controller hub (MCH) coupled to an interconnect controller hub (ICH). Note that current systems often include the MCH integrated with processor 605, while controller hub 615 communicates with I/O devices in a manner similar to that described below. In some embodiments, peer-to-peer routing is optionally supported through root complex 615.

Here, controller hub 615 is coupled to switch/bridge 620 through serial link 619. Input/output modules 617 and 621, which may also be referred to as interfaces/ports 617 and 621, include/implement a layered protocol stack to provide communication between controller hub 615 and switch 620. In one embodiment, multiple devices are capable of being coupled to switch 620.

Switch/bridge 620 routes packets/messages from device 625 upstream, i.e., up the hierarchy toward the root complex, to controller hub 615, and downstream, i.e., down the hierarchy away from the root port controller, from processor 605 or system memory 610 to device 625. Switch 620, in one embodiment, is referred to as a logical assembly of multiple virtual PCI-to-PCI bridge devices. Device 625 includes any internal or external device or component to be coupled to an electronic system, such as an I/O device, a network interface controller (NIC), an add-in card, an audio processor, a network processor, a hard drive, a storage device, a CD/DVD ROM, a monitor, a printer, a mouse, a keyboard, a router, a portable storage device, a Firewire device, a Universal Serial Bus (USB) device, a scanner, and other input/output devices. Often in PCIe vernacular, such a device is referred to as an endpoint. Although not specifically shown, device 625 may include a PCIe to PCI/PCI-X bridge to support legacy or other versions of PCI devices. Endpoint devices in PCIe are often classified as legacy, PCIe, or root complex integrated endpoints.

Graphics accelerator 630 is also coupled to controller hub 615 through serial link 632. In one embodiment, graphics accelerator 630 is coupled to an MCH, which is coupled to an ICH. Switch 620, and accordingly I/O device 625, is then coupled to the ICH. I/O modules 631 and 618 are also to implement a layered protocol stack to communicate between graphics accelerator 630 and controller hub 615. Similar to the MCH discussion above, a graphics controller or the graphics accelerator 630 itself may be integrated in processor 605.

Turning to FIG. 7, an embodiment of a layered protocol stack is illustrated. Layered protocol stack 700 includes any form of a layered communication stack, such as a Quick Path Interconnect (QPI) stack, a PCIe stack, a next generation high performance computing interconnect stack, or other layered stack. Although the discussion immediately below in reference to FIGS. 6-9 is in relation to a PCIe stack, the same concepts may be applied to other interconnect stacks. In one embodiment, protocol stack 700 is a PCIe protocol stack including transaction layer 705, link layer 710, and physical layer 720. An interface (e.g., interfaces 617, 618, 621, 622, 626, and 631 in FIG. 6) may be represented as communication protocol stack 700. Representation as a communication protocol stack may also be referred to as a module or interface implementing/including a protocol stack.

PCI Express uses packets to communicate information between components. Packets are formed in the transaction layer 705 and data link layer 710 to carry the information from the transmitting component to the receiving component.
As the transmitted packets flow through the other layers, they are extended with additional information necessary to handle packets at those layers. At the receiving side the reverse process occurs, and packets get transformed from their physical layer 720 representation to the data link layer 710 representation and finally (for transaction layer packets) to the form that can be processed by the transaction layer 705 of the receiving device.

Transaction Layer

In one embodiment, transaction layer 705 is to provide an interface between a device's processing core and the interconnect architecture, such as data link layer 710 and physical layer 720. In this regard, a primary responsibility of the transaction layer 705 is the assembly and disassembly of packets (i.e., transaction layer packets, or TLPs). The transaction layer 705 typically manages credit-based flow control for TLPs. PCIe implements split transactions, i.e., transactions with request and response separated by time, allowing a link to carry other traffic while the target device gathers data for the response.

In addition, PCIe utilizes credit-based flow control. In this scheme, a device advertises an initial amount of credit for each of the receive buffers in transaction layer 705. An external device at the opposite end of the link (e.g., controller hub 615 in FIG. 6) counts the number of credits consumed by each TLP. A transaction may be transmitted if the transaction does not exceed the credit limit. Upon receiving a response, an amount of credit is restored. An advantage of this credit scheme is that the latency of credit return does not affect performance, provided that the credit limit is not encountered.
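The following is a minimal sketch of that credit gating, purely for illustration. Real PCIe tracks separate credit types (posted, non-posted, and completion, each with header and data credits) using modulo arithmetic, and credits are returned via UpdateFC DLLPs; the single pair of counters here is a simplifying assumption.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct credit_state {
    uint32_t limit;    /* credits advertised by the receiver so far */
    uint32_t consumed; /* credits consumed by TLPs sent so far      */
};

/* A TLP may be transmitted only if it does not exceed the credit limit. */
static bool try_send_tlp(struct credit_state *cs, uint32_t cost)
{
    if (cs->consumed + cost > cs->limit)
        return false;       /* stall until more credits are returned */
    cs->consumed += cost;
    return true;            /* transmit the TLP                      */
}

/* The receiver restores credits as it drains its receive buffers. */
static void return_credits(struct credit_state *cs, uint32_t count)
{
    cs->limit += count;
}

int main(void)
{
    struct credit_state cs = { .limit = 8, .consumed = 0 };

    for (int i = 0; i < 10; i++)
        printf("TLP %d: %s\n", i, try_send_tlp(&cs, 1) ? "sent" : "stalled");

    return_credits(&cs, 4);  /* receiver frees buffer space */
    printf("after credit return: %s\n",
           try_send_tlp(&cs, 1) ? "sent" : "stalled");
    return 0;
}

Note that, as stated above, a stall occurs only when the limit is actually reached; as long as credits are returned before the limit is hit, the return latency never appears on the critical path.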
In one embodiment, four transaction address spaces include a configuration address space, a memory address space, an input/output address space, and a message address space. Memory space transactions include one or more of read requests and write requests to transfer data to/from a memory-mapped location. In one embodiment, memory space transactions are capable of using two different address formats, e.g., a short address format, such as a 32-bit address, or a long address format, such as a 64-bit address. Configuration space transactions are used to access the configuration space of PCIe devices. Transactions to the configuration space include read requests and write requests. Message space transactions (or, simply, messages) are defined to support in-band communication between PCIe agents.

Therefore, in one embodiment, transaction layer 705 assembles packet header/payload 706. The format for current packet headers/payloads may be found in the PCIe specification at the PCIe specification website.

Quickly referring to FIG. 8, an embodiment of a PCIe transaction descriptor is illustrated. In one embodiment, transaction descriptor 800 is a mechanism for carrying transaction information. In this regard, transaction descriptor 800 supports identification of transactions in a system. Other potential uses include tracking modifications of default transaction ordering and association of transactions with channels.

Transaction descriptor 800 includes global identifier field 802, attributes field 804, and channel identifier field 806. In the illustrated example, global identifier field 802 is depicted comprising local transaction identifier field 808 and source identifier field 810. In one embodiment, global transaction identifier 802 is unique for all outstanding requests.

According to one implementation, local transaction identifier field 808 is a field generated by a requesting agent, and it is unique for all outstanding requests that require a completion for that requesting agent. Furthermore, in this example, source identifier 810 uniquely identifies the requester agent within the PCIe hierarchy. Accordingly, together with source ID 810, local transaction identifier field 808 provides global identification of a transaction within the hierarchy domain.

Attributes field 804 specifies characteristics and relationships of the transaction. In this regard, attributes field 804 is potentially used to provide additional information that allows modification of the default handling of transactions. In one embodiment, attributes field 804 includes priority field 812, reserved field 814, ordering field 816, and no-snoop field 818. Here, priority subfield 812 may be modified by an initiator to assign a priority to the transaction. Reserved attribute field 814 is left reserved for future use or vendor-defined usage. Possible usage models using priority or security attributes may be implemented using the reserved attribute field.

In this example, ordering attribute field 816 is used to supply optional information conveying the type of ordering that may modify default ordering rules. According to one example implementation, an ordering attribute of "0" denotes that default ordering rules are to apply, whereas an ordering attribute of "1" denotes relaxed ordering, in which writes can pass writes in the same direction, and read completions can pass writes in the same direction. No-snoop attribute field 818 is utilized to determine whether transactions are snooped. As shown, channel ID field 806 identifies a channel with which a transaction is associated.
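For illustration only, the descriptor of FIG. 8 might be laid out as the following C structure. The member names map onto reference numerals 806 through 818 above; the field widths are assumptions made for the sketch, since no normative bit widths are given here.

#include <stdint.h>
#include <stdio.h>

struct transaction_descriptor {
    /* Global identifier field 802 */
    uint8_t  local_txn_id;  /* field 808: unique per requesting agent       */
    uint16_t source_id;     /* field 810: identifies requester in hierarchy */

    /* Attributes field 804 */
    uint8_t priority : 2;   /* field 812: initiator-assigned priority       */
    uint8_t reserved : 2;   /* field 814: future/vendor-defined use         */
    uint8_t ordering : 1;   /* field 816: 0 = default, 1 = relaxed ordering */
    uint8_t no_snoop : 1;   /* field 818: 1 = transaction is not snooped    */

    uint8_t channel_id;     /* field 806: associated channel                */
};

int main(void)
{
    struct transaction_descriptor td = {
        .local_txn_id = 0x2A,
        .source_id    = 0x0100, /* e.g., bus 1, device 0, function 0 */
        .ordering     = 1,      /* relaxed ordering permitted        */
    };

    printf("txn %u from %04x, relaxed ordering = %u\n",
           td.local_txn_id, td.source_id, td.ordering);
    return 0;
}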
Link Layer

Link layer 710, also referred to as data link layer 710, acts as an intermediate stage between transaction layer 705 and physical layer 720. In one embodiment, a responsibility of the data link layer 710 is to provide a reliable mechanism for exchanging transaction layer packets (TLPs) between two link components. One side of the data link layer 710 accepts TLPs assembled by the transaction layer 705, applies packet sequence identifier 711, i.e., an identification number or packet number, calculates and applies an error detection code, i.e., CRC 712, and submits the modified TLPs to the physical layer 720 for transmission across the physical medium to an external device.

Physical Layer

In one embodiment, physical layer 720 includes logical sub-block 721 and electrical sub-block 722 to physically transmit a packet to an external device. Here, logical sub-block 721 is responsible for the "digital" functions of physical layer 720. In this regard, the logical sub-block includes a transmit section to prepare outgoing information for transmission by electrical sub-block 722, and a receiver section to identify and prepare received information before passing it to link layer 710.

Electrical sub-block 722 includes a transmitter and a receiver. The transmitter is supplied with symbols by logical sub-block 721, which the transmitter serializes and transmits to the external device. The receiver is supplied with serialized symbols from the external device and transforms the received signals into a bit stream. The bit stream is de-serialized and supplied to logical sub-block 721. In one embodiment, an 8b/10b transmission code is employed, where ten-bit symbols are transmitted/received. Here, special symbols are used to frame a packet with frames 723. In addition, in one example, the receiver also provides a symbol clock recovered from the incoming serial stream.

As stated above, although transaction layer 705, link layer 710, and physical layer 720 are discussed in reference to a specific embodiment of a PCIe protocol stack, a layered protocol stack is not so limited. In fact, any layered protocol may be included/implemented. As an example, a port/interface that is represented as a layered protocol includes: (1) a first layer to assemble packets, i.e., a transaction layer; (2) a second layer to sequence packets, i.e., a link layer; and (3) a third layer to transmit the packets, i.e., a physical layer. As a specific example, a common standard interface (CSI) layered protocol is utilized.

Referring next to FIG. 9, an embodiment of a PCIe serial point-to-point fabric is illustrated. Although an embodiment of a PCIe serial point-to-point link is illustrated, a serial point-to-point link is not so limited, as it includes any transmission path for transmitting serial data. In the embodiment shown, a basic PCIe link includes two low-voltage, differentially driven signal pairs: a transmit pair 906/911 and a receive pair 912/907. Accordingly, device 905 includes transmission logic 906 to transmit data to device 910 and receiving logic 907 to receive data from device 910. In other words, two transmitting paths, i.e., paths 916 and 917, and two receiving paths, i.e., paths 918 and 919, are included in a PCIe link.

A transmission path refers to any path for transmitting data, such as a transmission line, a copper line, an optical line, a wireless communication channel, an infrared communication link, or other communication path. A connection between two devices, such as device 905 and device 910, is referred to as a link, such as link 915. A link may support one lane; each lane represents a set of differential signal pairs (one pair for transmission, one pair for reception). To scale bandwidth, a link may aggregate multiple lanes denoted by xN, where N is any supported link width, such as 1, 2, 4, 8, 12, 16, 32, 64, or wider.

A differential pair refers to two transmission paths, such as lines 916 and 917, to transmit differential signals. As an example, when line 916 toggles from a low voltage level to a high voltage level, i.e., a rising edge, line 917 drives from a high logic level to a low logic level, i.e., a falling edge. Differential signals potentially demonstrate better electrical characteristics, such as better signal integrity, i.e., less cross-coupling, voltage overshoot/undershoot, ringing, and the like. This allows for a better timing window, which enables faster transmission frequencies.

Note that the apparatus, methods, and systems described above may be implemented in any electronic device or system as aforementioned. As a specific illustration, the figures below provide exemplary systems for utilizing the invention as described herein. As the systems below are described in more detail, a number of different interconnects are disclosed, described, and revisited from the discussion above. And as is readily apparent, the advances described above may be applied to any of those interconnects, fabrics, or architectures.
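Before turning to those systems, a small worked example of the lane-scaling and encoding points above: effective link bandwidth is the per-lane rate, times the lane count, times the encoding efficiency. The 8b/10b efficiency (8 payload bits per 10 line bits) matches the transmission code mentioned for physical layer 720; the 128b/130b figure is included for comparison with later PCIe generations.

#include <stdio.h>

/* Effective bandwidth in GB/s: GT/s per lane, times lane count,
 * times encoding efficiency, divided by 8 bits per byte. */
static double link_gbytes_per_s(double gt_per_s, int lanes, double efficiency)
{
    return gt_per_s * lanes * efficiency / 8.0;
}

int main(void)
{
    /* x4 link at 2.5 GT/s with 8b/10b encoding: 1.00 GB/s */
    printf("2.5 GT/s x4 : %.2f GB/s\n", link_gbytes_per_s(2.5, 4, 8.0 / 10.0));

    /* x16 link at 32 GT/s with 128b/130b encoding: roughly 63 GB/s */
    printf("32 GT/s x16 : %.2f GB/s\n",
           link_gbytes_per_s(32.0, 16, 128.0 / 130.0));
    return 0;
}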
Turning to FIG. 10, a block diagram of an exemplary computer system formed with a processor that includes execution units to execute an instruction is illustrated, where one or more of the interconnects implement one or more features in accordance with one embodiment of the invention. System 1000, in accordance with an embodiment described herein, includes a component, such as processor 1002, to employ execution units including logic to perform algorithms for processing data. System 1000 is representative of processing systems based on the PENTIUM III™, PENTIUM 4™, Xeon™, Itanium, XScale™, and/or StrongARM™ microprocessors available from Intel Corporation of Santa Clara, California, although other systems (including PCs having other microprocessors, engineering workstations, set-top boxes, and the like) may also be used. In one embodiment, sample system 1000 executes a version of the WINDOWS™ operating system available from Microsoft Corporation of Redmond, Washington, although other operating systems (UNIX and Linux, for example), embedded software, and/or graphical user interfaces may also be used. Thus, embodiments of the present invention are not limited to any specific combination of hardware circuitry and software.

Embodiments are not limited to computer systems. Alternative embodiments of the present invention can be used in other devices, such as handheld devices and embedded applications. Some examples of handheld devices include cellular phones, Internet protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications can include a microcontroller, a digital signal processor (DSP), a system on a chip, a network computer (NetPC), a set-top box, a network hub, a wide area network (WAN) switch, or any other system that can perform one or more instructions in accordance with at least one embodiment.

In the illustrated embodiment, processor 1002 includes one or more execution units 1008 to implement an algorithm that is to perform at least one instruction. One embodiment may be described in the context of a single processor desktop or server system, but alternative embodiments may be included in a multiprocessor system. System 1000 is an example of a "hub" system architecture. Computer system 1000 includes processor 1002 to process data signals. Processor 1002, as one illustrative example, includes a complex instruction set computer (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor. Processor 1002 is coupled to processor bus 1010 that transmits data signals between processor 1002 and other components in system 1000. The elements of system 1000 (e.g., graphics accelerator 1012, memory controller hub 1016, memory 1020, I/O controller hub 1024, wireless transceiver 1026, flash BIOS 1028, network controller 1034, audio controller 1036, serial expansion port 1038, I/O controller 1040, etc.) perform their conventional functions that are well known to those familiar with the art.

In one embodiment, processor 1002 includes a level 1 (L1) internal cache memory 1004. Depending on the architecture, processor 1002 may have a single internal cache or multiple levels of internal caches. Other embodiments include a combination of both internal and external caches, depending on the particular implementation and needs.
Register file 1006 is to store different types of data in various registers, including integer registers, floating point registers, vector registers, banked registers, shadow registers, checkpoint registers, status registers, and instruction pointer registers.

Execution unit 1008, including logic to perform integer and floating point operations, also resides in processor 1002. Processor 1002, in one embodiment, includes a microcode (ucode) ROM to store microcode, which when executed, is to perform algorithms for certain macroinstructions or to handle complex scenarios. Here, microcode is potentially updateable to handle logic bugs/fixes for processor 1002. For one embodiment, execution unit 1008 includes logic to handle packed instruction set 1009. By including packed instruction set 1009 in the instruction set of general-purpose processor 1002, along with associated circuitry to execute the instructions, the operations used by many multimedia applications may be performed using packed data in general-purpose processor 1002. Thus, many multimedia applications are accelerated and executed more efficiently by using the full width of a processor's data bus for performing operations on packed data. This potentially eliminates the need to transfer smaller units of data across the processor's data bus to perform one or more operations, one data element at a time.

Alternative embodiments of execution unit 1008 may also be used in microcontrollers, embedded processors, graphics devices, DSPs, and other types of logic circuits. System 1000 includes memory 1020. Memory 1020 includes a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, a flash memory device, or other memory device. Memory 1020 stores instructions and/or data represented by data signals that are to be executed by processor 1002.

Note that any of the aforementioned features or aspects of the invention may be utilized on one or more interconnects illustrated in FIG. 10. For example, an on-die interconnect (ODI), which is not shown, for coupling internal units of processor 1002 implements one or more aspects of the invention described above. Or the invention may be associated with a processor bus 1010 (e.g., an Intel Quick Path Interconnect (QPI) or other known high performance computing interconnect), a high bandwidth memory path 1018 to memory 1020, a point-to-point link to graphics accelerator 1012 (e.g., a Peripheral Component Interconnect Express (PCIe) compliant fabric), a controller hub interconnect 1022, or an I/O or other interconnect (e.g., USB, PCI, PCIe) for coupling the other illustrated components. Some examples of such components include audio controller 1036, firmware hub (flash BIOS) 1028, wireless transceiver 1026, data storage 1024, legacy I/O controller 1040 containing user input and keyboard interfaces 1042, a serial expansion port 1038 such as Universal Serial Bus (USB), and network controller 1034. Data storage device 1024 can comprise a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or other mass storage device.

Referring now to FIG. 11, shown is a block diagram of a second system 1100 in accordance with an embodiment of the present invention. As shown in FIG. 11, multiprocessor system 1100 is a point-to-point interconnect system, and includes a first processor 1170 and a second processor 1180 coupled via a point-to-point interconnect 1150.
Each of processors 1170 and 1180 may be some version of a processor. In one embodiment, 1152 and 1154 are part of a serial, point-to-point coherent interconnect fabric, such as Intel's Quick Path Interconnect (QPI) architecture. As a result, the invention may be implemented within the QPI architecture.

While shown with only two processors 1170, 1180, it is to be understood that the scope of the present invention is not so limited. In other embodiments, one or more additional processors may be present in a given processor.

Processors 1170 and 1180 are shown including integrated memory controller units 1172 and 1182, respectively. Processor 1170 also includes, as part of its bus controller units, point-to-point (P-P) interfaces 1176 and 1178; similarly, second processor 1180 includes P-P interfaces 1186 and 1188. Processors 1170, 1180 may exchange information via a point-to-point (P-P) interface 1150 using P-P interface circuits 1178, 1188. As shown in FIG. 11, IMCs 1172 and 1182 couple the processors to respective memories, namely memory 1132 and memory 1134, which may be portions of main memory locally attached to the respective processors.

Processors 1170, 1180 each exchange information with a chipset 1190 via individual P-P interfaces 1152, 1154 using point-to-point interface circuits 1176, 1194, 1186, and 1198. Chipset 1190 also exchanges information with a high-performance graphics circuit 1138 via an interface circuit 1192 along a high-performance graphics interconnect 1139.

A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

Chipset 1190 may be coupled to a first bus 1116 via an interface 1196. In one embodiment, first bus 1116 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.

As shown in FIG. 11, various I/O devices 1114 are coupled to first bus 1116, along with a bus bridge 1118 which couples first bus 1116 to a second bus 1120. In one embodiment, second bus 1120 includes a low pin count (LPC) bus. Various devices are coupled to second bus 1120 including, for example, a keyboard and/or mouse 1122, communication devices 1127, and a storage unit 1128 such as a disk drive or other mass storage device which, in one embodiment, often includes instructions/code and data 1130. Further, an audio I/O 1124 is shown coupled to second bus 1120. Note that other architectures are possible, where the included components and interconnect architectures vary. For example, instead of the point-to-point architecture of FIG. 11, a system may implement a multi-drop bus or other such architecture.

Many different use cases may be achieved using the various inertial and environmental sensors present in a platform. These use cases enable advanced computing operations including perceptual computing, and also allow for enhancements with regard to power management/battery life, security, and system responsiveness.

For example, with regard to power management/battery life issues, based at least in part on information from an ambient light sensor, the ambient light conditions in a location of the platform are determined, and the intensity of the display is controlled accordingly.
Thus, the power consumed in operating the display is reduced in certain light conditions.

As to security operations, based on context information obtained from sensors, such as location information, it may be determined whether a user is allowed to access certain secure documents. For example, a user may be permitted to access such documents at a work place or at a home location. However, the user is prevented from accessing such documents when the platform is present at a public location. This determination, in one embodiment, is based on location information, e.g., determined via a GPS sensor or camera recognition of landmarks. Other security operations may include providing for pairing of devices within a close range of each other, e.g., a portable platform as described herein and a user's desktop computer, mobile telephone, or so forth. Certain sharing, in some implementations, is realized via near field communication when these devices are so paired. However, when the devices exceed a certain range, such sharing may be disabled. Furthermore, when pairing a platform as described herein and a smartphone, an alarm may be configured to be triggered when the devices move more than a predetermined distance from each other, when in a public location. In contrast, when these paired devices are in a safe location, e.g., a work place or home location, the devices may exceed this predetermined limit without triggering such an alarm.

Responsiveness may also be enhanced using the sensor information. For example, even when a platform is in a low power state, the sensors may still be enabled to run at a relatively low frequency. Accordingly, any changes in a location of the platform, e.g., as determined by inertial sensors, GPS sensor, or so forth, are determined. If no such changes have been registered, a faster connection to a previous wireless hub, such as a Wi-Fi™ access point or similar wireless enabler, occurs, as there is no need to scan for available wireless network resources in this case. Thus, a greater level of responsiveness when waking from a low power state is achieved.

It is to be understood that many other use cases may be enabled using sensor information obtained via the integrated sensors within a platform as described herein, and the above examples are only for purposes of illustration. Using a system as described herein, a perceptual computing system may allow for the addition of alternative input modalities, including gesture recognition, and enable the system to sense user operations and intent.

In some embodiments, one or more infrared or other heat sensing elements, or any other element for sensing the presence or movement of a user, may be present. Such sensing elements may include multiple different elements working together, working in sequence, or both. For example, sensing elements include elements that provide initial sensing, such as light or sound projection, followed by sensing for gesture detection by, for example, an ultrasonic time-of-flight camera or a patterned light camera.

Also in some embodiments, the system includes a light generator to produce an illuminated line. In some embodiments, this line provides a visual cue regarding a virtual boundary, namely an imaginary or virtual location in space, where action of the user to pass or break through the virtual boundary or plane is interpreted as an intent to engage with the computing system. In some embodiments, the illuminated line may change colors as the computing system transitions into different states with regard to the user.
Referring now to FIG. 12, shown is a block diagram of components present in a computer system in accordance with an embodiment of the present invention. As shown in FIG. 12, system 1200 includes any combination of components. These components may be implemented as ICs, portions thereof, discrete electronic devices, or other modules, logic, hardware, software, firmware, or a combination thereof adapted in a computer system, or as components otherwise incorporated within a chassis of the computer system. Note also that the block diagram of FIG. 12 is intended to show a high-level view of many components of the computer system.
However, it should be understood that in other implementations some of the components shown may be omitted, additional components may be present, and a different arrangement of the components shown may occur. As a result, the invention described above may be implemented in any portion of one or more of the interconnects illustrated or described below.

As seen in FIG. 12, in one embodiment, processor 1210 includes a microprocessor, a multi-core processor, a multithreaded processor, an ultra-low voltage processor, an embedded processor, or another known processing element. In the illustrated implementation, processor 1210 acts as a main processing unit and central hub for communication with many of the various components of system 1200. As one example, processor 1210 is implemented as a system on a chip (SoC). As a specific illustrative example, processor 1210 includes a Core™ architecture-based processor (e.g., i3, i5, i7) or another such processor available from Intel Corporation (Santa Clara, CA). However, it should be understood that other low power processors, such as those available from Advanced Micro Devices, Inc. (AMD) of Sunnyvale, CA, MIPS-based designs from MIPS Technologies, Inc. of Sunnyvale, CA, or ARM-based designs licensed from ARM Holdings, Inc. or its customers, or their licensees or adopters, may instead be present in other embodiments, such as an Apple A5/A6 processor, a Qualcomm Snapdragon processor, or a TI OMAP processor. Note that many of the customer versions of such processors are modified and varied; however, they may support or recognize a specific instruction set that performs defined algorithms as set forth by the processor licensor. Here, the microarchitectural implementation may vary, but the architectural function of the processor is generally consistent. Certain details regarding the architecture and operation of processor 1210 in one implementation are discussed further below to provide an illustrative example.

In one embodiment, processor 1210 communicates with system memory 1215. As an illustrative example, this can be implemented in an embodiment via multiple memory devices to provide a given amount of system memory. As examples, the memory can be in accordance with a Joint Electron Devices Engineering Council (JEDEC) low power double data rate (LPDDR)-based design, such as the current LPDDR2 standard according to JEDEC JESD 209-2E (published April 2009), or a next-generation LPDDR standard, such as LPDDR3 or LPDDR4, that offers extensions to LPDDR2 to increase bandwidth. In various implementations, the individual memory devices may be of different package types, e.g., single die package (SDP), dual die package (DDP), or quad die package (Q17P). In some embodiments, these devices are directly soldered onto a motherboard to provide a lower profile solution, while in other embodiments the devices are configured as one or more memory modules that in turn couple to the motherboard. Of course, other memory implementations are possible, such as other types of memory modules, e.g., dual inline memory modules (DIMMs) of different varieties, including but not limited to microDIMMs and MiniDIMMs.
In a particular illustrative embodiment, the memory is between 2 GB and 16 GB in size and may be configured as a DDR3LM package or an LPDDR2 or LPDDR3 memory that is soldered onto the motherboard via a ball grid array (BGA).

A mass storage device 1220 may also couple to processor 1210 in order to provide persistent storage of information such as data, applications, one or more operating systems, and so forth. In various embodiments, the mass storage may be implemented via an SSD to enable thinner and lighter system design as well as to improve system responsiveness. However, in other embodiments, the mass storage may primarily be implemented using a hard disk drive (HDD), with a smaller amount of SSD storage acting as an SSD cache to enable non-volatile storage of context state and other such information so that fast power-up can occur on re-initiation of system activities. As also shown in FIG. 12, a flash memory device 1222 may be coupled to processor 1210, e.g., via a serial peripheral interface (SPI). This flash memory device may provide non-volatile storage of system software, including basic input/output software (BIOS) and other firmware of the system.

In various embodiments, the mass storage of the system is implemented by an SSD alone, or as a disk, optical, or other drive with an SSD cache. In some embodiments, the mass storage is implemented as an SSD or as an HDD along with a recovery (RST) cache module. In various implementations, the HDD provides storage of between 320 GB and 4 terabytes (TB) and upward, while the RST cache is implemented with an SSD having a capacity of 24 GB to 256 GB. Note that such an SSD cache may be configured as a single-level cache (SLC) or multi-level cache (MLC) option to provide an appropriate level of responsiveness. In an SSD-only option, the module may be accommodated in various locations, for example, in an mSATA or NGFF slot. As an example, an SSD has a capacity ranging from 120 GB to 1 TB.

Various input/output (IO) devices may be present within system 1200. Specifically shown in the embodiment of FIG. 12 is a display 1224, which may be a high definition LCD or LED panel configured within a lid portion of the chassis. This display panel may also provide for a touch screen 1225 (e.g., adapted externally over the display panel) so that, via a user's interaction with this touch screen, user inputs can be provided to the system to enable desired operations, e.g., with regard to the display of information, the accessing of information, and so forth. In one embodiment, display 1224 may be coupled to processor 1210 via a display interconnect that can be implemented as a high-performance graphics interconnect. Touch screen 1225 may be coupled to processor 1210 via another interconnect, which in an embodiment can be an I2C interconnect. As further shown in FIG. 12, in addition to touch screen 1225, user input by way of touch can also occur via a touch pad 1230, which may be configured within the chassis and may also be coupled to the same I2C interconnect as touch screen 1225.

The display panel may operate in multiple modes. In a first mode, the display panel can be arranged in a transparent state in which the display panel is transparent to visible light. In various embodiments, the majority of the display panel may be a display, except for a bezel around the periphery. When the system is operated in a notebook mode and the display panel is operated in a transparent state, a user may view information that is presented on the display panel while also being able to view objects behind the display.
Additionally, information displayed on the display panel may be viewed by a user positioned behind the display. Alternatively, the operating state of the display panel can be an opaque state in which visible light does not pass through the display panel.

In a tablet mode, the system is folded shut such that the back display surface of the display panel comes to rest in a position where it faces outwardly toward a user when the bottom surface of the base panel is rested on a surface or held by the user. In the tablet mode of operation, the back display surface performs the role of a display and user interface, as this surface may have touch screen functionality and may perform other known functions of a conventional touch screen device, such as a tablet device. To this end, the display panel may include a transparency-adjusting layer that is disposed between a touch screen layer and a front display surface. In some embodiments, the transparency-adjusting layer may be an electrochromic (EC) layer, an LCD layer, or a combination of EC and LCD layers.

In various embodiments, the display can be of different sizes, e.g., an 11.6" or a 13.3" screen, and may have a 16:9 aspect ratio and at least 300 nits brightness. Also, the display may be of full high definition (HD) resolution (at least 1920 x 1080p), be compatible with an embedded display port (eDP), and be a low power panel with panel self refresh.

As to touch screen capabilities, the system may provide for a display multi-touch panel that is multi-touch capacitive and capable of supporting at least 5 fingers. And in some embodiments, the display may support 10 fingers. In one embodiment, the touch screen is accommodated within damage- and scratch-resistant glass and coating (e.g., Gorilla Glass™ or Gorilla Glass 2™) for low friction to reduce "finger burn" and avoid "finger skipping". To provide for an enhanced touch experience and responsiveness, the touch panel in some implementations has multi-touch functionality of less than 2 frames (30 Hz) per static view during pinch zoom, and single-touch functionality of less than 1 cm per frame (30 Hz) with 200 ms lag on finger to pointer. In some implementations, the display supports edge-to-edge glass with a minimal screen bezel that is also flush with the panel surface, and limited IO interference when using multi-touch.

For perceptual computing and other purposes, various sensors may be present within the system and may be coupled to processor 1210 in different manners. Certain inertial and environmental sensors may couple to processor 1210 through a sensor hub 1240, e.g., via an I2C interconnect. In the embodiment shown in FIG. 12, these sensors may include an accelerometer 1241, an ambient light sensor (ALS) 1242, a compass 1243, and a gyroscope 1244. Other environmental sensors may include one or more thermal sensors 1246, which in some embodiments couple to processor 1210 via a system management bus (SMBus).

Many different use cases may be realized using the various inertial and environmental sensors present in the platform. These use cases enable advanced computing operations, including perceptual computing, and also allow for enhancements with regard to power management/battery life, security, and system responsiveness.

For example, with regard to power management/battery life issues, based at least in part on information from an ambient light sensor, the ambient light conditions in a location of the platform are determined and the intensity of the display is controlled accordingly. Thus, the power consumed in operating the display is reduced in certain light conditions.
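The ambient-light control loop just described is straightforward to express in software. The following is a minimal C sketch, assuming hypothetical sysfs-style device paths for the ALS reading and the backlight level; the 0-1000 lux mapping and the panel maximum are likewise illustrative assumptions, not values taken from this description.

#include <stdio.h>

#define MAX_BRIGHTNESS 255   /* assumed panel maximum */

int main(void)
{
    /* Hypothetical paths; real platforms expose different device names. */
    FILE *als = fopen("/sys/bus/iio/devices/iio:device0/in_illuminance_raw", "r");
    FILE *bl  = fopen("/sys/class/backlight/panel0/brightness", "w");
    int lux = 0;

    if (als == NULL || bl == NULL)
        return 1;
    if (fscanf(als, "%d", &lux) != 1)
        lux = 0;

    /* Clamp to 1000 lux, then map linearly onto the backlight range, so
     * dim surroundings yield a dim (lower power) display. */
    int level = (lux > 1000 ? 1000 : lux) * MAX_BRIGHTNESS / 1000;
    fprintf(bl, "%d\n", level);

    fclose(als);
    fclose(bl);
    return 0;
}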
As to security operations, based on context information obtained from sensors, such as location information, it may be determined whether a user is allowed to access certain secure documents. For example, a user may be permitted to access such documents at a work place or a home location, but prevented from accessing the documents when the platform is present at a public location. This determination is based, in one embodiment, on location information determined, e.g., via a GPS sensor or camera recognition of landmarks. Other security operations may include providing for pairing of devices within a close range of each other, e.g., a portable platform as described herein and a user's desktop computer, mobile telephone, or so forth. In some implementations, certain sharing is realized via near field communication when these devices are so paired. However, this sharing can be disabled when the devices move beyond a certain range of each other. Furthermore, when pairing a platform as described herein with a smartphone, an alarm may be configured to be triggered when the devices move more than a predetermined distance from each other while in a public location. In contrast, when these paired devices are in a safe location, e.g., a work place or home location, the devices may exceed this predetermined limit without triggering such an alarm.
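As a concrete illustration of this location-based access policy, the following C sketch gates document access on proximity to enrolled locations. The coordinates, the 300 m radius, the function names, and the flat-earth distance approximation are all illustrative assumptions, not details taken from this description.

#include <math.h>
#include <stdbool.h>
#include <stddef.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

struct location { double lat; double lon; };   /* degrees */

/* Hypothetical enrolled locations, e.g., a work site and a home. */
static const struct location trusted[] = {
    { 37.3875, -122.0575 },
    { 37.4419, -122.1430 },
};

/* Coarse local-tangent-plane distance; adequate for a geofence of a few
 * hundred meters. A production implementation would use a great-circle
 * formula. */
static bool within_fence(const struct location *a, const struct location *b)
{
    double dlat = (a->lat - b->lat) * 111000.0;  /* ~meters per degree */
    double dlon = (a->lon - b->lon) * 111000.0 * cos(a->lat * M_PI / 180.0);
    return sqrt(dlat * dlat + dlon * dlon) < 300.0;
}

bool allow_secure_document_access(const struct location *current)
{
    for (size_t i = 0; i < sizeof trusted / sizeof trusted[0]; i++)
        if (within_fence(current, &trusted[i]))
            return true;
    return false;   /* treat any other (public) location as untrusted */
}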
Responsiveness may also be enhanced using the sensor information. For example, even when the platform is in a low power state, the sensors may still be enabled to run at a relatively low frequency. Accordingly, any changes in a location of the platform, e.g., as determined by inertial sensors, a GPS sensor, or so forth, are determined. If no such changes have been registered, a faster connection to a previous wireless hub, such as a Wi-Fi™ access point or similar wireless enabler, occurs, as there is no need in this case to scan for available wireless network resources. Thus, a greater level of responsiveness when waking from a low power state is achieved.

It is to be understood that many other use cases may be enabled using sensor information obtained via the integrated sensors within a platform as described herein, and the above examples are only for purposes of illustration. Using a system as described herein, a perceptual computing system may allow for the addition of alternative input modalities, including gesture recognition, and enable the system to sense user operations and intent.

In some embodiments, one or more infrared or other heat-sensing elements, or any other element for sensing the presence or movement of a user, may be present. Such sensing elements may include multiple different elements working together, working in sequence, or both. For example, sensing elements include elements that provide initial sensing, such as light or sound projection, followed by sensing for gesture detection by, for example, an ultrasonic time-of-flight camera or a patterned light camera.

Also in some embodiments, the system includes a light generator to produce an illuminated line. In some embodiments, this line provides a visual cue regarding a virtual boundary, namely an imaginary or virtual location in space, where action of the user to pass or break through the virtual boundary or plane is interpreted as an intent to engage with the computing system. In some embodiments, the illuminated line may change colors as the computing system transitions into different states with regard to the user. The illuminated line may be used to provide a visual cue for the user of a virtual boundary in space, and may be used by the system to determine transitions in state of the computer with regard to the user, including determining when the user wishes to engage with the computer.

In some embodiments, the computer senses user position and operates to interpret the movement of a hand of the user through the virtual boundary as a gesture indicating an intention of the user to engage with the computer. In some embodiments, upon the user passing through the virtual line or plane, the light generated by the light generator may change, thereby providing visual feedback to the user that the user has entered an area for providing gestures to provide input to the computer.

Display screens may provide visual indications of transitions of state of the computing system with regard to a user. In some embodiments, a first screen is provided in a first state in which the presence of a user is sensed by the system, such as through use of one or more of the sensing elements.

In some implementations, the system acts to sense user identity, such as by facial recognition. Here, transition to a second screen may be provided in a second state in which the computing system has recognized the user identity, where this second screen provides visual feedback to the user that the user has transitioned into a new state. Transition to a third screen may occur in a third state in which the user has confirmed recognition of the user.

In some embodiments, the computing system may use a transition mechanism to determine a location of a virtual boundary for a user, where the location of the virtual boundary may vary with user and context. The computing system may generate a light, such as an illuminated line, to indicate the virtual boundary for engaging with the system. In some embodiments, the computing system may be in a waiting state, and the light may be produced in a first color. The computing system may detect whether the user has reached past the virtual boundary, such as by sensing the presence and movement of the user using the sensing elements.

In some embodiments, if the user has been detected as having crossed the virtual boundary (e.g., the hands of the user are closer to the computing system than the virtual boundary line), the computing system may transition to a state for receiving gesture inputs from the user, where a mechanism to indicate this transition may include the light indicating the virtual boundary changing to a second color.

In some embodiments, the computing system may then determine whether gesture movement is detected. If gesture movement is detected, the computing system may proceed with a gesture recognition process, which may include the use of data from a gesture data library, which may reside in memory in the computing device or may otherwise be accessed by the computing device.

If a gesture of the user is recognized, the computing system may perform a function in response to the input, and return to receive additional gestures if the user is within the virtual boundary. In some embodiments, if the gesture is not recognized, the computing system may transition into an error state, where a mechanism to indicate the error state may include the light indicating the virtual boundary changing to a third color, with the system returning to receive additional gestures if the user is within the virtual boundary for engaging with the computing system.
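The wait/receive/error flow above amounts to a small state machine keyed to the indicator color. A minimal C sketch follows; the state and color names are illustrative assumptions, and the recognition step is abstracted behind caller-supplied flags rather than a real gesture data library.

#include <stdbool.h>

enum vb_state { VB_WAIT, VB_RECEIVE, VB_ERROR };
enum vb_color { VB_FIRST_COLOR, VB_SECOND_COLOR, VB_THIRD_COLOR };

struct vb_ctx {
    enum vb_state state;
    enum vb_color light;   /* color of the illuminated boundary line */
};

void vb_step(struct vb_ctx *c, bool inside_boundary,
             bool gesture_seen, bool gesture_recognized)
{
    switch (c->state) {
    case VB_WAIT:
        c->light = VB_FIRST_COLOR;
        if (inside_boundary) {          /* hand has crossed the line */
            c->state = VB_RECEIVE;
            c->light = VB_SECOND_COLOR;
        }
        break;
    case VB_RECEIVE:
        if (gesture_seen && gesture_recognized) {
            /* Perform the mapped function, then keep receiving while
             * the user remains inside the boundary. */
            if (!inside_boundary)
                c->state = VB_WAIT;
        } else if (gesture_seen) {
            c->state = VB_ERROR;
            c->light = VB_THIRD_COLOR;  /* signal unrecognized gesture */
        }
        break;
    case VB_ERROR:
        if (inside_boundary)            /* user may retry gestures */
            c->state = VB_RECEIVE;
        else
            c->state = VB_WAIT;
        break;
    }
}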
As mentioned above, in other embodiments the system can be configured as a convertible tablet system that can be used in at least two different modes, a tablet mode and a notebook mode. The convertible system may have two panels, namely a display panel and a base panel, such that in the tablet mode the two panels are disposed in a stack on top of one another. In the tablet mode, the display panel faces outwardly and may provide touch screen functionality as found in conventional tablets. In the notebook mode, the two panels may be arranged in an open clamshell configuration.

In various embodiments, the accelerometer may be a 3-axis accelerometer having data rates of at least 50 Hz. A gyroscope may also be included, which can be a 3-axis gyroscope. In addition, an electronic compass/magnetometer may be present. Also, one or more proximity sensors may be provided (e.g., in the lid, to sense when a person is in proximity (or not) to the system and to adjust power/performance to extend battery life). For some OSs, sensor fusion capability including the accelerometer, gyroscope, and compass may provide enhanced features.
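A convertible's posture can be estimated from such a 3-axis accelerometer. The C sketch below is a simplified, assumption-laden classifier: the 0.8 g and 0.3 g thresholds are illustrative, and a real design would debounce the decision and typically combine it with the lid or hinge sensor discussed below.

#include <math.h>

enum posture { NOTEBOOK_MODE, TABLET_MODE };

/* Classify posture from one display-panel accelerometer sample, in units
 * of g, taken from the >= 50 Hz stream described above. */
enum posture classify_posture(double ax, double ay, double az)
{
    (void)ay;   /* unused in this simplified model */

    /* Gravity landing mostly on the z axis means the display is lying
     * roughly flat, consistent with the stacked-panel tablet posture. */
    if (fabs(az) > 0.8 && fabs(ax) < 0.3)
        return TABLET_MODE;
    return NOTEBOOK_MODE;
}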
In some embodiments, an internal lid/display open switch or sensor is used to indicate when the lid is closed/opened, and can be used to place the system into connected standby or to automatically wake from connected standby. Other system sensors can include ACPI sensors for internal processor, memory, and skin temperature monitoring to enable changes to processor and system operating states based on sensed parameters.

In an embodiment, the OS may be a Windows 8 OS that implements Connected Standby (also referred to herein as Win8 CS). Windows 8 Connected Standby, or another OS having a similar state, can provide, via a platform as described herein, very low ultra-idle power to enable applications to remain connected, e.g., to a cloud-based location, at very low power consumption. The platform can support three power states: screen on (normal), connected standby (as a default "off" state), and shutdown (zero watts of power consumption). Thus, in the connected standby state the platform is logically on (at minimal power levels) even though the screen is off. In such a platform, power management can be made to be transparent to applications and maintain constant connectivity, in part due to offload technology that enables the lowest powered component to perform an operation.

Also seen in FIG. 12, various peripheral devices may couple to processor 1210 via a low pin count (LPC) interconnect. In the embodiment shown, various components can be coupled through an embedded controller (EC) 1235. Such components can include a keyboard 1236 (e.g., coupled via a PS2 interface), a fan 1237, and a thermal sensor 1239. In some embodiments, touch pad 1230 may also couple to EC 1235 via a PS2 interface. In addition, a security processor such as a Trusted Platform Module (TPM) 1238 (e.g., in accordance with the Trusted Computing Group (TCG) TPM Specification Version 1.2, dated October 2, 2003) may also couple to processor 1210 via this LPC interconnect. However, it should be understood that the scope of the present invention is not limited in this regard, and that secure processing and storage of secure information may be in another protected location, such as static random access memory (SRAM) in a security coprocessor, or as encrypted data blobs that are only decrypted when protected by a Secure Enclave (SE) processor mode.

In a particular implementation, peripheral ports may include a high definition media interface (HDMI) connector (which can be of different form factors, such as full size, mini, or micro) and one or more USB ports, such as ports in accordance with the Universal Serial Bus Revision 3.0 Specification (November 2008), with at least one port providing power for charging of USB devices (such as smartphones) when the system is in the connected standby state and is plugged into AC wall power. In addition, one or more Thunderbolt™ ports can be provided. Other ports may include an externally accessible card reader, such as a full size SD-XC card reader and/or a SIM card reader for WWAN (e.g., an 8-pin card reader). For audio, a 3.5 mm jack with stereo sound and microphone capability (e.g., combination functionality) can be present, with support for jack detection (e.g., headphone-only support using microphones in the lid, or headphones with a microphone in the cable). In some embodiments, this jack can be re-taskable between stereo headphone and stereo microphone input. Also, a power jack can be provided for coupling to an AC brick.

System 1200 can communicate with external devices in a variety of manners, including wirelessly. In the embodiment shown in FIG. 12, various wireless modules, each of which can correspond to a radio configured for a particular wireless communication protocol, are present. One manner for wireless communication in a short range, such as a near field, may be via a near field communication (NFC) unit 1245, which may communicate with processor 1210 via an SMBus in one embodiment. Note that via this NFC unit 1245, devices in close proximity to each other can communicate. For example, a user can enable system 1200 to communicate with another portable device, such as the user's smartphone, by adapting the two devices together in close relation and enabling the transfer of information such as identification information or payment information, data such as image data, and so forth. Wireless power transfer can also be performed using an NFC system.

Using the NFC unit described herein, users can bump devices side-to-side and place devices side-by-side for near field coupling functions (such as near field communication and wireless power transfer (WPT)) by leveraging the coupling between coils of one or more of such devices. More specifically, embodiments provide devices with strategically shaped and placed ferrite materials to provide for better coupling of the coils. Each coil has an inductance associated with it, which can be chosen in conjunction with the resistive, capacitive, and other features of the system to enable a common resonant frequency for the system.
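The common resonant frequency referred to above follows the standard LC relation f0 = 1 / (2 * pi * sqrt(L * C)). As a purely illustrative check (these component values are assumptions, not taken from this description): with a coil inductance L of 2 µH, a capacitance C of about 69 pF gives f0 = 1 / (2 * pi * sqrt(2e-6 * 69e-12)) ≈ 13.56 MHz, the carrier frequency commonly used for NFC.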
As further seen in FIG. 12, additional wireless units can include other short range wireless engines, including a WLAN unit 1250 and a Bluetooth unit 1252. Using WLAN unit 1250, Wi-Fi™ communications in accordance with a given Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard can be realized, while via Bluetooth unit 1252, short range communications via a Bluetooth protocol can occur. These units may communicate with processor 1210 via, e.g., a USB link or a universal asynchronous receiver transmitter (UART) link. Alternatively, these units may couple to processor 1210 via an interconnect according to the Peripheral Component Interconnect Express™ (PCIe™) protocol, or another such protocol such as a serial data input/output (SDIO) standard. Of course, the actual physical connection between these peripheral devices, which may be configured on one or more add-in cards, can be by way of next generation form factor (NGFF) connectors adapted to a motherboard.

In addition, wireless wide area communications, e.g., according to a cellular or other wireless wide area protocol, can occur via a WWAN unit 1256, which in turn may couple to a subscriber identity module (SIM) 1257. In addition, to enable receipt and use of location information, a GPS module 1255 may also be present. Note that in the embodiment shown in FIG. 12, WWAN unit 1256 and an integrated capture device such as a camera module 1254 may communicate via a given USB protocol, e.g., a USB 2.0 or 3.0 link, or a UART or I2C protocol. Again, the actual physical connection of these units can be via adaptation of an NGFF add-in card to an NGFF connector configured on the motherboard.

In a particular embodiment, wireless functionality can be provided modularly, e.g., with a WiFi™ 802.11ac solution (e.g., an add-in card that is backward compatible with IEEE 802.11abgn) with support for Windows 8 CS. This card can be configured in an internal slot (e.g., via an NGFF adapter). An additional module may provide for Bluetooth capability (e.g., Bluetooth 4.0 with backward compatibility) as well as wireless display functionality. In addition, NFC support may be provided via a separate device or multi-function device, and can be positioned, as an example, in a front right portion of the chassis for easy access. A still additional module may be a WWAN device that can provide support for 3G/4G/LTE and GPS. This module can be implemented in an internal (e.g., NGFF) slot. Integrated antenna support can be provided for WiFi™, Bluetooth, WWAN, NFC, and GPS, enabling seamless transition from WiFi™ to WWAN radios, wireless gigabit (WiGig) in accordance with the Wireless Gigabit Specification (July 2010), and vice versa.

As described above, an integrated camera can be incorporated in the lid. As one example, this camera can be a high resolution camera, e.g., having a resolution of at least 2.0 megapixels (MP) and extending to 6.0 MP and beyond.

To provide for audio inputs and outputs, an audio processor can be implemented via a digital signal processor (DSP) 1260, which may couple to processor 1210 via a high definition audio (HDA) link. Similarly, DSP 1260 may communicate with an integrated coder/decoder (CODEC) and amplifier 1262, which in turn may couple to output speakers 1263, which may be implemented within the chassis. Similarly, amplifier and CODEC 1262 can be coupled to receive audio inputs from a microphone 1265, which in an embodiment can be implemented via dual array microphones (such as a digital microphone array) to provide for high quality audio inputs, enabling voice-activated control of various operations within the system. Note also that audio outputs can be provided from amplifier/CODEC 1262 to a headphone jack 1264.
While shown with these particular components in the embodiment of FIG. 12, understand that the scope of the present invention is not limited in this regard.

In a particular embodiment, the digital audio codec and amplifier are capable of driving the stereo headphone jack, a stereo microphone jack, an internal microphone array, and the stereo speakers. In different implementations, the codec can be integrated into an audio DSP or coupled via an HD audio path to a peripheral controller hub (PCH). In some implementations, in addition to the integrated stereo speakers, one or more subwoofers can be provided, and the speaker solution can support DTS audio.

In some embodiments, processor 1210 may be powered by an external voltage regulator (VR) and multiple internal voltage regulators that are integrated inside the processor die, referred to as fully integrated voltage regulators (FIVRs). The use of multiple FIVRs in the processor enables the grouping of components into separate power planes, such that power is regulated and supplied by the FIVR to only those components in the group. During power management, a given power plane of one FIVR may be powered down or off when the processor is placed into a certain low power state, while another power plane of another FIVR remains active, or fully powered.

In one embodiment, a maintenance power plane can be used during some deep sleep states to power on the I/O pins for several I/O signals, such as the interface between the processor and a PCH, the interface with the external VR, and the interface with EC 1235. This maintenance power plane also powers an on-die voltage regulator that supports the on-board SRAM or other cache memory in which the processor context is stored during the sleep state. The maintenance power plane is also used to power on the processor's wakeup logic, which monitors and processes the various wakeup source signals.

During power management, while the other power planes are powered down or off when the processor enters certain deep sleep states, the maintenance power plane remains powered on to support the above-referenced components. However, this can lead to unnecessary power consumption or dissipation when those components are not needed. To this end, embodiments may provide a connected standby sleep state to maintain processor context using a dedicated power plane. In one embodiment, the connected standby sleep state facilitates processor wakeup using resources of a PCH, which itself may be present in a package with the processor. In one embodiment, the connected standby sleep state facilitates sustaining processor architectural functions in the PCH until processor wakeup, and this enables turning off all of the unnecessary processor components that were previously left powered on during the deep sleep states, including turning off all of the clocks. In one embodiment, the PCH contains a time stamp counter (TSC) and connected standby logic for controlling the system during the connected standby state. The integrated voltage regulator for the maintenance power plane may also reside on the PCH.

In an embodiment, during the connected standby state, an integrated voltage regulator may function as the dedicated power plane that remains powered on to support the dedicated cache memory in which the processor context, such as critical state variables, is stored when the processor enters the deep sleep and connected standby states.
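The plane-gating behavior described above can be modeled as a bitmask over independently regulated domains. In the C sketch below, the plane names and the register-write helper are hypothetical; the point is only that deep sleep entry gates every plane except the maintenance plane that feeds the wakeup logic and context-saving SRAM.

#include <stdint.h>

#define PLANE_CORES        (1u << 0)
#define PLANE_GRAPHICS     (1u << 1)
#define PLANE_IO           (1u << 2)
#define PLANE_MAINTENANCE  (1u << 3)   /* wakeup logic, context SRAM */

static uint32_t active_planes =
    PLANE_CORES | PLANE_GRAPHICS | PLANE_IO | PLANE_MAINTENANCE;

/* Hypothetical hook that would program the per-plane regulators. */
static void fivr_apply(uint32_t planes) { (void)planes; }

/* On deep sleep entry, power down every plane except maintenance, which
 * must stay on to monitor wake sources and preserve processor context. */
void enter_deep_sleep(void)
{
    active_planes = PLANE_MAINTENANCE;
    fivr_apply(active_planes);
}

/* On wake, restore full power before resuming execution. */
void exit_deep_sleep(void)
{
    active_planes = PLANE_CORES | PLANE_GRAPHICS | PLANE_IO |
                    PLANE_MAINTENANCE;
    fivr_apply(active_planes);
}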
The critical state may include state variables associated with the architecture, microarchitecture, debug state, and/or similar state variables associated with the processor.

The wakeup source signals from EC 1235 may be sent to the PCH instead of the processor during the connected standby state so that the PCH can manage the wakeup processing instead of the processor. In addition, the TSC is maintained in the PCH to facilitate sustaining processor architectural functions. While shown with these particular components in the embodiment of FIG. 12, understand that the scope of the present invention is not limited in this regard.

Power control in the processor can lead to enhanced power savings. For example, power can be dynamically allocated between cores, individual cores can change frequency/voltage, and multiple deep low power states can be provided to enable very low power consumption. In addition, dynamic control of the cores or independent core portions can provide for reduced power consumption by powering off components when they are not being used.

Some implementations may provide a specific power management IC (PMIC) to control platform power. Using this solution, a system may see very low (e.g., less than 5%) battery degradation over an extended duration (e.g., 16 hours) when in a given standby state, such as when in a Win8 connected standby state. In a Win8 idle state, a battery life exceeding, e.g., 9 hours can be realized (e.g., at 150 nits). As to video playback, a long battery life can be enabled, e.g., full HD video playback can occur for a minimum of 6 hours. A platform in one implementation may have an energy capacity of, e.g., 35 watt-hours (Whr) for a Win8 CS using an SSD, and, e.g., 40-44 Whr for a Win8 CS using an HDD with an RST cache configuration.

A particular implementation may provide support for a 15 W nominal CPU thermal design power (TDP), with a configurable CPU TDP of up to approximately a 25 W TDP design point. The platform may include minimal vent openings owing to the thermal features described above. In addition, the platform is pillow-friendly (in that no hot air is blown at the user). Different maximum temperature points can be realized depending on the chassis material. In one implementation of a plastic chassis (at least having a lid or base portion of plastic), the maximum operating temperature can be 52 degrees Celsius (C). And for an implementation of a metal chassis, the maximum operating temperature can be 46 degrees C.

In different implementations, a security module such as a TPM can be integrated into a processor or can be a discrete device, such as a TPM 2.0 device. With an integrated security module (also referred to as Platform Trust Technology (PTT)), the BIOS/firmware can be enabled to expose certain hardware features for certain security features, including secure instructions, secure boot, anti-theft technology, identity protection technology, Trusted Execution Technology (TXT), and manageability engine technology, along with secure user interfaces such as a secure keyboard and display.

While the invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this invention.

A design may go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners.
First, as is useful in simulations, the hardware may be represented using a hardware description language or another functional description language. Additionally, a circuit level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level of data representing the physical placement of various devices in the hardware model. In the case where conventional semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In any representation of the design, the data may be stored in any form of a machine readable medium. A memory or a magnetic or optical storage device, such as a disc, may be the machine readable medium to store information transmitted via optical or electrical waves modulated or otherwise generated to transmit such information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made. Thus, a communication provider or a network provider may store on a tangible, machine-readable medium, at least temporarily, an article (e.g., information encoded into a carrier wave) embodying techniques of embodiments of the present invention.

A module as used herein refers to any combination of hardware, software, and/or firmware. As an example, a module includes hardware, such as a microcontroller, associated with a non-transitory medium to store code adapted to be executed by the microcontroller. Therefore, reference to a module, in one embodiment, refers to the hardware, which is specifically configured to recognize and/or execute the code to be held on a non-transitory medium. Furthermore, in another embodiment, use of a module refers to the non-transitory medium including the code, which is specifically adapted to be executed by the microcontroller to perform predetermined operations. And as can be inferred, in yet another embodiment, the term module (in this example) may refer to the combination of the microcontroller and the non-transitory medium. Often, module boundaries that are illustrated as separate commonly vary and potentially overlap. For example, a first and a second module may share hardware, software, firmware, or a combination thereof, while potentially retaining some independent hardware, software, or firmware. In one embodiment, use of the term logic includes hardware, such as transistors, registers, or other hardware, such as programmable logic devices.

Use of the phrase "for" or "configured to," in one embodiment, refers to arranging, putting together, manufacturing, offering to sell, importing, and/or designing an apparatus, hardware, logic, or element to perform a designated or determined task. In this example, an apparatus or element thereof that is not operating is still "configured to" perform a designated task if it is designed, coupled, and/or interconnected to perform said designated task. As a purely illustrative example, a logic gate may provide a 0 or a 1 during operation. But a logic gate "configured to" provide an enable signal to a clock does not include every potential logic gate that may provide a 1 or 0. Instead, the logic gate is one coupled in some manner such that during operation the 1 or 0 output is to enable the clock.
Note once again that use of the term "configured to" does not require operation, but instead focuses on the latent state of an apparatus, hardware, and/or element, where in the latent state the apparatus, hardware, and/or element is designed to perform a particular task when the apparatus, hardware, and/or element is operating.

Furthermore, use of the phrases "capable of" and/or "operable to," in one embodiment, refers to some apparatus, logic, hardware, and/or element designed in such a way as to enable use of the apparatus, logic, hardware, and/or element in a specified manner. Note, as above, that use of "capable of" or "operable to," in one embodiment, refers to the latent state of an apparatus, logic, hardware, and/or element, where the apparatus, logic, hardware, and/or element is not operating but is designed in such a manner as to enable use of the apparatus in a specified manner.

A value, as used herein, includes any known representation of a number, a state, a logical state, or a binary logical state. Often, the use of logic levels, logic values, or logical values is also referred to as the use of 1's and 0's, which simply represents binary logic states. For example, a 1 refers to a high logic level and a 0 refers to a low logic level. In one embodiment, a storage cell, such as a transistor or flash cell, may be capable of holding a single logical value or multiple logical values. However, other representations of values in computer systems have been used. For example, the decimal number ten may also be represented as the binary value 1010 and as the hexadecimal letter A. Therefore, a value includes any representation of information capable of being held in a computer system.

Moreover, states may be represented by values or portions of values. As an example, a first value, such as a logical one, may represent a default or initial state, while a second value, such as a logical zero, may represent a non-default state. In addition, the terms reset and set, in one embodiment, refer to a default and an updated value or state, respectively. For example, a default value potentially includes a high logical value (i.e., reset), while an updated value potentially includes a low logical value (i.e., set). Note that any combination of values may be utilized to represent any number of states.

The embodiments of methods, hardware, software, firmware, or code set forth above may be implemented via instructions or code stored on a machine-accessible, machine-readable, computer-accessible, or computer-readable medium that are executable by a processing element. A non-transitory machine-accessible/readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine, such as a computer or electronic system. For example, a non-transitory machine-accessible medium includes random access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; magnetic or optical storage media; flash memory devices; electrical storage devices; optical storage devices; acoustical storage devices; or other forms of storage devices for holding information received from transitory (propagated) signals (e.g., carrier waves, infrared signals, digital signals), which are to be distinguished from the non-transitory mediums that may receive information therefrom.

Instructions used to program logic to perform embodiments of the invention may be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions can be distributed via a network or by way of other computer readable media.
Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, compact disc read-only memory (CD-ROM), magneto-optical disks, read-only memory (ROM), random access memory (RAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic or optical cards, flash memory, or a tangible, machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, a computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).

Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

In the foregoing specification, a detailed description has been given with reference to specific exemplary embodiments. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
Moreover, the foregoing use of embodiment and other exemplary language does not necessarily refer to the same embodiment or the same example, but may refer to different and distinct embodiments, as well as potentially the same embodiment.

Systems, methods, and apparatuses may include one or a combination of the following examples:

Example 1 is a method for operating a downstream port of an upstream component connected to one or more downstream components across a Peripheral Component Interconnect Express (PCIe)-compliant link, the method comprising: determining that the downstream port supports one or more separate reference clock (SRIS) mode selection mechanisms with independent spread spectrum timing (SSC); determining a system clock configuration from the downstream port to a corresponding upstream port connected to the downstream port through the PCIe-compliant link; setting an SRIS mode in the downstream port; and sending data across the link from the downstream port using the determined system clock configuration.

Example 2 may include the subject matter of Example 1, wherein setting the SRIS mode in the downstream port includes setting the SRIS mode based at least in part on the determination of the system clock configuration.

Example 3 may include the subject matter of any of Examples 1 or 2, and may further include communicating the SRIS mode across the PCIe-compliant link to one or more upstream ports connected to the downstream port.

Example 4 may include the subject matter of Example 3, wherein the one or more upstream ports comprise a pseudo port of a retimer.

Example 5 may include the subject matter of any of Examples 1-4, wherein determining that the downstream port supports one or more SRIS mode selection mechanisms includes determining that an SRIS mode selection mechanism bit is set in a register associated with the link.

Example 6 may include the subject matter of Example 5, wherein the register associated with the link comprises a link capability register.

Example 7 may include the subject matter of Example 6, wherein the bit set in the link capability register comprises bit 23, set to indicate the presence of an SRIS mode selection capability.

Example 8 may include the subject matter of Example 5, wherein the register associated with the link comprises a link control register.

Example 9 may include the subject matter of Example 8, wherein the bit set in the link control register comprises bit 12, set to indicate SRIS mode selection.

Example 10 may include the subject matter of any of Examples 1-9, wherein determining the system clock configuration includes determining the system clock configuration using an out-of-band management interface, the out-of-band management interface comprising a system management bus.

Example 11 is a computer program product tangibly embodied on a non-transitory computer-readable medium, the computer program product comprising instructions that, when executed, cause logic implemented on a root port controller compliant with the Peripheral Component Interconnect Express (PCIe) protocol to: determine that a downstream port supports one or more separate reference clock (SRIS) mode selection mechanisms with independent spread spectrum timing (SSC); determine a system clock configuration from the downstream port to a corresponding upstream port connected to the downstream port through a PCIe-compliant link; set an SRIS mode in the downstream port; and send data across the link from the downstream port using the determined system clock configuration.
Example 12 may include the subject matter of Example 11, wherein setting the SRIS mode in the downstream port includes setting the SRIS mode based at least in part on the determination of the system clock configuration.

Example 13 may include the subject matter of any of Examples 11-12, and may further include instructions to communicate the SRIS mode across the PCIe-compliant link to one or more upstream ports connected to the downstream port.

Example 14 may include the subject matter of Example 13, wherein the one or more upstream ports comprise a pseudo port of a retimer.

Example 15 may include the subject matter of any of Examples 11-14, wherein determining that the downstream port supports one or more SRIS mode selection mechanisms includes determining that an SRIS mode selection mechanism bit is set in a register associated with the link.

Example 16 may include the subject matter of Example 15, wherein the register associated with the link comprises a link capability register.

Example 17 may include the subject matter of Example 16, wherein the bit set in the link capability register comprises bit 23, set to indicate the presence of an SRIS mode selection capability.

Example 18 may include the subject matter of any of Examples 11-17, wherein the register associated with the link comprises a link control register.

Example 19 may include the subject matter of Example 18, wherein the bit set in the link control register comprises bit 12, set to indicate SRIS mode selection.

Example 20 may include the subject matter of any of Examples 11-19, wherein determining the system clock configuration includes determining the system clock configuration using an out-of-band management interface, the out-of-band management interface comprising a system management bus.

Example 21 is a computing system comprising a root port controller conforming to the Peripheral Component Interconnect Express (PCIe) protocol, the root port controller comprising a downstream port, the downstream port comprising logic implemented at least partially in hardware to: determine that the downstream port supports one or more separate reference clock (SRIS) mode selection mechanisms with independent spread spectrum timing (SSC); determine a system clock configuration from the downstream port to a corresponding upstream port connected to the downstream port through a PCIe-compliant link; set an SRIS mode in the downstream port; and send data across the link from the downstream port using the determined system clock configuration.

Example 22 may include the subject matter of Example 21, wherein setting the SRIS mode in the downstream port includes setting the SRIS mode based at least in part on the determination of the system clock configuration.

Example 23 may include the subject matter of any of Examples 21-22, and may further include logic to communicate the SRIS mode across the PCIe-compliant link to one or more upstream ports connected to the downstream port.

Example 24 may include the subject matter of Example 23, wherein the one or more upstream ports comprise a pseudo port of a retimer.

Example 25 may include the subject matter of any of Examples 21-24, wherein determining that the downstream port supports one or more SRIS mode selection mechanisms includes determining that an SRIS mode selection mechanism bit is set in a register associated with the link.

Example 26 may include the subject matter of Example 25, wherein the register associated with the link comprises a link capability register.
Example 27 may include the subject matter of Example 26, wherein the bit set in the link capability register comprises bit 23, set to indicate the presence of an SRIS mode selection capability.

Example 28 may include the subject matter of any of Examples 21-27, wherein the register associated with the link comprises a link control register.

Example 29 may include the subject matter of Example 28, wherein the bit set in the link control register comprises bit 12, set to indicate SRIS mode selection.

Example 30 may include the subject matter of any of Examples 21-29, wherein determining the system clock configuration includes determining the system clock configuration using an out-of-band management interface, the out-of-band management interface comprising a system management bus.
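Examples 5-9 reduce to two register operations: testing the capability bit and setting the control bit. The C sketch below assumes hypothetical configuration-space accessors and placeholder register offsets; only the bit positions (bit 23 of the link capability register, bit 12 of the link control register) come from the examples above.

#include <stdbool.h>
#include <stdint.h>

#define LINK_CAP_SRIS_SELECT  (1u << 23)  /* capability advertised (Example 7) */
#define LINK_CTRL_SRIS_MODE   (1u << 12)  /* SRIS mode selected (Example 9) */

/* Placeholder offsets and stub accessors over a stand-in config space;
 * real offsets depend on where the capability structure sits in
 * configuration space. */
enum { LINK_CAP_REG = 0x0C, LINK_CTRL_REG = 0x10 };
static uint32_t cfg_space[64];
static uint32_t cfg_read32(uint16_t off)              { return cfg_space[off / 4]; }
static void     cfg_write32(uint16_t off, uint32_t v) { cfg_space[off / 4] = v; }

bool enable_sris_if_supported(void)
{
    /* Determine that the downstream port supports SRIS mode selection. */
    if (!(cfg_read32(LINK_CAP_REG) & LINK_CAP_SRIS_SELECT))
        return false;

    /* Set SRIS mode in the downstream port; data can then be sent across
     * the link using the determined system clock configuration. */
    cfg_write32(LINK_CTRL_REG,
                cfg_read32(LINK_CTRL_REG) | LINK_CTRL_SRIS_MODE);
    return true;
}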
A microelectronic device comprises a stack structure comprising a vertically alternating sequence of conductive material and insulative material arranged in tiers. The stack structure has blocks separated from one another by first dielectric slot structures. Each of the blocks comprises two crest regions, a stadium structure interposed between the two crest regions in a first horizontal direction and comprising opposing staircase structures each having steps comprising edges of the tiers of the stack structure, and two bridge regions neighboring opposing sides of the stadium structure in a second horizontal direction orthogonal to the first horizontal direction and having upper surfaces substantially coplanar with upper surfaces of the two crest regions. At least one second dielectric slot structure is within horizontal boundaries of the stadium structure in the first horizontal direction and partially vertically extends through and segments each of the two bridge regions. Memory devices, electronic systems, and methods of forming microelectronic devices are also described. |
CLAIMSWhat is claimed is:1. A microelectronic device, comprising: a stack structure comprising a vertically alternating sequence of conductive material and insulative material arranged in tiers, the stack structure having blocks separated from one another by first dielectric slot structures, each of the blocks comprising: two crest regions; a stadium structure interposed between the two crest regions in a first horizontal direction and comprising opposing staircase structures each having steps comprising edges of the tiers of the stack structure; two bridge regions neighboring opposing sides of the stadium structure in a second horizontal direction orthogonal to the first horizontal direction and having upper surfaces substantially coplanar with upper surfaces of the two crest regions; and at least one second dielectric slot structure within horizontal boundaries of the stadium structure in the first horizontal direction and partially vertically extending through and segmenting each of the two bridge regions.2. The microelectronic device of claim 1, further comprising a filled trench vertically overlying and within horizontal boundaries of the stadium structure, the filled trench comprising: a first dielectric material on the opposing staircase structures of the stadium structure and on inner sidewalls of the two bridge regions; a second dielectric material on the first dielectric material and having a different material composition than the first dielectric material; and a third dielectric material on the second dielectric material and having a different material composition than the second dielectric material.
3. The microelectronic device of claim 2, wherein: the first dielectric material comprises a dielectric oxide material; and the second dielectric material comprises a dielectric nitride material.4. The microelectronic device of claim 2, wherein a portion of the at least one second dielectric slot structure is positioned within horizontal boundaries of the filled trench, the portion of the at least one second dielectric slot structure partially vertically extending through the filled trench.5. The microelectronic device of claim 4, wherein, for each of the blocks of the stack structure: the portion of the at least one second dielectric slot structure continuously extends in the second horizontal direction from a first of the two bridge regions to a second of the two bridge regions; an additional portion of the at least one second dielectric slot structure continuously extends in the second horizontal direction through the first of the two bridge regions and to a first of the first dielectric slot structures; and a further portion of the at least one second dielectric slot structure continuously extends in the second horizontal direction through the second of the two bridge regions and to a second of the first dielectric slot structures.6. The microelectronic device of any one of claims 1 through 5, wherein the at least one second dielectric slot structure comprises only one second dielectric slot structure continuously extending in the second horizontal direction across more than one of the blocks of the stack structure and across at least one of the first dielectric slot structures interposed between the more than one of the blocks of the stack structure.7. The microelectronic device of any one of claims 1 through 5, wherein the at least one second dielectric slot structure is within boundaries in the first horizontal direction of a central portion of the stadium structure interposed between the opposing staircase structures of the stadium structure.
8. The microelectronic device of any one of claims 1 through 5, wherein the at least one second dielectric slot structure comprises at least two second dielectric slot structures extending in parallel in the second horizontal direction.9. The microelectronic device of any one of claims 1 through 5, wherein the at least one second dielectric slot structure comprises at least two second dielectric slot structures extending in series in the second horizontal direction.10. The microelectronic device of any one of claims 1 through 5, further comprising third dielectric slot structures within a horizontal area of each of the blocks of the stack structure, the third dielectric slot structures partially vertically extending through each of the blocks of the stack structure and horizontally extending in the first horizontal direction through one of the two crest regions of each of the blocks of the stack structure and into one of the opposing staircase structures of the stadium structure of each of the blocks of the stack structure.11. The microelectronic device of claim 10, wherein each of the third dielectric slot structures is completely horizontally offset from the at least one second dielectric slot structure in the first horizontal direction.12. A method of forming a microelectronic device, comprising: forming a preliminary stack structure comprising a vertically alternating sequence of sacrificial material and insulative material arranged in tiers, the preliminary stack structure having blocks separated from one another by slots, each of the blocks comprising: two crest regions; two bridge regions horizontally extending in parallel from and between the two crest regions and having upper boundaries substantially coplanar with upper boundaries of the two crest regions; and a stadium structure interposed between the two crest regions in a first horizontal direction and interposed between the two bridge regions in a second horizontal direction orthogonal to the first horizontal direction, the stadium
structure comprising opposing staircase structures each having steps comprising edges of the tiers of the preliminary stack structure; replacing the sacrificial material of the preliminary stack structure with conductive material to form a stack structure comprising a vertically alternating sequence of the conductive material and the insulative material arranged in the tiers, the stack structure having the blocks separated from one another by the slots; filling the slots with dielectric material to form first dielectric slot structures; and forming at least one second dielectric slot structure within horizontal boundaries of the stadium structure in the first horizontal direction and partially vertically extending through and segmenting each of the two bridge regions.13. The method of claim 12, further comprising, prior to replacing the sacrificial material of the preliminary stack structure with conductive material: forming a first dielectric material on surfaces of the two crest regions, the two bridge regions, and the opposing staircase structures of the stadium structure; forming a second dielectric material on the first dielectric material, the second dielectric material having a different material composition than the first dielectric material; and forming a third dielectric material on the second dielectric material, the third dielectric material having a different material composition than the second dielectric material.14. The method of claim 13, further comprising: selecting the first dielectric material to comprise silicon dioxide; selecting the second dielectric material to comprise silicon nitride; and selecting the third dielectric material to comprise additional silicon dioxide.15. The method of claim 13, wherein forming at least one second dielectric slot structure further comprises forming the at least one second dielectric slot structure to extend in the second horizontal direction through portions of the first dielectric material, the second dielectric material, the third dielectric material, and the two bridge regions.16. The method of any one of claims 12 through 15, wherein forming at least one second dielectric slot structure further comprises forming the at least one second
dielectric slot structure to extend in the second horizontal direction through pairs of the first dielectric slot structures neighboring opposing sides of each of the blocks of the stack structure.17. The method of any one of claims 12 through 15, wherein forming at least one second dielectric slot structure comprises forming at least two second dielectric slot structures positioned in series with one another in the second horizontal direction, a first of the at least two second dielectric slot structures horizontally and vertically extending through a first of the two bridge regions of one of the blocks of the stack structure, and a second of the at least two second dielectric slot structures horizontally and vertically extending through a second of the two bridge regions of the one of the blocks of the stack structure.18. The method of any one of claims 12 through 15, wherein forming at least one second dielectric slot structure comprises forming at least two second dielectric slot structures positioned in parallel with one another in the second horizontal direction, each of the at least two second dielectric slot structures horizontally and vertically extending through each of the two bridge regions of at least one of the blocks of the stack structure.19. The method of any one of claims 12 through 15, further comprising forming third dielectric slot structures within a horizontal area of each of the blocks of the stack structure, the third dielectric slot structures completely offset from the at least one second dielectric slot structure in the first horizontal direction and extending in the first horizontal direction through one of the two crest regions of each of the blocks of the stack structure and terminating within a horizontal area of one of the opposing staircase structures of each of the blocks of the stack structure.20. The method of claim 19, further comprising forming lower boundaries of the third dielectric slot structures to be substantially coplanar with lower boundaries of the at least one second dielectric slot structure.
21. A memory device, comprising: a stack structure comprising tiers each comprising a conductive material and an insulative material vertically neighboring the conductive material, the stack structure divided into blocks extending in parallel in a first direction and separated from one another in a second direction by dielectric slot structures, each of the blocks comprising: a stadium structure comprising: opposing staircase structures individually having steps comprising horizontal ends of at least some of the tiers of the stack structure; and a central portion between the opposing staircase structures in the first direction; first elevated regions neighboring opposing ends of the stadium structure in the first direction; and second elevated regions neighboring opposing sides of the stadium structure in the second direction, uppermost surfaces of the second elevated regions substantially coplanar with uppermost surfaces of the first elevated regions; at least one additional dielectric slot structure within horizontal boundaries in the first direction of the central portion of the stadium structure of each of the blocks, and horizontally and vertically extending through the second elevated regions of each of the blocks; and strings of memory cells vertically extending through a portion of each of the blocks neighboring the stadium structure in the first direction.22. The memory device of claim 21, further comprising, within each of the blocks, a filled trench vertically overlying and within a horizontal area of the stadium structure, the filled trench comprising: a dielectric oxide liner material on the opposing staircase structures and the central portion of the stadium structure, and on inner side surfaces of the second elevated regions; a dielectric nitride liner material on the dielectric oxide liner material; and a dielectric fill material on the dielectric nitride liner material.
23. The memory device of claim 21, further comprising, within each of the blocks, further dielectric slot structures extending in parallel with one another in the first direction and completely horizontally offset from the at least one additional dielectric slot structure in the first direction.24. The memory device of any one of claims 21 through 23, further comprising: digit lines overlying the stack structure and electrically coupled to the strings of memory cells; a source structure underlying the stack structure and electrically coupled to the strings of memory cells; conductive contact structures on at least some of the steps of the opposing staircase structures of the stadium structure; conductive routing structures coupled to the conductive contact structures; and control logic devices coupled to the source structure, the digit lines, and the conductive routing structures.25. An electronic system, comprising: an input device; an output device; a processor device operably coupled to the input device and the output device; and a memory device operably coupled to the processor device and comprising at least one microelectronic device structure comprising: a stack structure comprising a vertically alternating sequence of conductive material and insulative material arranged in tiers, the stack structure comprising at least two blocks separated by at least one intervening dielectric structure, each of the at least two blocks comprising: two elevated regions; a stadium structure interposed between the two elevated regions in a first horizontal direction and comprising staircase structures opposing one another in the first horizontal direction, the staircase structures each having steps comprising horizontal ends of the tiers of the stack structure;
two additional elevated regions neighboring opposing sides of the stadium structure in a second horizontal direction perpendicular to the first horizontal direction, upper boundaries of the two additional elevated regions substantially coplanar with upper boundaries of the two elevated regions; and at least one dielectric slot structure within horizontal boundaries of the stadium structure in the first horizontal direction, the at least one dielectric slot structure horizontally and vertically extending through each of the two additional elevated regions of each of the at least two blocks of the stack structure. |
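To keep the spatial vocabulary of the claims straight (blocks separated by first dielectric slot structures; each block holding crest or elevated regions, a stadium structure of opposing staircases, and bridge regions along the stadium's sides, each segmented by at least one second dielectric slot structure), the following is a minimal, purely conceptual Python model of that layout. Every class and field name is invented for illustration; nothing here corresponds to fabrication or design tooling.

```python
from dataclasses import dataclass
from typing import List

# Purely conceptual model of the claimed layout, intended only to keep the
# claim vocabulary straight. All names are invented for illustration.

@dataclass
class Staircase:
    steps: List[int]  # tier indices whose edges (horizontal ends) form the steps

@dataclass
class Stadium:
    forward: Staircase  # descends in the first horizontal (X) direction
    reverse: Staircase  # ascends back up; a central portion lies between the two

@dataclass
class Block:
    stadium: Stadium
    crest_regions: int = 2         # elevated regions flanking the stadium in X
    bridge_regions: int = 2        # elevated regions flanking the stadium in Y;
                                   # per claim 1, each is partially cut through
                                   # (segmented) by a second dielectric slot structure
    second_slot_structures: int = 1

def make_block(tier_count: int) -> Block:
    tiers = list(range(tier_count))
    return Block(Stadium(forward=Staircase(tiers), reverse=Staircase(tiers[::-1])))

# Blocks are separated from one another by first dielectric slot structures.
blocks = [make_block(tier_count=8) for _ in range(3)]
print(blocks[0].stadium.reverse.steps[:3])  # -> [7, 6, 5]
```

The model only encodes containment and counts; the claimed geometric extents (horizontal boundaries, partial vertical extents) are deliberately omitted.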
METHODS OF FORMING MICROELECTRONIC DEVICES, AND RELATED MICROELECTRONIC DEVICES, MEMORY DEVICES, AND ELECTRONIC SYSTEMSPRIORITY CLAIMThis application claims the benefit of the filing date of United States Patent Application Serial No. 17/125,200, filed December 17, 2020, for “METHODS OF FORMING MICROELECTRONIC DEVICES, AND RELATED MICROELECTRONIC DEVICES, MEMORY DEVICES, AND ELECTRONIC SYSTEMS.”TECHNICAL FIELDThe disclosure, in various embodiments, relates generally to the field of microelectronic device design and fabrication. More specifically, the disclosure relates to methods of forming microelectronic devices, and to related microelectronic devices, memory devices, and electronic systems.BACKGROUNDMicroelectronic device designers often desire to increase the level of integration or density of features within a microelectronic device by reducing the dimensions of the individual features and by reducing the separation distance between neighboring features. In addition, microelectronic device designers often seek to design architectures that are not only compact, but offer performance advantages, as well as simplified designs.One example of a microelectronic device is a memory device. Memory devices are generally provided as internal integrated circuits in computers or other electronic devices. There are many types of memory devices including, but not limited to, non-volatile memory devices (e.g., NAND Flash memory devices). One way of increasing memory density in non-volatile memory devices is to utilize vertical memory array (also referred to as a “three-dimensional (3D) memory array”) architectures. A conventional vertical memory array includes strings of memory cells vertically extending through one or more stack structures including tiers of conductive material and insulative material. Each string of memory cells may include at least one select device coupled thereto. Such a configuration permits a greater number of switching devices (e.g., transistors) to be located in a unit of die area (i.e., length and width of active surface consumed) by building the array upwards (e.g., vertically) on a
die, as compared to structures with conventional planar (e.g., two-dimensional) arrangements of transistors.Vertical memory array architectures generally include electrical connections between the conductive material of the tiers of the stack structure(s) of the memory device and control logic devices (e.g., string drivers) so that the memory cells of the vertical memory array can be uniquely selected for writing, reading, or erasing operations. One method of forming such an electrical connection includes forming so-called “staircase” (or “stair step”) structures at edges (e.g., horizontal ends) of the tiers of the stack structure(s) of the memory device. The staircase structure includes individual “steps” defining contact regions for the conductive material of the tiers, upon which conductive contact structures can be positioned to provide electrical access to the conductive material. In turn, conductive routing structures can be employed to couple the conductive contact structures to the control logic devices. However, conventional staircase structure fabrication techniques can segment the conductive material of an individual tier in a manner resulting in discontinuous conductive paths through the tier that can require the use of multiple (e.g., more than one) switching devices (e.g., transistors) of at least one string driver to drive voltages completely across the tier and/or in opposing directions across the tier.SUMMARYIn some embodiments, a microelectronic device comprises a stack structure comprising a vertically alternating sequence of conductive material and insulative material arranged in tiers. The stack structure has blocks separated from one another by first dielectric slot structures. Each of the blocks comprises two crest regions, a stadium structure interposed between the two crest regions in a first horizontal direction and comprising opposing staircase structures each having steps comprising edges of the tiers of the stack structure, and two bridge regions neighboring opposing sides of the stadium structure in a second horizontal direction orthogonal to the first horizontal direction and having upper surfaces substantially coplanar with upper surfaces of the two crest regions. The microelectronic device further comprises at least one second dielectric slot structure within horizontal boundaries of the stadium structure in the first horizontal direction and partially vertically extending through and segmenting each of the two bridge regions.In additional embodiments, a method of forming a microelectronic device comprises forming a preliminary stack structure comprising a vertically alternating sequence of
sacrificial material and insulative material arranged in tiers. The preliminary stack structure has blocks separated from one another by slots. Each of the blocks comprises two crest regions, two bridge regions horizontally extending in parallel from and between the two crest regions and having upper boundaries substantially coplanar with upper boundaries of the two crest regions, and a stadium structure interposed between the two crest regions in a first horizontal direction and interposed between the two bridge regions in a second horizontal direction orthogonal to the first horizontal direction. The stadium structure comprises opposing staircase structures each having steps comprising edges of the tiers of the preliminary stack structure. The sacrificial material of the preliminary stack structure is replaced with conductive material to form a stack structure comprising a vertically alternating sequence of the conductive material and the insulative material arranged in the tiers. The stack structure has the blocks separated from one another by the slots. The slots are filled with dielectric material to form first dielectric slot structures. At least one second dielectric slot structure is formed within horizontal boundaries of the stadium structure in the first horizontal direction and partially vertically extends through and segments each of the two bridge regions.In further embodiments, a memory device comprises a stack structure comprising tiers each comprising a conductive material and an insulative material vertically neighboring the conductive material. The stack structure is divided into blocks extending in parallel in a first direction and separated from one another in a second direction by dielectric slot structures. Each of the blocks comprises a stadium structure comprising opposing staircase structures individually having steps comprising horizontal ends of at least some of the tiers of the stack structure, and a central portion between the opposing staircase structures in the first direction; first elevated regions neighboring opposing ends of the stadium structure in the first direction; and second elevated regions neighboring opposing sides of the stadium structure in the second direction, uppermost surfaces of the second elevated regions substantially coplanar with uppermost surfaces of the first elevated regions. The memory device further comprises at least one additional dielectric slot structure, and strings of memory cells. The at least one additional dielectric slot structure is within horizontal boundaries in the first direction of the central portion of the stadium structure of each of the blocks, and horizontally and vertically extends through the second elevated regions of each of the blocks. The strings of memory
cells vertically extend through a portion of each of the blocks neighboring the stadium structure in the first direction.In yet further embodiments, an electronic system comprises an input device, an output device, a processor device operably coupled to the input device and the output device, and a memory device operably coupled to the processor device. The memory device comprises at least one microelectronic device structure comprising a stack structure comprising a vertically alternating sequence of conductive material and insulative material arranged in tiers. The stack structure further comprises at least two blocks separated by at least one intervening dielectric structure. Each of the at least two blocks comprises two elevated regions, a stadium structure, and two additional elevated regions. The stadium structure is interposed between the two elevated regions in a first horizontal direction and comprises staircase structures opposing one another in the first horizontal direction. The staircase structures each have steps comprising horizontal ends of the tiers of the stack structure. The two additional elevated regions neighbor opposing sides of the stadium structure in a second horizontal direction perpendicular to the first horizontal direction. Upper boundaries of the two additional elevated regions are substantially coplanar with upper boundaries of the two elevated regions. The at least one microelectronic device structure further comprises at least one dielectric slot structure within horizontal boundaries of the stadium structure in the first horizontal direction. The at least one dielectric slot structure horizontally and vertically extends through each of the two additional elevated regions of each of the at least two blocks of the stack structure.BRIEF DESCRIPTION OF THE DRAWINGSFIG. 1A is a simplified, partial perspective view of a microelectronic device structure at a processing stage of a method of forming a microelectronic device, in accordance with embodiments of the disclosure. FIG. 1B is a simplified, longitudinal cross-sectional view of a portion A (identified with dashed lines in FIG. 1A) of the microelectronic device structure shown in FIG. 1A.FIG. 2 is a simplified, longitudinal cross-sectional view of the portion A of the microelectronic device structure shown in FIGS. 1A and 1B at another processing stage of the method of forming the microelectronic device following the processing stage of FIGS. 1A and 1B.
FIG. 3 is a simplified, longitudinal cross-sectional view of the portion A of the microelectronic device structure shown in FIGS. 1A and 1B at another processing stage of the method of forming the microelectronic device following the processing stage of FIG. 2.FIG. 4A is a simplified, longitudinal cross-sectional view of the portion A of the microelectronic device structure shown in FIGS. 1A and 1B at another processing stage of the method of forming the microelectronic device following the processing stage of FIG. 3. FIG. 4B is a simplified, longitudinal cross-sectional view of a portion of the microelectronic device structure at the processing stage of FIG. 4A about the dashed line B-B shown in FIG. 4A. FIG. 4C is a simplified, partial top-down view of the microelectronic device structure at the processing stage of FIG. 4A. FIG. 4D shows a magnified view of a portion C (identified with dashed lines in FIG. 4C) of the simplified, partial top-down view of the microelectronic device structure shown in FIG. 4C.FIG. 5A is a simplified, longitudinal cross-sectional view of the portion A of the microelectronic device structure shown in FIGS. 1A and 1B at another processing stage of the method of forming the microelectronic device following the processing stage of FIGS. 4A through 4D. FIG. 5B is a simplified, partial top-down view of the microelectronic device structure at the processing stage of FIG. 5A. FIG. 5C is a simplified, partial perspective view of portions D (identified with dashed lines in FIG. 5B) of the microelectronic device structure shown in FIG. 5B.FIG. 6 is a simplified, partial top-down view of a microelectronic device structure at a processing stage of a method of forming a microelectronic device, in accordance with additional embodiments of the disclosure.FIG. 7 is a simplified, partial top-down view of a microelectronic device structure at a processing stage of a method of forming a microelectronic device, in accordance with yet additional embodiments of the disclosure.FIG. 8 is a simplified, partial top-down view of a microelectronic device structure at a processing stage of a method of forming a microelectronic device, in accordance with further embodiments of the disclosure.FIG. 9 is a simplified, partial top-down view of a microelectronic device structure at a processing stage of a method of forming a microelectronic device, in accordance with yet further embodiments of the disclosure.FIG. 10 is a simplified partial cutaway perspective view of a microelectronic device, in accordance with embodiments of the disclosure.
FIG. 11 is a schematic block diagram illustrating an electronic system, in accordance with embodiments of the disclosure.MODE(S) FOR CARRYING OUT THE INVENTIONThe following description provides specific details, such as material compositions, shapes, and sizes, in order to provide a thorough description of embodiments of the disclosure. However, a person of ordinary skill in the art would understand that the embodiments of the disclosure may be practiced without employing these specific details. Indeed, the embodiments of the disclosure may be practiced in conjunction with conventional microelectronic device fabrication techniques employed in the industry. In addition, the description provided below does not form a complete process flow for manufacturing a microelectronic device (e.g., a memory device). The structures described below do not form a complete microelectronic device. Only those process acts and structures necessary to understand the embodiments of the disclosure are described in detail below. Additional acts to form a complete microelectronic device from the structures may be performed by conventional fabrication techniques.Drawings presented herein are for illustrative purposes only, and are not meant to be actual views of any particular material, component, structure, device, or system. Variations from the shapes depicted in the drawings as a result, for example, of manufacturing techniques and/or tolerances, are to be expected. Thus, embodiments described herein are not to be construed as being limited to the particular shapes or regions as illustrated, but include deviations in shapes that result, for example, from manufacturing. For example, a region illustrated or described as box-shaped may have rough and/or nonlinear features, and a region illustrated or described as round may include some rough and/or linear features. Moreover, sharp angles that are illustrated may be rounded, and vice versa. Thus, the regions illustrated in the figures are schematic in nature, and their shapes are not intended to illustrate the precise shape of a region and do not limit the scope of the present claims. The drawings are not necessarily to scale. Additionally, elements common between figures may retain the same numerical designation.As used herein, a “memory device” means and includes microelectronic devices exhibiting memory functionality, but not necessarily limited to memory functionality. Stated another way, and by way of non-limiting example only, the term “memory device” includes not only conventional memory (e.g., conventional volatile memory, such as conventional
dynamic random access memory (DRAM); conventional non-volatile memory, such as conventional NAND memory), but also includes an application specific integrated circuit (ASIC) (e.g., a system on a chip (SoC)), a microelectronic device combining logic and memory, and a graphics processing unit (GPU) incorporating memory.As used herein, the term “configured” refers to a size, shape, material composition, orientation, and arrangement of one or more of at least one structure and at least one apparatus facilitating operation of one or more of the structure and the apparatus in a pre-determined way.As used herein, the terms “vertical,” “longitudinal,” “horizontal,” and “lateral” are in reference to a major plane of a structure and are not necessarily defined by earth’s gravitational field. A “horizontal” or “lateral” direction is a direction that is substantially parallel to the major plane of the structure, while a “vertical” or “longitudinal” direction is a direction that is substantially perpendicular to the major plane of the structure. The major plane of the structure is defined by a surface of the structure having a relatively large area compared to other surfaces of the structure. With reference to the figures, a “horizontal” or “lateral” direction may be perpendicular to an indicated “Z” axis, and may be parallel to an indicated “X” axis and/or parallel to an indicated “Y” axis; and a “vertical” or “longitudinal” direction may be parallel to an indicated “Z” axis, may be perpendicular to an indicated “X” axis, and may be perpendicular to an indicated “Y” axis.As used herein, features (e.g., regions, structures, devices) described as “neighboring” one another means and includes features of the disclosed identity (or identities) that are located most proximate (e.g., closest to) one another. Additional features (e.g., additional regions, additional structures, additional devices) not matching the disclosed identity (or identities) of the “neighboring” features may be disposed between the “neighboring” features. Put another way, the “neighboring” features may be positioned directly adjacent one another, such that no other feature intervenes between the “neighboring” features; or the “neighboring” features may be positioned indirectly adjacent one another, such that at least one feature having an identity other than that associated with at least one of the “neighboring” features is positioned between the “neighboring” features. Accordingly, features described as “vertically neighboring” one another means and includes features of the disclosed identity (or identities) that are located most vertically proximate (e.g., vertically closest to) one another. Moreover, features described as “horizontally neighboring” one another means and includes features of
the disclosed identity (or identities) that are located most horizontally proximate (e.g., horizontally closest to) one another.As used herein, spatially relative terms, such as “beneath,” “below,” “lower,” “bottom,” “above,” “upper,” “top,” “front,” “rear,” “left,” “right,” and the like, may be used for ease of description to describe one element’s or feature’s relationship to another element(s) or feature(s) as illustrated in the figures. Unless otherwise specified, the spatially relative terms are intended to encompass different orientations of the materials in addition to the orientation depicted in the figures. For example, if materials in the figures are inverted, elements described as “below” or “beneath” or “under” or “on bottom of” other elements or features would then be oriented “above” or “on top of” the other elements or features. Thus, the term “below” can encompass both an orientation of above and below, depending on the context in which the term is used, which will be evident to one of ordinary skill in the art. The materials may be otherwise oriented (e.g., rotated 90 degrees, inverted, flipped) and the spatially relative descriptors used herein interpreted accordingly.As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise.As used herein, “and/or” includes any and all combinations of one or more of the associated listed items.As used herein, the phrase “coupled to” refers to structures operatively connected with each other, such as electrically connected through a direct Ohmic connection or through an indirect connection (e.g., by way of another structure).As used herein, the term “substantially” in reference to a given parameter, property, or condition means and includes to a degree that one of ordinary skill in the art would understand that the given parameter, property, or condition is met with a degree of variance, such as within acceptable tolerances. By way of example, depending on the particular parameter, property, or condition that is substantially met, the parameter, property, or condition may be at least 90.0 percent met, at least 95.0 percent met, at least 99.0 percent met, at least 99.9 percent met, or even 100.0 percent met.As used herein, “about” or “approximately” in reference to a numerical value for a particular parameter is inclusive of the numerical value and a degree of variance from the numerical value that one of ordinary skill in the art would understand is within acceptable tolerances for the particular parameter. For example, “about” or “approximately” in reference to a numerical value may include additional numerical values within a range of from 90.0
percent to 110.0 percent of the numerical value, such as within a range of from 95.0 percent to 105.0 percent of the numerical value, within a range of from 97.5 percent to 102.5 percent of the numerical value, within a range of from 99.0 percent to 101.0 percent of the numerical value, within a range of from 99.5 percent to 100.5 percent of the numerical value, or within a range of from 99.9 percent to 100.1 percent of the numerical value (a short illustrative sketch of these tolerance conventions follows the definitions below).As used herein, “conductive material” means and includes electrically conductive material such as one or more of a metal (e.g., tungsten (W), titanium (Ti), molybdenum (Mo), niobium (Nb), vanadium (V), hafnium (Hf), tantalum (Ta), chromium (Cr), zirconium (Zr), iron (Fe), ruthenium (Ru), osmium (Os), cobalt (Co), rhodium (Rh), iridium (Ir), nickel (Ni), palladium (Pd), platinum (Pt), copper (Cu), silver (Ag), gold (Au), aluminum (Al)), an alloy (e.g., a Co-based alloy, an Fe-based alloy, an Ni-based alloy, an Fe- and Ni-based alloy, a Co- and Ni-based alloy, an Fe- and Co-based alloy, a Co- and Ni- and Fe-based alloy, an Al-based alloy, a Cu-based alloy, a magnesium (Mg)-based alloy, a Ti-based alloy, a steel, a low-carbon steel, a stainless steel), a conductive metal-containing material (e.g., a conductive metal nitride, a conductive metal silicide, a conductive metal carbide, a conductive metal oxide), and a conductively doped semiconductor material (e.g., conductively-doped polysilicon, conductively-doped germanium (Ge), conductively-doped silicon germanium (SiGe)). In addition, a “conductive structure” means and includes a structure formed of and including conductive material.As used herein, “insulative material” means and includes electrically insulative material, such as one or more of at least one dielectric oxide material (e.g., one or more of a silicon oxide (SiOx), phosphosilicate glass, borosilicate glass, borophosphosilicate glass, fluorosilicate glass, an aluminum oxide (AlOx), a hafnium oxide (HfOx), a niobium oxide (NbOx), a titanium oxide (TiOx), a zirconium oxide (ZrOx), a tantalum oxide (TaOx), and a magnesium oxide (MgOx)), at least one dielectric nitride material (e.g., a silicon nitride (SiNy)), at least one dielectric oxynitride material (e.g., a silicon oxynitride (SiOxNy)), and at least one dielectric carboxynitride material (e.g., a silicon carboxynitride (SiOxCzNy)). Formulae including one or more of “x”, “y”, and “z” herein (e.g., SiOx, AlOx, HfOx, NbOx, TiOx, SiNy, SiOxNy, SiOxCzNy) represent a material that contains an average ratio of “x” atoms of one element, “y” atoms of another element, and “z” atoms of an additional element (if any) for every one atom of another element (e.g., Si, Al, Hf, Nb, Ti). As the formulae are representative of relative atomic ratios and not strict chemical structure, an insulative material may comprise one or more stoichiometric compounds and/or one or more non-stoichiometric
compounds, and values of “x”, “y”, and “z” (if any) may be integers or may be non-integers. As used herein, the term “non-stoichiometric compound” means and includes a chemical compound with an elemental composition that cannot be represented by a ratio of well-defined natural numbers and is in violation of the law of definite proportions. In addition, an “insulative structure” means and includes a structure formed of and including insulative material.As used herein, the term “homogeneous” means relative amounts of elements included in a feature (e.g., a material, a structure) do not vary throughout different portions (e.g., different horizontal portions, different vertical portions) of the feature. Conversely, as used herein, the term “heterogeneous” means relative amounts of elements included in a feature (e.g., a material, a structure) vary throughout different portions of the feature. If a feature is heterogeneous, amounts of one or more elements included in the feature may vary stepwise (e.g., change abruptly), or may vary continuously (e.g., change progressively, such as linearly, parabolically) throughout different portions of the feature. The feature may, for example, be formed of and include a stack of at least two different materials.Unless the context indicates otherwise, the materials described herein may be formed by any suitable technique including, but not limited to, spin coating, blanket coating, chemical vapor deposition (CVD), plasma enhanced CVD (PECVD), atomic layer deposition (ALD), plasma enhanced ALD (PEALD), physical vapor deposition (PVD) (e.g., sputtering), or epitaxial growth. Depending on the specific material to be formed, the technique for depositing or growing the material may be selected by a person of ordinary skill in the art. In addition, unless the context indicates otherwise, removal of materials described herein may be accomplished by any suitable technique including, but not limited to, etching (e.g., dry etching, wet etching, vapor etching), ion milling, abrasive planarization (e.g., chemical-mechanical planarization (CMP)), or other known methods.FIG. 1A through FIG. 5C are various views (described in further detail below) illustrating a microelectronic device structure at different processing stages of a method of forming a microelectronic device (e.g., a memory device, such as a 3D NAND Flash memory device), in accordance with embodiments of the disclosure. With the description provided below, it will be readily apparent to one of ordinary skill in the art that the methods described herein may be used for forming various devices. In other words, the methods of the disclosure may be used whenever it is desired to form a microelectronic device.
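Restating the quantitative conventions defined above for “substantially” and “about”/“approximately” as a short numeric sketch may be helpful. The helper names and default thresholds below are invented for this sketch; the percentage tiers come directly from the definitions above.

```python
# Illustrative restatement of the "about"/"approximately" and "substantially"
# conventions defined above. The percentage tiers come directly from the text;
# the function names and defaults are invented for this sketch.

def is_about(value: float, target: float, band_percent: float = 10.0) -> bool:
    """True when `value` falls within `band_percent` of `target`; the default
    of 10.0 corresponds to the 90.0 percent to 110.0 percent range."""
    low = target * (1.0 - band_percent / 100.0)
    high = target * (1.0 + band_percent / 100.0)
    return low <= value <= high

def substantially_met(degree_met_percent: float, threshold: float = 90.0) -> bool:
    """True when a parameter is met to at least `threshold` percent; the
    definition contemplates 90.0, 95.0, 99.0, 99.9, or 100.0 percent."""
    return degree_met_percent >= threshold

print(is_about(104.9, 100.0))        # True: inside the default 90.0-110.0 band
print(is_about(104.9, 100.0, 2.5))   # False: outside the 97.5-102.5 band
print(substantially_met(99.9))       # True at the default 90.0 threshold
```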
FIG. 1A depicts a simplified, partial perspective view of a microelectronic device structure 100. As shown in FIG. 1A, the microelectronic device structure 100 may be formed to include a preliminary stack structure 102 including a vertically alternating (e.g., in a Z-direction) sequence of insulative material 104 and sacrificial material 106 arranged in tiers 108. Each of the tiers 108 of the preliminary stack structure 102 may individually include the sacrificial material 106 vertically neighboring (e.g., directly vertically adjacent) the insulative material 104. In addition, the preliminary stack structure 102 may be divided (e.g., segmented, partitioned) into preliminary blocks 110 separated from one another by slots 112 (e.g., slits, openings, trenches). The slots 112 may vertically extend (e.g., in the Z-direction) completely through the preliminary stack structure 102. Additional features (e.g., materials, structures) of the preliminary stack structure 102 (including the preliminary blocks 110 thereof) are described in further detail below. FIG. 1B is a simplified, longitudinal cross-sectional view of a portion A (identified with a dashed box in FIG. 1A) of the microelectronic device structure 100 at the processing stage depicted in FIG. 1A.The insulative material 104 of each of the tiers 108 of the preliminary stack structure 102 may be formed of and include at least one dielectric material, such as one or more of at least one dielectric oxide material (e.g., one or more of SiOx, phosphosilicate glass, borosilicate glass, borophosphosilicate glass, fluorosilicate glass, AlOx, HfOx, NbOx, TiOx, ZrOx, TaOx, and MgOx), at least one dielectric nitride material (e.g., SiNy), at least one dielectric oxynitride material (e.g., SiOxNy), and at least one dielectric carboxynitride material (e.g., SiOxCzNy). In some embodiments, the insulative material 104 of each of the tiers 108 of the preliminary stack structure 102 is formed of and includes a dielectric oxide material, such as SiOx (e.g., SiO2). The insulative material 104 of each of the tiers 108 may be substantially homogeneous, or the insulative material 104 of one or more (e.g., each) of the tiers 108 may be heterogeneous.The sacrificial material 106 of each of the tiers 108 of the preliminary stack structure 102 may be formed of and include at least one material (e.g., at least one insulative material) that may be selectively removed relative to the insulative material 104. The sacrificial material 106 may be selectively etchable relative to the insulative material 104 during common (e.g., collective, mutual) exposure to a first etchant; and the insulative material 104 may be selectively etchable relative to the sacrificial material 106 during common exposure to a second, different etchant. As used herein, a material is “selectively etchable”
relative to another material if the material exhibits an etch rate that is at least about five times (5x) greater than the etch rate of another material, such as about ten times (10x) greater, about twenty times (20x) greater, or about forty times (40x) greater. By way of non-limiting example, depending on the material composition of the insulative material 104, the sacrificial material 106 may be formed of and include one or more of at least one dielectric oxide material (e.g., one or more of SiOx, phosphosilicate glass, borosilicate glass, borophosphosilicate glass, fluorosilicate glass, AlOx, HfOx, NbOx, TiOx, ZrOx, TaOx, and MgOx), at least one dielectric nitride material (e.g., SiNy), at least one dielectric oxynitride material (e.g., SiOxNy), at least one dielectric oxycarbide material (e.g., SiOxCy), at least one hydrogenated dielectric oxycarbide material (e.g., SiCxOyHz), at least one dielectric carboxynitride material (e.g., SiOxCzNy), and at least one semiconductive material (e.g., polycrystalline silicon). In some embodiments, the sacrificial material 106 of each of the tiers 108 of the preliminary stack structure 102 is formed of and includes a dielectric nitride material, such as SiNy (e.g., Si3N4). The sacrificial material 106 may, for example, be selectively etchable relative to the insulative material 104 during common exposure to a wet etchant comprising phosphoric acid (H3PO4).The preliminary stack structure 102 may be formed to include any desired number of the tiers 108. By way of non-limiting example, the preliminary stack structure 102 may be formed to include greater than or equal to sixteen (16) of the tiers 108, such as greater than or equal to thirty-two (32) of the tiers 108, greater than or equal to sixty-four (64) of the tiers 108, greater than or equal to one hundred and twenty-eight (128) of the tiers 108, or greater than or equal to two hundred and fifty-six (256) of the tiers 108.Still referring to FIG. 1A, the preliminary blocks 110 of the preliminary stack structure 102 may horizontally extend parallel in an X-direction (e.g., a first horizontal direction). As used herein, the term “parallel” means substantially parallel. Horizontally neighboring preliminary blocks 110 of the preliminary stack structure 102 may be separated from one another in a Y-direction (e.g., a second horizontal direction) orthogonal to the X-direction by the slots 112. The slots 112 may also horizontally extend parallel in the X-direction. Each of the preliminary blocks 110 of the preliminary stack structure 102 may exhibit substantially the same geometric configuration (e.g., substantially the same dimensions and substantially the same shape) as each other of the preliminary blocks 110, or one or more of the preliminary blocks 110 may exhibit a different geometric configuration (e.g., one or more different dimensions and/or a different shape) than one or more other of the
preliminary blocks 110. In addition, each pair of horizontally neighboring preliminary blocks 110 of the preliminary stack structure 102 may be horizontally separated from one another by substantially the same distance (e.g., corresponding to a width in the Y-direction of each of the slots 112) as each other pair of horizontally neighboring preliminary blocks 110 of the preliminary stack structure 102, or at least one pair of horizontally neighboring preliminary blocks 110 of the preliminary stack structure 102 may be horizontally separated from one another by a different distance than that separating at least one other pair of horizontally neighboring preliminary blocks 110 of the preliminary stack structure 102. In some embodiments, the preliminary blocks 110 of the preliminary stack structure 102 are substantially uniformly (e.g., substantially non-variably, substantially equally, substantially consistently) sized, shaped, and spaced relative to one another.As shown in FIG. 1A, each preliminary block 110 of the preliminary stack structure 102 may individually include stadium structures 114, crest regions 122 (e.g., elevated regions), and bridge regions 124 (e.g., additional elevated regions). The stadium structures 114 may be distributed throughout and substantially confined within a horizontal area of the preliminary block 110. The crest regions 122 may be horizontally interposed between stadium structures 114 horizontally neighboring one another in the X-direction. The bridge regions 124 may horizontally neighbor opposing sides of individual stadium structures 114 in the Y-direction, and may horizontally extend from and between crest regions 122 horizontally neighboring one another in the X-direction. In FIG. 1A, for clarity and ease of understanding the drawings and associated description, portions (e.g., some of the bridge regions 124 horizontally neighboring first sides of the stadium structures 114 in the Y-direction) of one of the preliminary blocks 110 of the preliminary stack structure 102 are depicted as transparent to more clearly show the stadium structures 114 distributed within the preliminary block 110.Still referring to FIG. 1A, at least some (e.g., each) of the stadium structures 114 within an individual preliminary block 110 of the preliminary stack structure 102 may be positioned at different vertical elevations in the Z-direction than one another. For example, as depicted in FIG. 1A, an individual preliminary block 110 may include a first stadium structure 114A, a second stadium structure 114B at a relatively lower vertical position (e.g., in the Z-direction) within the preliminary block 110 than the first stadium structure 114A, a third stadium structure 114C at a relatively lower vertical position within the preliminary block 110 than the second stadium structure 114B, and a fourth stadium
structure 114D at a relatively lower vertical position within the preliminary block 110 than the third stadium structure 114C. In addition, stadium structures 114 may be substantially uniformly (e.g., equally, evenly) horizontally spaced apart from one another. In additional embodiments, one or more blocks 110 of the preliminary stack structure 102 may individually include a different quantity of stadium structures 114 and/or a different distribution of stadium structures 114 than that depicted in FIG. 1A. For example, an individual preliminary block 110 of the preliminary stack structure 102 may include greater than four (4) of the stadium structures 114 (e.g., greater than or equal to five (5) of the stadium structures 114, greater than or equal to ten (10) of the stadium structures 114, greater than or equal to twenty-five (25) of the stadium structures 114, greater than or equal to fifty (50) of the stadium structures 114), or less than four (4) of the stadium structures 114 (e.g., less than or equal to three (3) of the stadium structures 114, less than or equal to two (2) of the stadium structures 114, only one (1) of the stadium structures 114). As another example, within an individual preliminary block 110, stadium structures 114 may be at least partially non-uniformly (e.g., non-equally, non-evenly) horizontally spaced, such that at least one of the stadium structures 114 is separated from at least two other of the stadium structures 114 horizontally neighboring (e.g., in the X-direction) the at least one stadium structure 114 by different (e.g., non-equal) distances. As an additional non-limiting example, within an individual preliminary block 110, vertical positions (e.g., in the Z-direction) of the stadium structures 114 may vary in a different manner (e.g., may alternate between relatively deeper and relatively shallower vertical positions) than that depicted in FIG. 1A.Each stadium structure 114 may include opposing staircase structures 116, and a central region 117 horizontally interposed between (e.g., in the X-direction) the opposing staircase structures 116. The opposing staircase structures 116 of each stadium structure 114 may include a forward staircase structure 116A and a reverse staircase structure 116B. A phantom line extending from a top of the forward staircase structure 116A to a bottom of the forward staircase structure 116A may have a positive slope, and another phantom line extending from a top of the reverse staircase structure 116B to a bottom of the reverse staircase structure 116B may have a negative slope. In additional embodiments, one or more of the stadium structures 114 may individually exhibit a different configuration than that depicted in FIG. 1A. As a non-limiting example, at least one stadium structure 114 may be modified to include a forward staircase structure 116A but not a reverse staircase
structure 116B (e.g., the reverse staircase structure 116B may be absent), or at least one stadium structure 114 may be modified to include a reverse staircase structure 116B but not a forward staircase structure 116A (e.g., the forward staircase structure 116A may be absent). In such embodiments, the central region 117 horizontally neighbors a bottom of the forward staircase structure 116A (e.g., if the reverse staircase structure 116B is absent), or horizontally neighbors a bottom of the reverse staircase structure 116B (e.g., if the forward staircase structure 116A is absent).The opposing staircase structures 116 (e.g., the forward staircase structure 116A and the reverse staircase structure 116B) of an individual stadium structure 114 each include steps 118 defined by edges (e.g., horizontal ends) of the tiers 108 of the preliminary stack structure 102 within a horizontal area of an individual preliminary block 110 of the preliminary stack structure 102. For the opposing staircase structures 116 of an individual stadium structure 114, each step 118 of the forward staircase structure 116A may have a counterpart step 118 within the reverse staircase structure 116B having substantially the same geometric configuration (e.g., shape, dimensions), vertical position (e.g., in the Z-direction), and horizontal distance (e.g., in the X-direction) from a horizontal center (e.g., in the X-direction) of the central region 117 of the stadium structure 114 (a schematic sketch of this counterpart-step symmetry is provided below). In additional embodiments, at least one step 118 of the forward staircase structure 116A does not have a counterpart step 118 within the reverse staircase structure 116B having substantially the same geometric configuration (e.g., shape, dimensions), vertical position (e.g., in the Z-direction), and/or horizontal distance (e.g., in the X-direction) from the horizontal center (e.g., in the X-direction) of the central region 117 of the stadium structure 114; and/or at least one step 118 of the reverse staircase structure 116B does not have a counterpart step 118 within the forward staircase structure 116A having substantially the same geometric configuration (e.g., shape, dimensions), vertical position (e.g., in the Z-direction), and/or horizontal distance (e.g., in the X-direction) from the horizontal center (e.g., in the X-direction) of the central region 117 of the stadium structure 114.Each of the stadium structures 114 within an individual preliminary block 110 of the preliminary stack structure 102 may individually include a desired quantity of steps 118. Each of the stadium structures 114 may include substantially the same quantity of steps 118 as each other of the stadium structures 114, or at least one of the stadium structures 114 may include a different quantity of steps 118 than at least one other of the stadium structures 114. In some embodiments, at least one of the stadium structures 114 includes a different (e.g., greater,
lower) quantity of steps 118 than at least one other of the stadium structures 114. As shown in FIG. 1A, in some embodiments, the steps 118 of each of the stadium structures 114 are arranged in order, such that steps 118 directly horizontally adjacent (e.g., in the X-direction) one another correspond to tiers 108 of the preliminary stack structure 102 directly vertically adjacent (e.g., in the Z-direction) one another. In additional embodiments, the steps 118 of at least one of the stadium structures 114 are arranged out of order, such that at least some steps 118 of the stadium structure 114 directly horizontally adjacent (e.g., in the X-direction) one another correspond to tiers 108 of the preliminary stack structure 102 not directly vertically adjacent (e.g., in the Z-direction) one another.With continued reference to FIG. 1A, for an individual stadium structure 114, the central region 117 thereof may horizontally intervene (e.g., in the X-direction) between and separate the forward staircase structure 116A thereof from the reverse staircase structure 116B thereof. The central region 117 may horizontally neighbor a vertically lowermost step 118 of the forward staircase structure 116A, and may also horizontally neighbor a vertically lowermost step 118 of the reverse staircase structure 116B. The central region 117 of an individual stadium structure 114 may have any desired horizontal dimensions. In addition, within an individual preliminary block 110 of the preliminary stack structure 102, the central region 117 of each of the stadium structures 114 may have substantially the same horizontal dimensions as the central region 117 of each other of the stadium structures 114, or the central region 117 of at least one of the stadium structures 114 may have different horizontal dimensions than the central region 117 of at least one other of the stadium structures 114.For each preliminary block 110 of the preliminary stack structure 102, each stadium structure 114 (including the forward staircase structure 116A, the reverse staircase structure 116B, and the central region 117 thereof) within the preliminary block 110 may individually partially define boundaries (e.g., horizontal boundaries, vertical boundaries) of a filled trench 120 vertically extending (e.g., in the Z-direction) through the preliminary block 110. The crest regions 122 and the bridge regions 124 horizontally neighboring an individual stadium structure 114 may also partially define the boundaries of the filled trench 120 associated with the stadium structure 114. The filled trench 120 may only vertically extend through tiers 108 of the preliminary stack structure 102 defining the forward staircase structure 116A and the reverse staircase structure 116B of the stadium structure 114; or may also vertically extend through additional tiers 108 of the preliminary stack structure 102 not defining the forward staircase
With continued reference to FIG. 1A, for an individual stadium structure 114, the central region 117 thereof may horizontally intervene (e.g., in the X-direction) between and separate the forward staircase structure 116A thereof from the reverse staircase structure 116B thereof. The central region 117 may horizontally neighbor a vertically lowermost step 118 of the forward staircase structure 116A, and may also horizontally neighbor a vertically lowermost step 118 of the reverse staircase structure 116B. The central region 117 of an individual stadium structure 114 may have any desired horizontal dimensions. In addition, within an individual preliminary block 110 of the preliminary stack structure 102, the central region 117 of each of the stadium structures 114 may have substantially the same horizontal dimensions as the central region 117 of each other of the stadium structures 114, or the central region 117 of at least one of the stadium structures 114 may have different horizontal dimensions than the central region 117 of at least one other of the stadium structures 114.

For each preliminary block 110 of the preliminary stack structure 102, each stadium structure 114 (including the forward staircase structure 116A, the reverse staircase structure 116B, and the central region 117 thereof) within the preliminary block 110 may individually partially define boundaries (e.g., horizontal boundaries, vertical boundaries) of a filled trench 120 vertically extending (e.g., in the Z-direction) through the preliminary block 110. The crest regions 122 and the bridge regions 124 horizontally neighboring an individual stadium structure 114 may also partially define the boundaries of the filled trench 120 associated with the stadium structure 114. The filled trench 120 may only vertically extend through tiers 108 of the preliminary stack structure 102 defining the forward staircase structure 116A and the reverse staircase structure 116B of the stadium structure 114; or may also vertically extend through additional tiers 108 of the preliminary stack structure 102 not defining the forward staircase structure 116A and the reverse staircase structure 116B of the stadium structure 114, such as additional tiers 108 of the preliminary stack structure 102 vertically overlying the stadium structure 114. Edges of the additional tiers 108 of the preliminary stack structure 102 may, for example, define one or more additional stadium structures vertically overlying and horizontally offset from the stadium structure 114. The filled trench 120 may be filled with one or more dielectric materials, as described in further detail below with reference to FIG. 1B.

Still referring to FIG. 1A, for each preliminary block 110 of the preliminary stack structure 102, the crest regions 122 (which may also be referred to as “elevated regions” or “plateau regions”) and the bridge regions 124 (which may also be referred to as “additional elevated regions” or “additional plateau regions”) thereof may comprise portions of the preliminary block 110 remaining following the formation of the stadium structures 114. Within each preliminary block 110, the crest regions 122 and the bridge regions 124 thereof may define horizontal boundaries (e.g., in the X-direction and in the Y-direction) of unremoved portions of the tiers 108 of the preliminary stack structure 102.

As shown in FIG. 1A, the crest regions 122 of an individual preliminary block 110 of the preliminary stack structure 102 may intervene between and separate stadium structures 114 horizontally neighboring one another in the X-direction. For example, one of the crest regions 122 may intervene between and separate the first stadium structure 114A and the second stadium structure 114B; an additional one of the crest regions 122 may intervene between and separate the second stadium structure 114B and the third stadium structure 114C; and a further one of the crest regions 122 may intervene between and separate the third stadium structure 114C and the fourth stadium structure 114D. A vertical height of the crest regions 122 in the Z-direction may be substantially equal to a maximum vertical height of the preliminary block 110 in the Z-direction; and a horizontal width of the crest regions 122 in the Y-direction may be substantially equal to a maximum horizontal width of the preliminary block 110 in the Y-direction. In addition, each of the crest regions 122 may individually exhibit a desired horizontal length in the X-direction. Each of the crest regions 122 of an individual preliminary block 110 of the preliminary stack structure 102 may exhibit substantially the same horizontal length in the X-direction as each other of the crest regions 122 of the preliminary block 110; or at least one of the crest regions 122 of the preliminary block 110 may exhibit a different horizontal length in the X-direction than at least one other of the crest regions 122 of the preliminary block 110.
As shown in FIG. 1A, the bridge regions 124 of an individual preliminary block 110 of the preliminary stack structure 102 may intervene between and separate the stadium structures 114 of the preliminary block 110 from the slots 112 horizontally neighboring the preliminary block 110 in the Y-direction. For example, for each stadium structure 114 within an individual preliminary block 110 of the preliminary stack structure 102, a first bridge region 124A may be horizontally interposed in the Y-direction between a first side of the stadium structure 114 and a first of the slots 112 horizontally neighboring the preliminary block 110; and a second bridge region 124B may be horizontally interposed in the Y-direction between a second side of the stadium structure 114 and a second of the slots 112 horizontally neighboring the preliminary block 110. The first bridge region 124A and the second bridge region 124B may horizontally extend in parallel in the X-direction. In addition, the first bridge region 124A and the second bridge region 124B may each horizontally extend from and between crest regions 122 of the preliminary block 110 horizontally neighboring one another in the X-direction. The bridge regions 124 of the preliminary block 110 may be integral and continuous with the crest regions 122 of the preliminary block 110. Upper boundaries (e.g., upper surfaces) of the bridge regions 124 may be substantially coplanar with upper boundaries of the crest regions 122. A vertical height of the bridge regions 124 in the Z-direction may be substantially equal to a maximum vertical height of the preliminary block 110 in the Z-direction. In addition, each of the bridge regions 124 (including each first bridge region 124A and each second bridge region 124B) may individually exhibit a desired horizontal width in the Y-direction and a desired horizontal length in the X-direction. Each of the bridge regions 124 of the preliminary block 110 may exhibit substantially the same horizontal length in the X-direction as each other of the bridge regions 124 of the preliminary block 110; or at least one of the bridge regions 124 of the preliminary block 110 may exhibit a different horizontal length in the X-direction than at least one other of the bridge regions 124 of the preliminary block 110. In addition, each of the bridge regions 124 of the preliminary block 110 may exhibit substantially the same horizontal width in the Y-direction as each other of the bridge regions 124 of the preliminary block 110; or at least one of the bridge regions 124 of the preliminary block 110 may exhibit a different horizontal width in the Y-direction than at least one other of the bridge regions 124 of the preliminary block 110.

For each preliminary block 110 of the preliminary stack structure 102, the bridge regions 124 thereof horizontally extend around the filled trenches 120 of the preliminary block 110.
As described in further detail below, following subsequent processing (e.g., so-called “replacement gate” or “gate last” processing), some of the bridge regions 124 of the preliminary block 110 may be employed to form continuous conductive paths extending from and between horizontally neighboring crest regions 122 of the preliminary block 110. As also described in further detail below, following such subsequent (e.g., replacement gate) processing, at least the bridge regions 124 (e.g., the first bridge region 124A and the second bridge region 124B) horizontally neighboring the first stadium structure 114A in the Y-direction may be further acted upon (e.g., segmented) to disrupt (e.g., break) at least a portion of the continuous conductive paths extending from and between the crest regions 122 horizontally neighboring the first stadium structure 114A in the X-direction.

As previously described, FIG. 1B is a simplified, longitudinal cross-sectional view of portion A (identified with a dashed box in FIG. 1A) of the microelectronic device structure 100 at the processing stage depicted in FIG. 1A. The portion A encompasses the first stadium structure 114A of an individual preliminary block 110 (FIG. 1A) of the preliminary stack structure 102 (FIG. 1A). The portion A also encompasses parts of the bridge regions 124 (FIG. 1A) horizontally neighboring the first stadium structure 114A in the Y-direction; parts of the crest regions 122 horizontally neighboring the first stadium structure 114A in the X-direction; and the filled trench 120 having boundaries defined by the first stadium structure 114A, and each of the bridge regions 124 and the crest regions 122 horizontally neighboring the first stadium structure 114A. While additional features (e.g., structures, materials) of the microelectronic device structure 100 are described hereinbelow with reference to the portion A of the microelectronic device structure 100, such additional features may also be formed and included in additional portions of the microelectronic device structure 100, including additional portions encompassing additional stadium structures 114 of one or more (e.g., each) of the preliminary blocks 110 (FIG. 1A) of the preliminary stack structure 102 (FIG. 1A) and parts of the bridge regions 124 (FIG. 1A), the crest regions 122, and the filled trenches 120 having boundaries defined by the additional stadium structures 114.

Referring to FIG. 1B, the filled trenches 120 may individually be filled with multiple (e.g., more than one) dielectric materials. For example, as shown in FIG. 1B, each filled trench 120 may include a first dielectric material 126 (e.g., a dielectric liner material), a second dielectric material 128 (e.g., an additional dielectric liner material), and a third dielectric material 130 (e.g., a dielectric fill material). For an individual filled trench 120, the first dielectric material 126 may be formed on or over surfaces (e.g., horizontally extending
surfaces, vertically extending surfaces) of the stadium structure 114 (e.g., the first stadium structure 114A), the crest regions 122, and the bridge regions 124 (FIG. 1A) of the preliminary block 110 (FIG. 1A) defining boundaries (e.g., horizontal boundaries, vertical boundaries) of the filled trench 120; the second dielectric material 128 may be formed on or over the first dielectric material 126; and the third dielectric material 130 may be formed on or over the second dielectric material 128. As depicted in FIG. 1B, one or more (e.g., each) of the first dielectric material 126, the second dielectric material 128, and the third dielectric material 130 may also be formed to extend beyond boundaries (e.g., horizontal boundaries, vertical boundaries) of the filled trenches 120. For example, the first dielectric material 126, the second dielectric material 128, and the third dielectric material 130 may also be formed to extend over uppermost surfaces of the crest regions 122 and/or the bridge regions 124 (FIG. 1A) of individual preliminary blocks 110 (FIG. 1A) of the preliminary stack structure 102 (FIG. 1A) of the microelectronic device structure 100. In additional embodiments, the first dielectric material 126 and the second dielectric material 128 may be omitted (e.g., absent).

The first dielectric material 126 may be employed (e.g., serve) as a barrier material to protect (e.g., mask) the second dielectric material 128 from removal during subsequent processing acts (e.g., subsequent replacement gate processing acts, such as subsequent etching acts), as described in further detail below. The first dielectric material 126 may be formed to have a desired thickness capable of protecting the second dielectric material 128 during the subsequent processing acts. The first dielectric material 126 may be formed to substantially continuously extend on or over surfaces of the preliminary blocks 110 of the preliminary stack structure 102 outside of the horizontal boundaries of the slots 112 (FIG. 1A). For each of the preliminary blocks 110, the first dielectric material 126 may substantially continuously extend on or over surfaces of the opposing staircase structures 116 (e.g., the forward staircase structure 116A and the reverse staircase structure 116B) of each of the stadium structures 114 (e.g., the first stadium structure 114A, the second stadium structure 114B, the third stadium structure 114C, the fourth stadium structure 114D), as well as on or over inner sidewalls of the bridge regions 124 (FIG. 1A) (e.g., the first bridge region 124A, the second bridge region 124B) horizontally neighboring (e.g., in the Y-direction) each of the stadium structures 114. In addition, for each of the preliminary blocks 110, the first dielectric material 126 may also substantially continuously extend on or over upper surfaces of the crest regions 122 and the bridge regions 124. Surfaces of the preliminary blocks 110 (FIG. 1A)
defining the horizontal boundaries of the slots 112 (FIG. 1A) may be substantially free of the first dielectric material 126. The first dielectric material 126 may be omitted (e.g., absent) from the slots 112 (FIG. 1A) between the preliminary blocks 110 (FIG. 1A) of the preliminary stack structure 102 (FIG. 1A).

The first dielectric material 126 may be formed of and include at least one dielectric material having different etch selectivity than the sacrificial material 106 of the tiers 108 of the preliminary stack structure 102. The first dielectric material 126 may also have different etch selectivity than the second dielectric material 128. The first dielectric material 126 may, for example, have etch selectivity substantially similar to that of the insulative material 104 of the tiers 108 of the preliminary stack structure 102. By way of non-limiting example, the first dielectric material 126 may be formed of and include at least one oxygen-containing dielectric material, such as one or more of at least one dielectric oxide material (e.g., one or more of SiOx, phosphosilicate glass, borosilicate glass, borophosphosilicate glass, fluorosilicate glass, AlOx, HfOx, NbOx, and TiOx), at least one dielectric oxynitride material (e.g., SiOxNy), and at least one dielectric carboxynitride material (e.g., SiOxCzNy). In some embodiments, the first dielectric material 126 is formed of and includes SiOx (e.g., SiO2).

The second dielectric material 128 may be employed (e.g., serve) as an etch stop material during subsequent processing acts (e.g., subsequent etching acts) to form openings (e.g., contact openings, contact vias) vertically extending through the third dielectric material 130, as described in further detail below. The second dielectric material 128 may be formed to have a desired thickness capable of protecting the first dielectric material 126 underlying the second dielectric material 128 from removal during the subsequent processing acts. The second dielectric material 128 may be formed to substantially continuously extend on or over the first dielectric material 126. The second dielectric material 128 may be omitted (e.g., absent) from the slots 112 (FIG. 1A) between the preliminary blocks 110 (FIG. 1A) of the preliminary stack structure 102 (FIG. 1A).

The second dielectric material 128 may be formed of and include at least one dielectric material having different etch selectivity than the third dielectric material 130. The second dielectric material 128 may also have different etch selectivity than the first dielectric material 126. The second dielectric material 128 may, for example, have etch selectivity substantially similar to that of the sacrificial material 106 of the tiers 108 of the preliminary stack structure 102. By way of non-limiting example, the second dielectric material 128 may
be formed of and include at least one nitrogen-containing dielectric material, such as at least one dielectric nitride material. In some embodiments, the second dielectric material 128 is formed of and includes SiNy (e.g., Si3N4).

Still referring to FIG. 1B, the third dielectric material 130 may substantially fill portions of the filled trenches 120 unoccupied by the first dielectric material 126 and the second dielectric material 128. The third dielectric material 130 may be formed to substantially continuously extend on or over the second dielectric material 128. The third dielectric material 130 may be omitted (e.g., absent) from the slots 112 (FIG. 1A) between the preliminary blocks 110 (FIG. 1A) of the preliminary stack structure 102 (FIG. 1A). The third dielectric material 130 may be formed to exhibit a substantially planar upper vertical boundary, and a substantially non-planar lower vertical boundary complementary to (e.g., substantially mirroring) a topography thereunder.

The third dielectric material 130 may be formed of and include at least one dielectric material having different etch selectivity than the second dielectric material 128. The third dielectric material 130 may, for example, have etch selectivity substantially similar to that of one or more of the first dielectric material 126 and the insulative material 104 of the tiers 108 of the preliminary stack structure 102. By way of non-limiting example, the third dielectric material 130 may be formed of and include at least one oxygen-containing dielectric material, such as one or more of at least one dielectric oxide material (e.g., one or more of SiOx, phosphosilicate glass, borosilicate glass, borophosphosilicate glass, fluorosilicate glass, AlOx, HfOx, NbOx, and TiOx), at least one dielectric oxynitride material (e.g., SiOxNy), and at least one dielectric carboxynitride material (e.g., SiOxCzNy). In some embodiments, the third dielectric material 130 is formed of and includes SiOx (e.g., SiO2).
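As a schematic aid only, the etch-selectivity relationships among the insulative material 104, the sacrificial material 106, and the first, second, and third dielectric materials 126, 128, and 130 described above may be summarized in a toy lookup. The binary removed/protected outcome and the oxide/nitride class labels in the following Python sketch are simplifications introduced only for illustration; real etch selectivities are continuous:

material_class = {
    "insulative_104":  "oxide",    # e.g., SiOx
    "sacrificial_106": "nitride",  # e.g., SiNy
    "dielectric_126":  "oxide",    # barrier; etches like the insulative material
    "dielectric_128":  "nitride",  # etch stop; etches like the sacrificial material
    "dielectric_130":  "oxide",    # trench fill; etches like dielectric 126
}

def removed_by(etch_selective_to):
    """Features attacked by an etch selective to one material class."""
    return [m for m, c in material_class.items() if c == etch_selective_to]

# A nitride-selective etch (as in replacement gate processing) would attack
# both the sacrificial material and the etch stop; hence the oxide barrier
# (dielectric 126) covering the etch stop (dielectric 128).
assert removed_by("nitride") == ["sacrificial_106", "dielectric_128"]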
As described in further detail below with reference to FIG. 4B, the microelectronic device structure 100 may be formed to further include support pillars vertically extending through the preliminary blocks 110 of the preliminary stack structure 102. The support pillars may be configured and positioned to support the tiers 108 of the preliminary stack structure 102 during subsequent processing (e.g., replacement gate processing) of the microelectronic device structure 100. For example, within each of the preliminary blocks 110, the support pillars may be configured and positioned to impede (e.g., substantially prevent) collapse of portions of the insulative material 104 of the tiers 108 within horizontal areas of the stadium structures 114 during subsequent replacement gate processing acts.

Next, referring to FIG. 2, which is a simplified, longitudinal cross-sectional view of the portion A of the microelectronic device structure 100 following (e.g., subsequent to) the processing stage previously described with reference to FIGS. 1A and 1B, the microelectronic device structure 100 may be subjected to replacement gate processing. The replacement gate processing may at least partially (e.g., substantially) replace the sacrificial material 106 (FIGS. 1A and 1B) of the tiers 108 (FIGS. 1A and 1B) of the preliminary stack structure 102 (FIGS. 1A and 1B) with conductive material 134. The replacement gate processing may convert the preliminary stack structure 102 (FIGS. 1A and 1B) including the preliminary blocks 110 (FIGS. 1A and 1B) into a stack structure 132 including blocks 133. The stack structure 132 may include a vertically alternating (e.g., in the Z-direction) sequence of the insulative material 104 and the conductive material 134 arranged in tiers 136. The stack structure 132 may be divided into the blocks 133, and the shapes and dimensions of the blocks 133 may be substantially the same as the shapes and dimensions of the preliminary blocks 110 (FIGS. 1A and 1B) of the preliminary stack structure 102 (FIGS. 1A and 1B). The slots 112 (FIGS. 1A and 1B) may be interposed between horizontally neighboring (e.g., in the Y-direction) blocks 133 of the stack structure 132.

The conductive material 134 of the tiers 136 of the stack structure 132 may be formed of and include one or more of at least one conductively doped semiconductor material, at least one metal, at least one alloy, and at least one conductive metal-containing material (e.g., at least one conductive metal nitride, at least one conductive metal silicide, at least one conductive metal carbide, at least one conductive metal oxide). In some embodiments, the conductive material 134 is formed of and includes W. Optionally, at least one liner material (e.g., at least one insulative liner material, at least one conductive liner material) may be formed around the conductive material 134. The liner material may, for example, be formed of and include one or more of a metal (e.g., titanium, tantalum), an alloy, a metal nitride (e.g., tungsten nitride, titanium nitride, tantalum nitride), and a metal oxide (e.g., aluminum oxide). In some embodiments, the liner material comprises at least one conductive material employed as a seed material for the formation of the conductive material 134. In some embodiments, the liner material comprises titanium nitride (TiNx, such as TiN). In further embodiments, the liner material further includes aluminum oxide (AlOx, such as Al2O3). As a non-limiting example, for each of the tiers 136 of the stack structure 132, AlOx (e.g., Al2O3) may be formed directly adjacent the insulative material 104, TiNx (e.g., TiN) may be formed directly adjacent the AlOx, and W may be formed directly adjacent the TiNx. For clarity and ease of
understanding the description, the liner material is not illustrated in FIG. 2, but it will be understood that the liner material may be disposed around the conductive material 134.

Within each block 133 of the stack structure 132, the conductive material 134 of one or more relatively vertically higher tier(s) 136A (e.g., upper tiers) may be employed to form upper select gate structures (e.g., drain side select gate (SGD) structures) for upper select transistors (e.g., drain side select transistors) of the block 133, as described in further detail below. The conductive material 134 of the relatively vertically higher tier(s) 136A may be segmented by one or more filled slot(s) (e.g., filled SGD slot(s)) to form the upper select gate structures of the block 133, as also described in further detail below. In some embodiments, within each block 133 of the stack structure 132, the conductive material 134 of each of less than or equal to eight (8) relatively vertically higher tier(s) 136A (e.g., from one (1) relatively vertically higher tier 136A to eight (8) relatively vertically higher tiers 136A) of the stack structure 132 is employed to form upper select gate structures (e.g., SGD structures) for the block 133. In addition, within each block 133 of the stack structure 132, the conductive material 134 of at least some relatively vertically lower tiers 136B vertically underlying the relatively vertically higher tier(s) 136A may be employed to form access line structures (e.g., word line structures) of the block 133, as also described in further detail below. Moreover, within each block 133 of the stack structure 132, the conductive material 134 of at least a vertically lowest tier 136 may be employed to form at least one lower select gate structure (e.g., at least one source side select gate (SGS) structure) for lower select transistors (e.g., source side select transistors) of the block 133, as also described in further detail below.

The replacement gate processing employed to form the stack structure 132 may include treating the microelectronic device structure 100 with at least one wet etchant formulated to selectively remove portions of the sacrificial material 106 (FIGS. 1A and 1B) of the tiers 108 (FIGS. 1A and 1B) of the preliminary stack structure 102 (FIGS. 1A and 1B) through the slots 112 (FIG. 1A) between the preliminary blocks 110 (FIG. 1A). The wet etchant may be selected to remove the portions of the sacrificial material 106 (FIGS. 1A and 1B) without substantially removing portions of the insulative material 104 of the tiers 108 (FIGS. 1A and 1B) of the preliminary stack structure 102 (FIGS. 1A and 1B), and without substantially removing portions of the first dielectric material 126. During the material removal process, the first dielectric material 126 may protect (e.g., mask) the second dielectric material 128 from being removed. In some embodiments wherein the sacrificial material 106 (FIGS. 1A and 1B) comprises a dielectric nitride
material (e.g., SiNy, such as Si3N4) and the insulative material 104 and the first dielectric material 126 comprise a dielectric oxide material (e.g., SiOx, such as SiO2), the sacrificial material 106 (FIGS. 1A and 1B) of the tiers 108 (FIGS. 1A and 1B) of the preliminary stack structure 102 (FIGS. 1A and 1B) is selectively removed using a wet etchant comprising H3PO4. Following the selective removal of the portions of the sacrificial material 106 (FIGS. 1A and 1B), the resulting recesses may be filled with the conductive material 134 to form the stack structure 132 (including the tiers 136 and the blocks 133 thereof). In addition, following the formation of the stack structure 132, the slots 112 (FIG. 1A) between the blocks 133 of the stack structure 132 may be filled with dielectric material to form filled slots (as described in further detail below with reference to FIG. 4B) horizontally interposed between horizontally neighboring blocks 133 of the stack structure 132.
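Purely as an illustrative outline, and not as a process recipe, the replacement gate sequence just described may be summarized in the following Python sketch; the operation names and notes are descriptive placeholders only:

replacement_gate_flow = (
    ("selective wet etch",
     "remove sacrificial material 106 through the slots 112 (e.g., H3PO4 for "
     "a nitride sacrificial material); dielectric 126 masks dielectric 128"),
    ("recess fill",
     "fill the resulting recesses with conductive material 134 (e.g., W, "
     "optionally over an AlOx/TiNx liner), forming the tiers 136"),
    ("slot fill",
     "fill the slots 112 with dielectric material to form filled slot "
     "structures between neighboring blocks 133"),
)

for number, (operation, note) in enumerate(replacement_gate_flow, start=1):
    print(f"{number}. {operation}: {note}")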
Referring next to FIG. 3, which is a simplified, longitudinal cross-sectional view of the portion A of the microelectronic device structure 100 following the processing stage previously described with reference to FIG. 2, portions of the third dielectric material 130, the second dielectric material 128, and the first dielectric material 126 are removed (e.g., etched) to form contact openings 138 (e.g., apertures, vias) vertically extending (e.g., in the Z-direction) therethrough. The contact openings 138 may vertically extend to steps 118 of one or more (e.g., each) of the stadium structures 114, such as steps 118 of the forward staircase structure 116A of one or more of the stadium structures 114 and/or steps 118 of the reverse staircase structure 116B of one or more of the stadium structures 114. A bottom (e.g., lower vertical end) of each contact opening 138 may expose and be defined by an upper surface of the conductive material 134 of an individual tier 136 of the stack structure 132 at an individual step 118 of an individual stadium structure 114 of an individual block 133 of the stack structure 132.

As shown in FIG. 3, within a horizontal area of the first stadium structure 114A (e.g., a vertically uppermost stadium structure 114) within an individual block 133 of the stack structure 132, the contact openings 138 may include first contact openings 138A and second contact openings 138B. Within horizontal boundaries of the block 133, the first contact openings 138A may vertically extend to and terminate at the relatively vertically higher tier(s) 136A of the stack structure 132, and the second contact openings 138B may vertically extend to and terminate at the relatively vertically lower tiers 136B of the stack structure 132. The first contact openings 138A may vertically extend to and partially expose upper select gate structures (e.g., SGD structures) of the block 133 formed by portions of the conductive material 134 of individual relatively vertically higher tier(s) 136A of the stack structure 132. The second contact openings 138B may vertically extend to and partially expose access line structures of the block 133 formed by the conductive material 134 of individual relatively vertically lower tiers 136B of the stack structure 132.

Within each block 133 of the stack structure 132, each contact opening 138 may be formed at a desired horizontal position (e.g., in the X-direction and the Y-direction) on or over one of the steps 118 of one of the stadium structures 114. As described in further detail below with reference to FIG. 4C, in some embodiments, within a horizontal area of the first stadium structure 114A, at least some of the second contact openings 138B are horizontally offset in the Y-direction from at least some of the first contact openings 138A. In FIG. 3, such horizontal offset is depicted by way of dashed lines at the boundaries (e.g., horizontal boundaries, vertical boundaries) of the second contact openings 138B. In addition, individual steps 118 of the first stadium structure 114A (e.g., individual steps 118 of the forward staircase structure 116A thereof, individual steps 118 of the reverse staircase structure 116B thereof) may have a single (e.g., only one) contact opening 138 vertically extending thereto, may have multiple (e.g., more than one) contact openings 138 vertically extending thereto, or may have no contact openings 138 vertically extending thereto.

The contact openings 138 may each individually be formed to exhibit a desired horizontal cross-sectional shape. In some embodiments, each of the contact openings 138 is formed to exhibit a substantially circular horizontal cross-sectional shape. In additional embodiments, one or more (e.g., each) of the contact openings 138 exhibits a non-circular cross-sectional shape, such as one or more of an oblong cross-sectional shape, an elliptical cross-sectional shape, a square cross-sectional shape, a rectangular cross-sectional shape, a tear drop cross-sectional shape, a semicircular cross-sectional shape, a tombstone cross-sectional shape, a crescent cross-sectional shape, a triangular cross-sectional shape, a kite cross-sectional shape, and an irregular cross-sectional shape. In addition, each of the contact openings 138 may be formed to exhibit substantially the same horizontal cross-sectional dimensions (e.g., substantially the same horizontal diameter), or at least one of the contact openings 138 may be formed to exhibit one or more different horizontal cross-sectional dimensions (e.g., a different horizontal diameter) than at least one other of the contact openings 138. In some embodiments, all of the contact openings 138 are formed to exhibit substantially the same horizontal cross-sectional dimensions.
The contact openings 138 may be formed using multiple material removal acts. For example, portions of the third dielectric material 130 may be removed using a first material removal act (e.g., a first etching process) to form preliminary contact openings vertically extending to and exposing portions of the second dielectric material 128; and then portions of the second dielectric material 128 and the first dielectric material 126 within horizontal boundaries of the preliminary contact openings may be removed using a second material removal act (e.g., a second etching process) to vertically extend the preliminary contact openings to the steps 118 of the stadium structures 114 and form the contact openings 138. As a non-limiting example, the first material removal act may comprise a first etching process (e.g., anisotropic dry etching, such as one or more of RIE, deep RIE, plasma etching, reactive ion beam etching, and chemically assisted ion beam etching); and the second material removal act may comprise a second, different etching process (e.g., a so-called “punch through” etch). During the first etching process, the second dielectric material 128 may serve as a so-called “etch stop” material to protect underlying portions of the first dielectric material 126 and the stack structure 132 from removal.
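As a toy model only of the two-stage material removal just described, the following Python sketch treats the first etch as halting on the etch stop material and the punch-through etch as opening the remaining liners; the layer names and binary etch behavior are simplifying assumptions introduced for illustration:

layers_above_step = ["dielectric_130", "dielectric_128", "dielectric_126"]

def open_contact(layers, etch_stop="dielectric_128"):
    """Split the layer stack into the portions removed by each etch stage."""
    stage1 = []
    for layer in layers:          # stage 1: anisotropic dry etch of the fill
        if layer == etch_stop:    # halts on the etch stop material
            break
        stage1.append(layer)
    stage2 = layers[len(stage1):]  # stage 2: punch-through etch of 128 and 126
    return stage1, stage2

first_stage, second_stage = open_contact(layers_above_step)
assert first_stage == ["dielectric_130"]
assert second_stage == ["dielectric_128", "dielectric_126"]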
Referring next to FIG. 4A, which is a longitudinal cross-sectional view of the portion A of the microelectronic device structure 100 subsequent to the processing stage previously described with reference to FIG. 3, contact structures 140 may be formed within the contact openings 138 (FIG. 3). The contact structures 140 may be substantially confined within boundaries (e.g., horizontal boundaries, vertical boundaries) of the contact openings 138 (FIG. 3), and may substantially fill the contact openings 138 (FIG. 3). Each contact structure 140 may have a geometric configuration (e.g., shape, dimensions) corresponding to (e.g., substantially the same as) a geometric configuration of the contact opening 138 (FIG. 3) filled with the contact structure 140. As shown in FIG. 4A, each contact structure 140 may have an uppermost vertical boundary (e.g., an uppermost surface) substantially coplanar with an uppermost vertical boundary (e.g., an uppermost surface) of the third dielectric material 130, and a lowermost vertical boundary (e.g., a lowermost surface) vertically adjacent an uppermost vertical boundary (e.g., an uppermost surface) of the conductive material 134 of an individual tier 136 of the stack structure 132. Each contact structure 140 may individually contact (e.g., physically contact, electrically contact) the conductive material 134 of the individual tier 136 of the stack structure 132 at an individual step 118 of an individual stadium structure 114 of an individual block 133 of the stack structure 132.

As shown in FIG. 4A, within a horizontal area of the first stadium structure 114A (e.g., a vertically uppermost stadium structure 114) within an individual block 133 of the stack structure 132, the contact structures 140 may include first contact structures 140A filling the first contact openings 138A (FIG. 3), and second contact structures 140B filling the second contact openings 138B. Within horizontal boundaries of the block 133, the first contact structures 140A may vertically extend to and terminate at the relatively vertically higher tier(s) 136A of the stack structure 132, and the second contact structures 140B may vertically extend to and terminate at the relatively vertically lower tiers 136B of the stack structure 132. The first contact structures 140A may vertically extend to and physically contact upper select gate structures (e.g., SGD structures) of the block 133 formed by portions of the conductive material 134 of individual relatively vertically higher tier(s) 136A of the stack structure 132. The second contact structures 140B may vertically extend to and physically contact local access line structures of the block 133 formed by the conductive material 134 of individual relatively vertically lower tiers 136B of the stack structure 132.

The contact structures 140 may be formed of and include conductive material. As a non-limiting example, the contact structures 140 may be formed of and include one or more of at least one metal, at least one alloy, and at least one conductive metal-containing material (e.g., a conductive metal nitride, a conductive metal silicide, a conductive metal carbide, a conductive metal oxide). A material composition of the contact structures 140 may be substantially the same as a material composition of the conductive material 134 of the tiers 136 of the stack structure 132, or the material composition of the contact structures 140 may be different than the material composition of the conductive material 134 of the tiers 136 of the stack structure 132. In some embodiments, the contact structures 140 are individually formed of and include W. The contact structures 140 may individually be homogeneous, or the contact structures 140 may individually be heterogeneous.

The contact structures 140 may be formed by forming (e.g., non-conformally depositing, such as through one or more of a PVD process and a non-conformal CVD process) conductive material inside and outside of the contact openings 138 (FIG. 3), and then removing (e.g., through an abrasive planarization process, such as a CMP process) portions of the conductive material overlying an uppermost vertical boundary (e.g., an uppermost surface) of the third dielectric material 130.

FIG. 4B is a simplified, longitudinal cross-sectional view of the microelectronic device structure 100 at the processing stage described above with reference to FIG. 4A, about
the dashed line B-B illustrated in FIG. 4A. As shown in FIG. 4B, within each block 133 of the stack structure 132, the bridge regions 124 (including the first bridge region 124A and the second bridge region 124B) of the block 133 may horizontally intervene in the Y-direction between the stadium structures 114 (and, hence, the filled trenches 120) of the block 133 and filled slot structures 142 (e.g., dielectric-filled slots) horizontally neighboring the block 133 in the Y-direction. The filled slot structures 142 may comprise the slots 112 (FIG. 1A) filled with at least one dielectric material (e.g., at least one dielectric oxide material, such as SiOx; at least one dielectric nitride material, such as SiNy) following the formation of the stack structure 132 from the preliminary stack structure 102 (FIGS. 1A and 1B). Within each block 133 of the stack structure 132, the conductive material 134 of each tier 136 of the stack structure 132 having horizontal ends defining an individual stadium structure 114 may continuously horizontally extend in the X-direction across sides of the stadium structure 114 opposing one another in the Y-direction to form conductive paths extending from and between the crest regions 122 (FIG. 1A) of the block 133 horizontally neighboring the stadium structure 114 in the X-direction.

As shown in FIG. 4B, inner horizontal boundaries (e.g., inner sidewalls) of each of the bridge regions 124 (e.g., each of the first bridge regions 124A, each of the second bridge regions 124B) of each block 133 of the stack structure 132 may be oriented substantially non-perpendicular to uppermost vertical boundaries (e.g., uppermost surfaces) of the block 133. For example, the inner horizontal boundaries of the first bridge regions 124A of an individual block 133 may exhibit negative slope, and the inner horizontal boundaries of the second bridge regions 124B of the block 133 may exhibit positive slope. Horizontal widths in the Y-direction of each bridge region 124 (e.g., a first bridge region 124A and a second bridge region 124B) of a pair of bridge regions 124 horizontally neighboring an individual stadium structure 114 (e.g., a first stadium structure 114A) of the block 133 in the Y-direction may increase in the downward Z-direction (e.g., negative Z-direction) from an uppermost vertical boundary of the stadium structure 114 to a lowermost vertical boundary of the stadium structure 114. Accordingly, relatively vertically lower steps 118 of the stadium structure 114 may have relatively smaller (e.g., narrower) horizontal widths in the Y-direction than relatively vertically higher steps 118 of the stadium structure 114.
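Purely as a hypothetical numeric sketch of the geometric trend just described (vertically lower steps being narrower in the Y-direction), the following assumes a constant sidewall angle and arbitrary illustrative dimensions, not device dimensions:

import math

BLOCK_WIDTH = 10.0         # full Y-width of the block (arbitrary units)
BRIDGE_TOP_WIDTH = 1.0     # Y-width of each bridge region at the stack top
SIDEWALL_ANGLE_DEG = 80.0  # angle from horizontal; < 90 means widening bridges

def step_width(depth):
    """Y-width remaining for a step at a given depth below the block top."""
    widening = depth / math.tan(math.radians(SIDEWALL_ANGLE_DEG))
    return BLOCK_WIDTH - 2.0 * (BRIDGE_TOP_WIDTH + widening)

widths = [step_width(d) for d in (0.0, 1.0, 2.0, 3.0)]
assert all(a > b for a, b in zip(widths, widths[1:]))  # deeper steps are narrower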
Still referring to FIG. 4B, the first dielectric material 126 may substantially cover and continuously extend across the inner horizontal boundaries (e.g., inner sidewalls, inner side surfaces) of each of the bridge regions 124 (e.g., each of the first bridge regions 124A, each of the second bridge regions 124B) of each block 133 of the stack structure 132. The first dielectric material 126 may also substantially cover and continuously extend across the boundaries (e.g., horizontal boundaries, vertical boundaries) of each stadium structure 114 within each block 133 of the stack structure 132. Furthermore, the second dielectric material 128 may at least partially (e.g., substantially) cover and continuously extend across the first dielectric material 126; and the third dielectric material 130 may substantially cover and continuously extend across the second dielectric material 128. In addition, within a horizontal area of each stadium structure 114, groups of the contact structures 140 may vertically extend through each of the third dielectric material 130, the second dielectric material 128, and the first dielectric material 126; and may land on (e.g., physically contact) at least some of the steps 118 of the stadium structure 114.

As shown in FIG. 4B, in some embodiments, for each of the blocks 133 of the stack structure 132, all of the second contact structures 140B are horizontally centered in at least the Y-direction on the steps 118 of the first stadium structure 114A in physical contact therewith. For example, a horizontal center in the Y-direction of each second contact structure 140B may be substantially aligned with a horizontal center in the Y-direction of the step 118 of the first stadium structure 114A that the second contact structure 140B physically contacts. In addition, a horizontal center in the X-direction of each second contact structure 140B may be substantially aligned with a horizontal center in the X-direction of the step 118 of the first stadium structure 114A that the second contact structure 140B physically contacts. In additional embodiments, for each of the blocks 133 of the stack structure 132, one or more of the second contact structures 140B are horizontally offset in the Y-direction from a horizontal center in the Y-direction of the step 118 of the first stadium structure 114A in physical contact therewith, and/or are horizontally offset in the X-direction from a horizontal center in the X-direction of the step 118 of the first stadium structure 114A in physical contact therewith. While not shown in FIG. 4B, horizontal positions of the first contact structures 140A (FIG. 4A) within a horizontal area of each of the blocks 133 of the stack structure 132 are described in further detail below with reference to FIGS. 4C and 4D.

FIG. 4C is a simplified, partial top-down view of the microelectronic device structure 100 at the processing stage described above with reference to FIGS. 4A and 4B. As shown in FIG. 4C, in addition to the features (e.g., structures, materials) previously described with reference to FIGS. 4A and 4B, the microelectronic device structure 100 is formed to further include additional filled slot structures 144 and support structures 148 (e.g., support
contacts, support pillars). The additional filled slot structures 144 may be formed to vertically extend (e.g., in the Z-direction) partially through each block 133 of the stack structure 132, and may partially define and horizontally separate (e.g., in the Y-direction) upper select gate structures of each block 133 of the stack structure 132. In addition, the support structures 148 may vertically extend through (e.g., substantially through) individual blocks 133 of the stack structure 132.

Each block 133 of the stack structure 132 may individually have a desired distribution of the support structures 148 facilitating support of the insulative material 104 of each of the tiers 108 (FIGS. 1A and 1B) of the preliminary stack structure 102 (FIGS. 1A and 1B) during replacement of the sacrificial material 106 (FIGS. 1A and 1B) of each of the tiers 108 (FIGS. 1A and 1B) with the conductive material 134 (FIGS. 4A and 4B) to form the stack structure 132 (FIGS. 4A and 4B), as previously described herein with reference to FIG. 2. As shown in FIG. 4C, in some embodiments, each block 133 of the stack structure 132 includes at least one array of the support structures 148 vertically extending therethrough, including rows of the support structures 148 extending in the X-direction, and columns of the support structures 148 extending in the Y-direction. As a non-limiting example, the array of the support structures 148 may include at least two (2) rows (e.g., at least four (4) rows) of the support structures 148 each extending in the X-direction. In some embodiments, each block 133 of the stack structure 132 individually includes at least one array of the support structures 148 exhibiting at least four (4) rows of the support structures 148. For each block 133, portions of the at least one array of the support structures 148 may be located within horizontal areas of the stadium structures 114 within the block 133. As depicted in FIG. 4C, within horizontal areas of the stadium structures 114, the contact structures 140 may individually be positioned horizontally between support structures 148 horizontally neighboring one another in the X-direction. In addition, in some embodiments, some of the contact structures 140 (e.g., the second contact structures 140B) are also individually positioned horizontally between support structures 148 horizontally neighboring one another in the Y-direction.

The support structures 148 may each individually be formed to exhibit a desired horizontal cross-sectional shape. In some embodiments, each of the support structures 148 is formed to exhibit a substantially circular horizontal cross-sectional shape. In additional embodiments, one or more (e.g., each) of the support structures 148 exhibits a non-circular cross-sectional shape, such as one or more of a square cross-sectional shape, a rectangular
cross-sectional shape, an oblong cross-sectional shape, an elliptical cross-sectional shape, a tear drop cross-sectional shape, a semicircular cross-sectional shape, a tombstone cross-sectional shape, a crescent cross-sectional shape, a triangular cross-sectional shape, a kite cross-sectional shape, and an irregular cross-sectional shape. In addition, each of the support structures 148 may be formed to exhibit substantially the same horizontal cross-sectional dimensions (e.g., substantially the same horizontal diameter), or at least one of the support structures 148 may be formed to exhibit one or more different horizontal cross-sectional dimensions (e.g., a different horizontal diameter) than at least one other of the support structures 148. In some embodiments, all of the support structures 148 are formed to exhibit substantially the same horizontal cross-sectional dimensions.

The support structures 148 may each individually be formed of and include at least one conductive material, such as one or more of at least one metal (e.g., W, Ti, Mo, Nb, V, Hf, Ta, Cr, Zr, Fe, Ru, Os, Co, Rh, Ir, Ni, Pd, Pt, Cu, Ag, Au, Al), at least one alloy (e.g., a Co-based alloy, an Fe-based alloy, an Ni-based alloy, an Fe- and Ni-based alloy, a Co- and Ni-based alloy, an Fe- and Co-based alloy, a Co- and Ni- and Fe-based alloy, an Al-based alloy, a Cu-based alloy, a Mg-based alloy, a Ti-based alloy, a steel, a low-carbon steel, a stainless steel), at least one conductive metal-containing material (e.g., a conductive metal nitride, a conductive metal silicide, a conductive metal carbide, a conductive metal oxide), and at least one conductively-doped semiconductor material (e.g., conductively-doped Si, conductively-doped Ge, conductively-doped SiGe). In addition, at least one dielectric liner material may be formed to substantially surround (e.g., substantially horizontally and vertically cover) sidewalls of each of the support structures 148. The dielectric liner material may be horizontally interposed between each of the support structures 148 and the tiers 136 (FIGS. 4A and 4B) (including the conductive material 134 and the insulative material 104 thereof) of the stack structure 132. The dielectric liner material may be formed of and include one or more of at least one dielectric oxide material (e.g., one or more of SiOx, phosphosilicate glass, borosilicate glass, borophosphosilicate glass, fluorosilicate glass, AlOx, HfOx, NbOx, TiOx, ZrOx, TaOx, and MgOx), at least one dielectric nitride material (e.g., SiNy), at least one dielectric oxynitride material (e.g., SiOxNy), at least one dielectric carboxynitride material (e.g., SiOxCzNy), and amorphous carbon. In some embodiments, the dielectric liner material comprises SiO2.

Still referring to FIG. 4C, within each block 133 of the stack structure 132, the additional filled slot structures 144 may be formed to horizontally extend in parallel in the X-
direction into a horizontal area of the first stadium structure 114A within the block 133. The additional filled slot structures 144 may, for example, individually horizontally extend in the X-direction through a crest region 122 of the block 133 horizontally neighboring the first stadium structure 114A and partially into a horizontal area of one of the opposing staircase structures 116 (e.g., the reverse staircase structure 116B) of the first stadium structure 114A. In some embodiments, each of the additional filled slot structures 144 horizontally terminates (e.g., horizontally ends) in the X-direction at or proximate a relatively lowest step 118 of the one of the opposing staircase structures 116 (e.g., the reverse staircase structure 116B) within vertical boundaries (e.g., in the Z-direction) of the relatively vertically higher tiers 136A of the stack structure 132. In addition, each of the additional filled slot structures 144 may vertically extend in the Z-direction to and terminate at or within vertical boundaries of a relatively lowest tier 136 of the relatively vertically higher tiers 136A of the stack structure 132. Within the block 133, horizontal ends of the relatively lowest tier 136 of the relatively vertically higher tiers 136A of the stack structure 132 may define the relatively lowest step 118 of the one of the opposing staircase structures 116 (e.g., the reverse staircase structure 116B).

Each additional filled slot structure 144 may comprise a slot (e.g., opening, trench, slit) in a block 133 of the stack structure 132 filled with at least one dielectric material. A material composition of the dielectric material of the additional filled slot structures 144 may be substantially the same as a material composition of the dielectric material of the filled slot structures 142, or the material composition of the dielectric material of the additional filled slot structures 144 may be different than the material composition of the dielectric material of the filled slot structures 142. In some embodiments, the additional filled slot structures 144 are formed of and include at least one dielectric oxide material (e.g., SiOx, such as SiO2).

Each block 133 of the stack structure 132 may include greater than or equal to one (1) of the additional filled slot structures 144 within a horizontal area thereof, such as greater than or equal to two (2) of the additional filled slot structures 144, or greater than or equal to three (3) of the additional filled slot structures 144. In some embodiments, each block 133 of the stack structure 132 includes three (3) of the additional filled slot structures 144 within a horizontal area thereof. The additional filled slot structures 144 may sub-divide each block 133 into at least two (2) sub-blocks 146. For example, as shown in FIG. 4C, if an individual block 133 includes three (3) of the additional filled slot structures 144 within a horizontal area thereof, the additional filled slot structures 144 may sub-divide the block 133 into four (4) sub-blocks 146, such as a first sub-block 146A, a second sub-block 146B, a third
sub-block 146C, and a fourth sub-block 146D. For an individual block 133, portions of the conductive material 134 (FIGS. 4A and 4B) of each of the relatively vertically higher tiers 136A of the stack structure 132 within horizontal areas of the sub-blocks 146 of the block 133 may form upper select gate structures (e.g., SGD structures) of the block 133. For example, first portions of each of the relatively vertically higher tiers 136A of the stack structure 132 within horizontal boundaries of the first sub-block 146A may form first upper select gate structures of the block 133; second portions of each of the relatively vertically higher tiers 136A of the stack structure 132 within horizontal boundaries of the second sub-block 146B may form second upper select gate structures of the block 133; third portions of each of the relatively vertically higher tiers 136A of the stack structure 132 within horizontal boundaries of the third sub-block 146C may form third upper select gate structures of the block 133; and fourth portions of each of the relatively vertically higher tiers 136A of the stack structure 132 within horizontal boundaries of the fourth sub-block 146D may form fourth upper select gate structures of the block 133.
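As a simple illustrative check of the sub-division just described: n additional filled slot structures 144 within a block 133 partition that block into n + 1 sub-blocks 146.

def sub_block_count(additional_slot_count: int) -> int:
    # n internal dividing slots yield n + 1 sub-blocks
    return additional_slot_count + 1

assert sub_block_count(3) == 4  # three slots -> four sub-blocks, as in FIG. 4C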
Still referring to FIG. 4C, for an individual block 133 of the stack structure 132, within horizontal areas of one of the crest regions 122 of the block 133 and one of the opposing staircase structures 116 (e.g., the reverse staircase structure 116B) of the first stadium structure 114A horizontally neighboring the crest region 122, the additional filled slot structures 144 may separate and isolate the upper select gate structures of each of the sub-blocks 146 of the block 133 from the upper select gate structures of each other of the sub-blocks 146 of the block 133. However, due to the bridge regions 124 (e.g., the first bridge regions 124A and the second bridge regions 124B) of the block 133 that horizontally extend from and between horizontally neighboring crest regions 122 of the block 133, some of the upper select gate structures of the block 133 may be shorted together within another of the crest regions 122 of the block 133 horizontally neighboring the other of the opposing staircase structures 116 (e.g., the forward staircase structure 116A) of the first stadium structure 114A. For example, for an individual block 133, the first upper select gate structures of the first sub-block 146A thereof may be shorted to the fourth upper select gate structures of the fourth sub-block 146D thereof by way of the bridge regions 124 extending from and between two (2) of the crest regions 122 of the block 133 horizontally neighboring the first stadium structure 114A. The first bridge region 124A horizontally neighboring a first side of the first stadium structure 114A may extend conductive paths of the first upper select gate structures of the first sub-block 146A around the first stadium structure 114A; and the second bridge region 124B horizontally neighboring a second, opposing side of the first stadium structure 114A may extend conductive paths of the fourth upper select gate structures of the fourth sub-block 146D around the first stadium structure 114A. In turn, the conductive paths of the first upper select gate structures and the fourth upper select gate structures may converge within the crest region 122 that horizontally neighbors the first stadium structure 114A and that does not include the additional filled slot structures 144 therein, to short the first upper select gate structures to the fourth upper select gate structures. Such shorting of upper select gate structures of the block 133 is resolved (e.g., broken, destroyed) through subsequent processing of the microelectronic device structure 100, as described in further detail below.

As shown in FIG. 4C, each sub-block 146 of an individual block 133 of the stack structure 132 may individually include a row of the first contact structures 140A. For example, if an individual block 133 is formed to include four (4) sub-blocks 146 (e.g., the first sub-block 146A, the second sub-block 146B, the third sub-block 146C, and the fourth sub-block 146D), each of the four (4) sub-blocks 146 may include one (1) row of the first contact structures 140A within a horizontal area thereof, such that the block 133 includes four (4) rows of the first contact structures 140A. Each row of the first contact structures 140A may horizontally extend in the X-direction, and may individually include a portion of the first contact structures 140A provided within a horizontal area of the block 133. In additional embodiments wherein an individual block 133 is sub-divided into a different number of sub-blocks 146, the block 133 may include a different number of rows of the first contact structures 140A equal to the different number of sub-blocks 146. In addition, as depicted in FIG. 4C, within an individual block 133 of the stack structure 132, columns of the first contact structures 140A may horizontally extend in the Y-direction. Each column of the first contact structures 140A may include first contact structures 140A provided within different sub-blocks 146 of the block 133 than one another.

For each sub-block 146 of an individual block 133 of the stack structure 132, the first contact structures 140A within a horizontal area of the sub-block 146 may be provided at desired horizontal positions (e.g., in the X-direction and the Y-direction) on the steps 118 of the first stadium structure 114A. Referring to FIG. 4D, which shows a magnified view of a portion C (identified with a dashed box in FIG. 4C) of the simplified, partial top-down view of the microelectronic device structure 100 depicted in FIG. 4C, in some embodiments, within an individual block 133 of the stack structure 132 (FIG. 4C), at least one of the rows of the first contact structures 140A is located horizontally closer to at least
one of the additional filled slot structures 144 than at least one other of the rows of the first contact structures 140A. For example, a first row of the first contact structures 140A within a horizontal area of the first sub-block 146A may be positioned horizontally closer to one of the additional filled slot structures 144 horizontally interposed between the first sub-block 146A and the second sub-block 146B than a second row of the first contact structures 140A within a horizontal area of the second sub-block 146B. The first row of the first contact structures 140A may be positioned horizontally closer to the one of the additional filled slot structures 144 than to one of the filled slot structures 142 most proximate thereto. Put another way, a distance between the first row of the first contact structures 140A and the filled slot structure 142 most proximate thereto may be greater than a distance between the first row of the first contact structures 140A and the additional filled slot structure 144 most proximate thereto. As another example, a fourth row of the first contact structures 140A within a horizontal area of the fourth sub-block 146D may be positioned horizontally closer to one of the additional filled slot structures 144 horizontally interposed between the fourth sub-block 146D and the third sub-block 146C than a third row of the first contact structures 140A within a horizontal area of the third sub-block 146C. The fourth row of the first contact structures 140A may be positioned horizontally closer to the one of the additional filled slot structures 144 than to one of the filled slot structures 142 most proximate thereto. Put another way, a distance between the fourth row of the first contact structures 140A and the filled slot structure 142 most proximate thereto may be greater than a distance between the fourth row of the first contact structures 140A and the additional filled slot structure 144 most proximate thereto. Given the first bridge region 124A horizontally adjacent the first sub-block 146A and the second bridge region 124B horizontally adjacent the fourth sub-block 146D, forming the first row of the first contact structures 140A of the first sub-block 146A relatively closer to the additional filled slot structure 144 most proximate thereto may ensure each first contact structure 140A of the first row lands on one of the steps 118 of the first stadium structure 114A (FIG. 4C) (as opposed to on the first bridge region 124A); and forming the fourth row of the first contact structures 140A of the fourth sub-block 146D relatively closer to the additional filled slot structure 144 most proximate thereto may ensure each first contact structure 140A of the fourth row lands on one of the steps 118 of the first stadium structure 114A (FIG. 4C) (as opposed to on the second bridge region 124B).
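As a toy geometric check of the placement rule just described, the following Python sketch uses hypothetical Y-coordinates (arbitrary units) for the filled slot structures 142, the additional filled slot structures 144, and the four rows of first contact structures 140A:

slot_142_y = (0.0, 8.0)       # block-edge filled slot structures
slot_144_y = (2.0, 4.0, 6.0)  # three internal slots -> four sub-blocks
row_y = {"row 1": 1.4, "row 2": 3.0, "row 3": 5.0, "row 4": 6.6}

def nearest(y, slots):
    """Distance from y to the closest slot coordinate."""
    return min(abs(y - s) for s in slots)

for name in ("row 1", "row 4"):  # the rows bordering the bridge regions
    y = row_y[name]
    # Outer rows sit closer to an additional slot 144 than to an edge slot 142,
    # helping them land on steps rather than on the bridge regions.
    assert nearest(y, slot_144_y) < nearest(y, slot_142_y)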
With returned reference to FIG. 4C, each block 133 of the stack structure 132 may individually include a desired distribution of the second contact structures 140B within horizontal areas of the stadium structure 114. As shown in FIG. 4C, for an individual block 133 of the stack structure 132, the first stadium structure 114A may include at least one (1) row of the second contact structures 140B. Each row of the second contact structures 140B may horizontally extend in the X-direction, and may individually include a portion of the second contact structures 140B provided within a horizontal area of the block 133. For an individual block 133, the second contact structures 140B of each row may land on steps 118 of the first stadium structure 114A within vertical boundaries of the relatively vertically lower tiers 136B of the stack structure 132. In some embodiments, at least one of the opposing staircase structures 116 (e.g., the reverse staircase structure 116B and/or the forward staircase structure 116A) of the first stadium structure 114A includes a single (e.g., only one (1)) row of the second contact structures 140B within a horizontal area thereof. A horizontal centerline of the single row of the second contact structures 140B may be substantially aligned with a horizontal centerline of the block 133 (as depicted in FIG. 4C), or the horizontal centerline of the single row of the second contact structures 140B may be horizontally offset (e.g., in the Y-direction) from the horizontal centerline of the block 133. In additional embodiments, at least one of the opposing staircase structures 116 (e.g., the reverse staircase structure 116B and/or the forward staircase structure 116A) of the first stadium structure 114A includes more than one (1) row of the second contact structures 140B within a horizontal area thereof, such as at least two (2) rows of the second contact structures 140B, at least three (3) rows of the second contact structures 140B, or at least four (4) rows of the second contact structures 140B. In further embodiments, the second contact structures 140B are provided on steps 118 of the first stadium structure 114A in a different arrangement than in one or more of rows horizontally extending in the X-direction. For example, the second contact structures 140B may be arranged in a diagonal pattern extending substantially linearly in the X-direction and the Y-direction on steps 118 of the first stadium structure 114A, or may be arranged in an at least partially non-linear pattern (e.g., a curved pattern, a zigzag pattern, a random pattern, an irregular pattern) on steps 118 of the first stadium structure 114A.

Referring next to FIG. 5A, which is a simplified, longitudinal cross-sectional view of the portion A of the microelectronic device structure 100 following the processing stage previously described with reference to FIGS. 4A through 4D, at least one further filled slot structure 150 may be formed in the microelectronic device structure 100. For each block 133 of the stack structure 132, the further filled slot structure 150 may be positioned within horizontal boundaries in the X-direction of the first stadium structure 114A, may horizontally extend in the Y-direction through each of the bridge regions 124 (FIGS. 4B through 4D) horizontally neighboring the first stadium structure 114A in the Y-direction, and may at least partially vertically extend in the Z-direction through each of the bridge regions 124 (FIGS. 4B through 4D) horizontally neighboring the first stadium structure 114A. As described in further detail below, for each block 133 of the stack structure 132, the further filled slot structure 150 disrupts (e.g., breaks, destroys) at least some conductive paths extending from and between two (2) of the crest regions 122 horizontally neighboring (e.g., in the X-direction) the first stadium structure 114A by way of the bridge regions 124 (FIGS. 4B through 4D) horizontally neighboring (e.g., in the Y-direction) the first stadium structure 114A. FIG. 5B is a simplified, partial top-down view of the microelectronic device structure 100 at the processing stage depicted in FIG. 5A. FIG. 5C is a simplified, partial perspective view of portions D (identified with dashed boxes in FIG. 5B) of the simplified, partial top-down view of the microelectronic device structure 100 depicted in FIG. 5B. For clarity and ease of understanding the drawings and related description, some features (e.g., structures, materials) of the microelectronic device structure 100 located within the boundaries of the portions D are not depicted in FIG. 5C to more clearly illustrate and emphasize other features of the microelectronic device structure 100 located within the boundaries of the portions D.

Each further filled slot structure 150 may comprise a slot (e.g., opening, trench, slit) in the microelectronic device structure 100 filled with at least one dielectric material. A material composition of the dielectric material of the further filled slot structure 150 may be substantially the same as a material composition of the dielectric material of one or more (e.g., each) of the filled slot structures 142 (FIG. 5B) and the additional filled slot structures 144 (FIG. 5B), or the material composition of the dielectric material of the further filled slot structure 150 may be different than the material composition of the dielectric material of one or more (e.g., each) of the filled slot structures 142 (FIG. 5B) and the additional filled slot structures 144 (FIG. 5B). In some embodiments, the further filled slot structure 150 is formed of and includes at least one dielectric oxide material (e.g., SiOx, such as SiO2). In additional embodiments, the further filled slot structure 150 is formed of and includes at least one dielectric nitride material (e.g., SiNy, such as Si3N4).

With collective reference to FIGS. 5A through 5C, for each block 133 of the stack structure 132, each further filled slot structure 150 may vertically extend through and segment (e.g., partition) portions of the conductive material 134 (FIGS. 5A and 5C) of each of the relatively vertically higher tiers 136A (FIGS. 5A and 5C) of the stack structure 132 within horizontal areas of the bridge regions 124 (FIGS. 5B and 5C) horizontally neighboring the first stadium structure 114A (FIGS. 5A and 5B). The further filled slot structure 150 may segment first portions of the conductive material 134 (FIGS. 5A and 5C) of each of the relatively vertically higher tiers 136A within the first bridge region 124A horizontally neighboring a first side of the first stadium structure 114A (FIGS. 5A and 5B); and may also segment second portions of the conductive material 134 (FIGS. 5A and 5C) of each of the relatively vertically higher tiers 136A within the second bridge region 124B horizontally neighboring a second, opposing side of the first stadium structure 114A (FIGS. 5A and 5B). Thus, the further filled slot structure 150 may prevent shorting of first upper select gate structures of the first sub-block 146A (FIG. 5B) of the block 133 with fourth upper select gate structures of the fourth sub-block 146D (FIG. 5B) of the block 133. Namely, the further filled slot structure 150 may destroy conductive paths extending across the bridge regions 124 (FIGS. 5B and 5C) horizontally neighboring the first stadium structure 114A (FIGS. 5A and 5B) that may otherwise short the first upper select gate structures of the first sub-block 146A (FIG. 5B) to the fourth upper select gate structures of the fourth sub-block 146D (FIG. 5B) by way of third portions of the conductive material 134 (FIGS. 5A and 5C) of each of the relatively vertically higher tiers 136A within one of the crest regions 122 (e.g., a crest region 122 free of the additional filled slot structures 144) horizontally neighboring the first stadium structure 114A (FIGS. 5A and 5B).

In some embodiments, a lower vertical boundary of each further filled slot structure 150 is substantially coplanar with lower vertical boundaries of the additional filled slot structures 144 (FIG. 5B). For example, the further filled slot structure 150 may vertically extend in the Z-direction to and terminate at or within vertical boundaries of a relatively lowest tier 136 (FIGS. 5B and 5C) of the relatively vertically higher tiers 136A (FIGS. 5A and 5C) of the stack structure 132. In additional embodiments, a lower vertical boundary of the further filled slot structure 150 is vertically offset from (e.g., vertically underlies, vertically overlies) lower vertical boundaries of the additional filled slot structures 144 (FIG. 5B). The further filled slot structure 150 may vertically terminate at or below lower vertical boundaries of vertically lowest upper select gate structures (e.g., SGD structures) of an individual block 133 of the stack structure 132. The further filled slot structure 150 may vertically terminate at or above a relatively highest tier 136 (FIGS. 5B and 5C) of the relatively vertically lower tiers 136B (FIGS. 5B and 5C) of the stack structure 132. The further filled slot structure 150 may vertically terminate at or above upper vertical boundaries of a vertically highest access line structure (e.g., word line structure) of an individual block 133 of the stack structure 132. As shown in FIG. 5A, for each block 133 of the stack structure 132, the further filled slot structure 150 may also at least partially vertically extend through the filled trench 120 (FIG. 5A) (including the third dielectric material 130, the second dielectric material 128, and the first dielectric material 126 thereof) neighboring the first stadium structure 114A (FIGS. 5A and 5B). In addition, in some embodiments, the further filled slot structure 150 also partially vertically extends through one or more (e.g., each) of the filled slot structures 142 (FIG. 5B) horizontally neighboring one or more of the blocks 133 of the stack structure 132.

By vertically terminating the further filled slot structure 150 at or above a relatively highest tier 136 (FIGS. 5B and 5C) of the relatively vertically lower tiers 136B (FIGS. 5B and 5C) of the stack structure 132, some conductive paths extending from and between the crest regions 122 (FIG. 1A) neighboring the first stadium structure 114A of an individual block 133 of the stack structure 132 may be maintained (e.g., may not be disrupted) following the formation of the further filled slot structure 150. For example, for an individual block 133 of the stack structure 132, portions of the conductive material 134 of the relatively vertically lower tiers 136B (FIGS. 5B and 5C) of the stack structure 132 within horizontal boundaries of the bridge regions 124 (e.g., the first bridge region 124A, the second bridge region 124B) may not be segmented by the further filled slot structure 150. Thus, conductive paths of access line structures (e.g., word line structures) formed by the conductive material 134 of the relatively vertically lower tiers 136B (FIGS. 5B and 5C) may continue to extend around the first stadium structure 114A (and the filled trench 120 associated therewith) by way of the bridge regions 124 neighboring the first stadium structure 114A. As a result, for an individual block 133 of the stack structure 132, an individual (e.g., single) switching device (e.g., transistor) of a string driver device coupled to the conductive material 134 of an individual relatively vertically lower tier 136B may be employed to drive voltages completely across and/or in opposing directions across the relatively vertically lower tier 136B. Accordingly, the structures and methods of the disclosure may decrease the number of switching devices and/or routing structures required to effectively operate a microelectronic device (e.g., a memory device), and may increase one or more of microelectronic device performance, scalability, efficiency, and simplicity as compared to conventional structures and methods.
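The vertical-termination behavior described above can be modeled in a few lines of code. The following is an illustrative sketch only; the tier count, the number of select gate tiers, and the slot termination depth are hypothetical placeholders rather than values from the described embodiments. Tiers cut by the further filled slot structure lose their bridge-region path, while every access line tier below the slot keeps a conductive path around the stadium structure, which is why a single string driver switching device suffices per lower tier.

    # Illustrative sketch (hypothetical values): the further filled slot
    # structure segments the bridge paths of the upper (select gate) tiers it
    # intersects, while the lower (access line) tiers stay connected.
    NUM_TIERS = 8    # total tiers; tier 0 is vertically highest
    NUM_UPPER = 3    # relatively vertically higher tiers (select gates)
    SLOT_BOTTOM = 3  # slot terminates below the upper tiers and above the
                     # vertically highest access line tier

    def bridge_connected(tier):
        """A tier keeps its path through the bridge regions only if the
        further filled slot structure does not vertically reach it."""
        return tier >= SLOT_BOTTOM

    for tier in range(NUM_TIERS):
        role = "select gate" if tier < NUM_UPPER else "access line"
        state = "connected" if bridge_connected(tier) else "segmented"
        print(f"tier {tier} ({role}): bridge path {state}")

    # Every access line tier keeps its path around the stadium structure.
    assert all(bridge_connected(t) for t in range(NUM_UPPER, NUM_TIERS))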
In some embodiments, for an individual block 133 of the stack structure 132, at least some of the second contact structures 140B on the steps 118 of the first stadium structure 114A are employed to ensure continuity of conductive paths formed by the conductive material 134 of the relatively vertically lower tiers 136B (FIGS. 5B and 5C) between the crest regions 122 (FIG. 1A) neighboring the first stadium structure 114A. For example, one or more conductive routing structures may be employed to couple one of the second contact structures 140B on a step 118 of the reverse staircase structure 116B of the first stadium structure 114A to another one of the second contact structures 140B on a counterpart step 118 (e.g., a step at substantially the same vertical position in the stack structure 132) of the forward staircase structure 116A of the first stadium structure 114A. The conductive routing structure(s) may, for example, be formed over, in contact with, and between each of the one of the second contact structures 140B and the another one of the second contact structures 140B. Accordingly, for an individual block 133 of the stack structure 132, a conductive path of an access line structure (e.g., word line structure) formed by the conductive material 134 of an individual relatively vertically lower tier 136B (FIGS. 5B and 5C) may extend around the first stadium structure 114A (and the filled trench 120 associated therewith) by way of at least two (2) of the second contact structures 140B coupled to the conductive material 134 at two (2) opposing steps 118 of the first stadium structure 114A and the conductive routing structure(s) coupled to the at least two (2) of the second contact structures 140B.

Referring to FIG. 5B, each further filled slot structure 150 may be formed to exhibit a desired horizontal geometric configuration (e.g., horizontal cross-sectional shape and horizontal dimensions) facilitating segmentation of the bridge regions 124 horizontally neighboring the first stadium structures 114A of individual blocks 133 of the stack structure 132. In some embodiments, each further filled slot structure 150 is formed to exhibit an oblong horizontal cross-sectional shape, such as a rectangular cross-sectional shape. For an individual block 133 of the stack structure 132, the further filled slot structure 150 may horizontally extend in the Y-direction completely across the bridge regions 124 horizontally neighboring the first stadium structures 114A. In some embodiments, the further filled slot structure 150 substantially continuously horizontally extends in the Y-direction across at least a maximum width in the Y-direction of an individual block 133 of the stack structure 132. The further filled slot structure 150 may substantially continuously horizontally extend in the Y-direction within and across a single (e.g., only one) block 133 of the stack structure 132, or the further filled slot structure 150 may substantially continuously horizontally extend in the Y-direction within and across multiple (e.g., more than one) blocks 133 of the stack structure 132. In some embodiments, the further filled slot structure 150 substantially continuously horizontally extends in the Y-direction within and across multiple blocks 133 of the stack structure 132, as well as within and across the filled slot structure(s) 142 horizontally interposed between the multiple blocks 133.

Referring collectively to FIGS. 5A through 5C, each further filled slot structure 150 may be formed at a desired horizontal position in the X-direction facilitating segmentation of the bridge regions 124 horizontally neighboring the first stadium structure 114A (FIGS. 5A and 5B) of individual blocks 133 of the stack structure 132. For example, as shown in FIGS. 5A and 5B, in some embodiments, the further filled slot structure 150 is formed to be positioned within horizontal boundaries in the X-direction of the central region 117 of the first stadium structure 114A within one or more of the blocks 133 of the stack structure 132. The further filled slot structure 150 may be formed to be substantially aligned with a center of the central region 117 of the first stadium structure 114A in the X-direction, or may be formed to be horizontally offset from the center of the central region 117 of the first stadium structure 114A in the X-direction. In additional embodiments, the further filled slot structure 150 may be formed at a different horizontal position within horizontal boundaries in the X-direction of the first stadium structure 114A (FIGS. 5A and 5B) of one or more of the blocks 133 of the stack structure 132, as described in further detail below.

The further filled slot structure 150 may be formed by subjecting the microelectronic device structure 100, following the processing stage previously described with reference to FIGS. 4A through 4D, to at least one material removal process (e.g., an anisotropic etching process, such as one or more of RIE, deep RIE, plasma etching, reactive ion beam etching, and chemically assisted ion beam etching) to form at least one slot having a horizontal position and geometric configuration corresponding to that of the further filled slot structure 150 to be formed. Thereafter, dielectric material may be formed (e.g., non-conformably deposited) inside and outside of the slot, and then portions of the dielectric material outside of boundaries (e.g., horizontal boundaries, vertical boundaries) of the slot may be removed (e.g., through an abrasive planarization process, such as a CMP process) to form the further filled slot structure 150.
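The etch, fill, and planarize sequence just described lends itself to a compact summary. The following is an illustrative sketch only; the step names, the dielectric label, and the plan structure are hypothetical conveniences for exposition, not process recipes or tool settings from the described embodiments.

    # Illustrative sketch (hypothetical step names and parameters): the
    # three-operation sequence used to form a further filled slot structure.
    from dataclasses import dataclass

    @dataclass
    class ProcessStep:
        name: str
        detail: str

    def further_slot_flow(dielectric="SiO2"):
        return [
            ProcessStep("etch", "anisotropic etch (e.g., RIE) defines the slot "
                                "at the target horizontal position and geometry"),
            ProcessStep("deposit", f"non-conformal deposition of {dielectric} "
                                   "inside and outside of the slot"),
            ProcessStep("planarize", "CMP removes the dielectric outside the "
                                     "slot boundaries, leaving the filled slot"),
        ]

    for number, step in enumerate(further_slot_flow(), start=1):
        print(f"{number}. {step.name}: {step.detail}")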
Thus, in accordance with embodiments of the disclosure, a microelectronic device comprises a stack structure comprising a vertically alternating sequence of conductive material and insulative material arranged in tiers. The stack structure has blocks separated from one another by first dielectric slot structures. Each of the blocks comprises two crest regions, a stadium structure interposed between the two crest regions in a first horizontal direction and comprising opposing staircase structures each having steps comprising edges of the tiers of the stack structure, and two bridge regions neighboring opposing sides of the stadium structure in a second horizontal direction orthogonal to the first horizontal direction and having upper surfaces substantially coplanar with upper surfaces of the two crest regions. The microelectronic device further comprises at least one second dielectric slot structure within horizontal boundaries of the stadium structure in the first horizontal direction and partially vertically extending through and segmenting each of the two bridge regions.

Furthermore, in accordance with embodiments of the disclosure, a method of forming a microelectronic device comprises forming a preliminary stack structure comprising a vertically alternating sequence of sacrificial material and insulative material arranged in tiers. The preliminary stack structure has blocks separated from one another by slots. Each of the blocks comprises two crest regions, two bridge regions horizontally extending in parallel from and between the two crest regions and having upper boundaries substantially coplanar with upper boundaries of the two crest regions, and a stadium structure interposed between the two crest regions in a first horizontal direction and interposed between the two bridge regions in a second horizontal direction orthogonal to the first horizontal direction. The stadium structure comprises opposing staircase structures each having steps comprising edges of the tiers of the preliminary stack structure. The sacrificial material of the preliminary stack structure is replaced with conductive material to form a stack structure comprising a vertically alternating sequence of the conductive material and the insulative material arranged in the tiers. The stack structure has the blocks separated from one another by the slots. The slots are filled with dielectric material to form first dielectric slot structures. At least one second dielectric slot structure is formed within horizontal boundaries of the stadium structure in the first horizontal direction and partially vertically extends through and segments each of the two bridge regions.

As previously described, the microelectronic device structure 100 may be formed to exhibit a different configuration than that illustrated in FIGS. 5A through 5C. By way of non-limiting example, FIGS. 6 through 9 show simplified, partial top-down views of additional microelectronic device structures formed through methods of the disclosure to have different configurations than the microelectronic device structure 100 (FIGS. 5A through 5C), in accordance with additional embodiments of the disclosure. Throughout the remaining figures (i.e., FIGS. 6 through 11) and the associated description, functionally similar features (e.g., structures, materials) are referred to with similar reference numerals incremented by 100. To avoid repetition, not all features shown in the remaining figures (i.e., FIGS. 6 through 11) are described in detail herein. Rather, unless described otherwise below, a feature designated by a reference numeral that is a 100 increment of the reference numeral of a previously described feature (whether the previously described feature is first described before the present paragraph, or is first described after the present paragraph) will be understood to be substantially similar to the previously described feature. By way of non-limiting example, unless described otherwise below, features designated by the reference numerals 244, 344, 444, 544, 644 in FIGS. 6 through 10 will be understood to be substantially similar to the additional filled slot structures 144 previously described herein with reference to FIGS. 4C, 4D, and 5B.

FIG. 6 illustrates a simplified, partial top-down view of a microelectronic device structure 200 at a processing stage of a method of forming a microelectronic device (e.g., a memory device, such as a 3D NAND Flash memory device), in accordance with additional embodiments of the disclosure. As shown in FIG. 6, the microelectronic device structure 200 is formed to be similar to the microelectronic device structure 100 at the processing stage previously described with reference to FIGS. 5A through 5C, except that the microelectronic device structure 200 is formed to include multiple (e.g., more than one) further filled slot structures 250 horizontally extending in parallel with one another in the Y-direction. For example, the microelectronic device structure 200 may be formed to include a first filled slot structure 250A horizontally extending in the Y-direction, and a second filled slot structure 250B horizontally neighboring the first filled slot structure 250A in the X-direction and also horizontally extending in the Y-direction. As shown in FIG. 6, in some embodiments, each of the further filled slot structures 250 (e.g., the first filled slot structure 250A, the second filled slot structure 250B) is formed to exhibit substantially the same geometric configuration (e.g., substantially the same shape, substantially the same dimensions) as each other of the further filled slot structures 250. In additional embodiments, at least one of the further filled slot structures 250 is formed to exhibit a different geometric configuration (e.g., a different shape, one or more different dimensions) than at least one other of the further filled slot structures 250. Furthermore, in some embodiments, each of the further filled slot structures 250 is formed to be positioned within horizontal boundaries in the X-direction of the central region 217 of the first stadium structure 214A within one or more of the blocks 233 of the stack structure 232. In additional embodiments, at least one of the further filled slot structures 250 is formed to be positioned outside of the horizontal boundaries in the X-direction of the central region 217 of the first stadium structure 214A within one or more of the blocks 233 of the stack structure 232.

FIG. 7 illustrates a simplified, partial top-down view of a microelectronic device structure 300 at a processing stage of a method of forming a microelectronic device (e.g., a memory device, such as a 3D NAND Flash memory device), in accordance with additional embodiments of the disclosure. As shown in FIG. 7, the microelectronic device structure 300 is formed to be similar to the microelectronic device structure 100 at the processing stage previously described with reference to FIGS. 5A through 5C, except that the microelectronic device structure 300 is formed to include multiple (e.g., more than one) further filled slot structures 350 horizontally extending in series with one another in the Y-direction. For example, the microelectronic device structure 300 may be formed to include a first further filled slot structure 350A, a second further filled slot structure 350B, and a third further filled slot structure 350C each horizontally extending in series with one another in the Y-direction. Each further filled slot structure 350 may vertically extend (e.g., in the Z-direction) and horizontally extend (e.g., in the Y-direction) through at least one of the bridge regions 324 (e.g., a first bridge region 324A, a second bridge region 324B) horizontally neighboring the first stadium structure 314A within at least one of the blocks 333 of the stack structure 332. The further filled slot structures 350 may together effectively form a discontinuous (e.g., segmented) slot structure in the microelectronic device structure 300.

For an individual block 333 of the stack structure 332, at least two (2) of the further filled slot structures 350 may extend through and segment (e.g., partition) portions of the conductive material (e.g., corresponding to the conductive material 134 (FIGS. 5A and 5C)) of each of the relatively vertically higher tiers (e.g., corresponding to the relatively vertically higher tiers 136A (FIGS. 5A and 5C)) of the stack structure 332 within horizontal areas of the bridge regions 324 horizontally neighboring the first stadium structure 314A. In some embodiments, the first further filled slot structure 350A extends through and segments a first bridge region 324A of a first of the blocks 333; the second further filled slot structure 350B extends through and segments a second bridge region 324B of the first of the blocks 333 and a first bridge region 324A of a second of the blocks 333; and the third further filled slot structure 350C extends through and segments a second bridge region 324B of the second of the blocks 333. Each of the further filled slot structures 350 may also extend (e.g., horizontally extend, vertically extend) through one of the filled slot structures 342 horizontally interposed between horizontally neighboring blocks 333 of the stack structure 332, or at least one of the further filled slot structures 350 may be substantially confined within a horizontal area of one of the blocks 333.

As shown in FIG. 7, in some embodiments, the further filled slot structures 350 (e.g., the first further filled slot structure 350A, the second further filled slot structure 350B, and the third further filled slot structure 350C) are substantially aligned with one another in the X-direction. In additional embodiments, at least one of the further filled slot structures 350 is horizontally offset in the X-direction from at least one other of the further filled slot structures 350.

FIG. 8 illustrates a simplified, partial top-down view of a microelectronic device structure 400 at a processing stage of a method of forming a microelectronic device (e.g., a memory device, such as a 3D NAND Flash memory device), in accordance with additional embodiments of the disclosure. As shown in FIG. 8, the microelectronic device structure 400 is formed to be similar to the microelectronic device structure 100 at the processing stage previously described with reference to FIGS. 5A through 5C, except that the microelectronic device structure 400 is formed to include multiple (e.g., more than one) further filled slot structures 450 horizontally extending in series with one another in the Y-direction, and the further filled slot structures 450 are positioned outside of horizontal boundaries in the X-direction of the central region 417 of the first stadium structure 414A of each block 433 of the stack structure 432. For example, the microelectronic device structure 400 may be formed to include a first further filled slot structure 450A, a second further filled slot structure 450B, and a third further filled slot structure 450C each horizontally extending in series with one another in the Y-direction and each positioned within horizontal boundaries in the X-direction of one of the opposing staircase structures 416 (e.g., the forward staircase structure 416A) of the first stadium structure 414A of individual blocks 433 of the stack structure 432. Each further filled slot structure 450 may vertically extend (e.g., in the Z-direction) and horizontally extend (e.g., in the Y-direction) through at least one of the bridge regions 424 (e.g., a first bridge region 424A, a second bridge region 424B) horizontally neighboring the first stadium structure 414A within at least one of the blocks 433 of the stack structure 432. The further filled slot structures 450 may together effectively form a discontinuous (e.g., segmented) slot structure in the microelectronic device structure 400.

For an individual block 433 of the stack structure 432, at least two (2) of the further filled slot structures 450 may extend through and segment (e.g., partition) portions of the conductive material (e.g., corresponding to the conductive material 134 (FIGS. 5A and 5C)) of each of the relatively vertically higher tiers (e.g., corresponding to the relatively vertically higher tiers 136A (FIGS. 5A and 5C)) of the stack structure 432 within horizontal areas of the bridge regions 424 horizontally neighboring the first stadium structure 414A. In some embodiments, the first further filled slot structure 450A extends through and segments a first bridge region 424A of a first of the blocks 433; the second further filled slot structure 450B extends through and segments a second bridge region 424B of the first of the blocks 433 and a first bridge region 424A of a second of the blocks 433; and the third further filled slot structure 450C extends through and segments a second bridge region 424B of the second of the blocks 433. Each of the further filled slot structures 450 may also extend (e.g., horizontally extend, vertically extend) through one of the filled slot structures 442 horizontally interposed between horizontally neighboring blocks 433 of the stack structure 432, or at least one of the further filled slot structures 450 may be substantially confined within a horizontal area of one of the blocks 433.

As shown in FIG. 8, the further filled slot structures 450 may be formed at least partially within horizontal boundaries in the X-direction of one or more relatively vertically lower (e.g., vertically lowest) steps 418 of the first stadium structure 414A within individual blocks 433 of the stack structure 432. For example, the first further filled slot structure 450A, the second further filled slot structure 450B, and the third further filled slot structure 450C may each be positioned within horizontal boundaries in the X-direction of vertically lower steps 418 (e.g., steps 418 most vertically proximate the central regions 417) of the forward staircase structures 416A of horizontally neighboring blocks 433 of the stack structure 432. For individual blocks 433 of the stack structure 432, one or more of the second contact structures 440B may be horizontally interposed in the Y-direction between further filled slot structures 450 horizontally neighboring one another in the Y-direction. For example, at least one of the second contact structures 440B on one of the steps 418 of the first stadium structure 414A within a first of the blocks 433 may be horizontally interposed between the first further filled slot structure 450A and the second further filled slot structure 450B; and at least one of the second contact structures 440B on one of the steps 418 of the first stadium structure 414A within a second of the blocks 433 may be horizontally interposed between the second further filled slot structure 450B and the third further filled slot structure 450C.

In some embodiments, the further filled slot structures 450 (e.g., the first further filled slot structure 450A, the second further filled slot structure 450B, and the third further filled slot structure 450C) are substantially aligned with one another in the X-direction. In additional embodiments, at least one of the further filled slot structures 450 is horizontally offset in the X-direction from at least one other of the further filled slot structures 450. In addition, as shown in FIG. 8, the further filled slot structures 450 may each be substantially aligned with two (2) or more of the second contact structures 440B substantially aligned with one another in the X-direction. In additional embodiments, at least one of the further filled slot structures 450 is horizontally offset in the X-direction from at least one of the second contact structures 440B most proximate thereto in the Y-direction.

FIG. 9 illustrates a simplified, partial top-down view of a microelectronic device structure 500 at a processing stage of a method of forming a microelectronic device (e.g., a memory device, such as a 3D NAND Flash memory device), in accordance with additional embodiments of the disclosure. As shown in FIG. 9, the microelectronic device structure 500 is formed to be similar to the microelectronic device structure 100 at the processing stage previously described with reference to FIGS. 5A through 5C, except that each block 533 of the stack structure 532 includes a different distribution of at least some of the contact structures 540 within the horizontal area thereof. For example, within a horizontal area of at least one of the opposing staircase structures 516 (e.g., the forward staircase structure 516A and/or the reverse staircase structure 516B) of the first stadium structure 514A of each block 533, the microelectronic device structure 500 may include multiple (e.g., more than one) rows of the second contact structures 540B. The multiple rows of the second contact structures 540B may each horizontally extend in the X-direction, and may each be substantially aligned in the Y-direction with at least one row of the first contact structures 540A. By way of non-limiting example, if an individual block 533 of the stack structure 532 includes four (4) rows of the first contact structures 540A, the block 533 may also include four (4) rows of the second contact structures 540B.

Microelectronic device structures (e.g., the microelectronic device structures 100, 200, 300, 400, 500 previously described with reference to FIGS. 5A through 5C, 6, 7, 8, and 9) of the disclosure may be included in microelectronic devices of the disclosure. For example, FIG. 10 illustrates a partial cutaway perspective view of a portion of a microelectronic device 601 (e.g., a memory device, such as a 3D NAND Flash memory device) including a microelectronic device structure 600. The microelectronic device structure 600 may be substantially similar to one of the microelectronic device structures 100, 200, 300, 400, 500 previously described with reference to FIGS. 5A through 5C, 6, 7, 8, and 9. For clarity and ease of understanding the drawings and associated description, some features (e.g., structures, materials) of the microelectronic device structures 100, 200, 300, 400, 500 previously described herein are not shown in FIG. 10. However, it will be understood that any features of the microelectronic device structures 100, 200, 300, 400, 500 previously described with reference to one or more of FIGS. 5A through 5C, 6, 7, 8, and 9 may be included in the microelectronic device structure 600 of the microelectronic device 601 described herein with reference to FIG. 10.

As shown in FIG. 10, in addition to the features of the microelectronic device structure 600 previously described herein in relation to one or more of the microelectronic device structures 100, 200, 300, 400, 500 (FIGS. 5A through 5C, 6, 7, 8, and 9), the microelectronic device 601 may further include cell pillar structures 652 vertically extending through each block 633 of the stack structure 632. The cell pillar structures 652 may be positioned within regions (e.g., memory array regions) of the block 633 horizontally offset (e.g., in the X-direction) from the stadium structures 614 (e.g., the first stadium structure 614A) (and, hence, the bridge regions 624 and the further filled slot structures 650) within the blocks 633. Intersections of the cell pillar structures 652 and the conductive material 634 of the tiers 636 of the stack structure 632 within the horizontal areas of the blocks 633 form strings of memory cells 654 vertically extending through each block 633 of the stack structure 632. For each string of memory cells 654, the memory cells 654 thereof may be coupled in series with one another. Within each block 633, the conductive material 634 of some of the tiers 636 of the stack structure 632 may serve as access line structures (e.g., word line structures) for the strings of memory cells 654 within the horizontal area of the block 633. In some embodiments, within each block 633, the memory cells 654 formed at the intersections of the conductive material 634 of some of the tiers 636 and the cell pillar structures 652 comprise so-called “MONOS” (metal - oxide - nitride - oxide - semiconductor) memory cells. In additional embodiments, the memory cells 654 comprise so-called “TANOS” (tantalum nitride - aluminum oxide - nitride - oxide - semiconductor) memory cells, or so-called “BETANOS” (band/barrier engineered TANOS) memory cells, each of which are subsets of MONOS memory cells. In further embodiments, the memory cells 654 comprise so-called “floating gate” memory cells including floating gates (e.g., metallic floating gates) as charge storage structures. The floating gates may horizontally intervene between central structures of the cell pillar structures 652 and the conductive material 634 of the different tiers 636 of the stack structure 632.

The microelectronic device 601 may further include at least one source structure 660, access line routing structures 664, first select gates 656 (e.g., upper select gates, drain select gates (SGDs)), select line routing structures 666, one or more second select gates 658 (e.g., lower select gates, source select gates (SGSs)), and digit line structures 662. The digit line structures 662 may vertically overlie and be coupled to the cell pillar structures 652 (and, hence, the strings of memory cells 654). The source structure 660 may vertically underlie and be coupled to the cell pillar structures 652 (and, hence, the strings of memory cells 654). In addition, the first contact structures 640A (e.g., select line contact structures) and the second contact structures 640B (e.g., access line contact structures) may couple various features of the microelectronic device 601 to one another as shown (e.g., the select line routing structures 666 to the first select gates 656; the access line routing structures 664 to the conductive materials 634 of the tiers 636 of the stack structure 632 underlying the first select gates 656 and defining access line structures of the microelectronic device 601).
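The relationship among cell pillar structures, access line tiers, and strings of memory cells described above reduces to simple counting. The following is an illustrative sketch only; the pillar and tier counts are hypothetical placeholders, not values from the described embodiments. One string forms per cell pillar structure, with one memory cell per intersection of that pillar and an access line tier, and the cells of each string are coupled in series.

    # Illustrative sketch (hypothetical counts): one string of series-coupled
    # memory cells per cell pillar structure, one cell per access line tier.
    NUM_PILLARS_PER_BLOCK = 4   # hypothetical cell pillar structures per block
    NUM_ACCESS_LINE_TIERS = 6   # hypothetical word line tiers in the stack

    def string_of_cells(pillar):
        """Cells of one string, ordered from the digit line side (top)
        toward the source structure side (bottom)."""
        return [(pillar, tier) for tier in range(NUM_ACCESS_LINE_TIERS)]

    strings = [string_of_cells(p) for p in range(NUM_PILLARS_PER_BLOCK)]
    total_cells = sum(len(s) for s in strings)
    assert total_cells == NUM_PILLARS_PER_BLOCK * NUM_ACCESS_LINE_TIERS
    print(f"{len(strings)} strings x {NUM_ACCESS_LINE_TIERS} cells per string "
          f"= {total_cells} memory cells")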
embodiments, the control logic region of the base structure 668 includes CMOS (complementary metal-oxide-semiconductor) circuitry. In such embodiments, the control logic region of the base structure 668 may be characterized as having a “CMOS under Array” (“CuA”) configuration.Thus, in accordance with embodiments of the disclosure, a memory device comprises a stack structure comprising tiers each comprising a conductive material and an insulative material vertically neighboring the conductive material. The stack structure is divided into blocks extending in parallel in a first direction and separated from one another in a second direction by dielectric slot structures. Each of the blocks comprises a stadium structure comprising opposing staircase structures individually having steps comprising horizontal ends of at least some the tiers of the stack structure, and a central portion between the opposing staircase structures in the first direction; first elevated regions neighboring opposing ends of the stadium structure in the first direction; and second elevated regions neighboring opposing sides of the stadium structure in the second direction, uppermost surfaces of the second elevated regions substantially coplanar with uppermost surfaces of the first elevated regions. The memory device further comprises at least one additional dielectric slot structure, and strings of memory cells. The at least one additional dielectric slot structure is within horizontal boundaries in the first direction of the central portion of the stadium structure of each of the blocks, and horizontally and vertically extends through the second elevated regions of each of the blocks. The strings of memory cells vertically extend through a portion of each of the blocks neighboring the stadium structure in the first direction.Microelectronic devices structures (e.g., the microelectronic device structures 100, 200, 300, 400, 500 previously described with reference to FIGS. 5A through 5C, 6, 7, 8, and 9) and microelectronic devices (e.g., the microelectronic device 601 (FIG. 10)) in accordance with embodiments of the disclosure may be used in embodiments of electronic systems of the disclosure. For example, FIG. 11 is a block diagram of an illustrative electronic system 703 according to embodiments of disclosure. The electronic system 703 may comprise, for example, a computer or computer hardware component, a server or other networking hardware component, a cellular telephone, a digital camera, a personal digital assistant (PDA), portable media (e.g., music) player, a Wi-Fi or cellular-enabled tablet such as, for example, an iPad® or SURFACE® tablet, an electronic book, a navigation device, etc. The electronic system 703 includes at least one memory device 705. The memory device 705 may comprise, for example, one or more of a microelectronic device structure (e.g., one of
the microelectronic device structures 100, 200, 300, 400, 500 previously described with reference to FIGS. 5A through 5C, 6, 7, 8, and 9) and a microelectronic device (e.g., the microelectronic device 601 (FIG. 10)) previously described herein. The electronic system 703 may further include at least one electronic signal processor device 707 (often referred to as a “microprocessor”). The electronic signal processor device 707 may, optionally, include one or more of a microelectronic device structure (e.g., one of the microelectronic device structures 100, 200, 300, 400, 500 previously described with reference to FIGS. 5A through 5C, 6, 7, 8, and 9) and a microelectronic device (e.g., the microelectronic device 601 (FIG. 10)) previously described herein. While the memory device 705 and the electronic signal processor device 707 are depicted as two (2) separate devices in FIG. 11, in additional embodiments, a single (e.g., only one) memory/processor device having the functionalities of the memory device 705 and the electronic signal processor device 707 is included in the electronic system 703. In such embodiments, the memory/processor device may include one or more of a microelectronic device structure (e.g., one of the microelectronic device structures 100, 200, 300, 400, 500 previously described with reference to FIGS. 5A through 5C, 6, 7, 8, and 9) and a microelectronic device (e.g., the microelectronic device 601 (FIG. 10)) previously described herein. The electronic system 703 may further include one or more input devices 709 for inputting information into the electronic system 703 by a user, such as, for example, a mouse or other pointing device, a keyboard, a touchpad, a button, or a control panel. The electronic system 703 may further include one or more output devices 711 for outputting information (e.g., visual or audio output) to a user such as, for example, a monitor, a display, a printer, an audio output jack, a speaker, etc. In some embodiments, the input device 709 and the output device 711 comprise a single touchscreen device that can be used both to input information to the electronic system 703 and to output visual information to a user. The input device 709 and the output device 711 may communicate electrically with one or more of the memory device 705 and the electronic signal processor device 707.Thus, in accordance with embodiments of the disclosure, an electronic system comprises an input device, an output device, a processor device operably coupled to the input device and the output device, and a memory device operably coupled to the processor device. The memory device comprises at least one microelectronic device structure comprising a stack structure comprising a vertically alternating sequence of conductive material and insulative material arranged in tiers. The stack structure further comprises at least two blocks
separated by at least one intervening dielectric structure. Each of the at least two blocks comprises two elevated regions, a stadium structure, and two additional elevated regions. The stadium structure is interposed between the two elevated regions in a first horizontal direction and comprises staircase structures opposing one another in the first horizontal direction. The staircase structures each have steps comprising horizontal ends of the tiers of the stack structure. The two additional elevated regions neighbor opposing sides of the stadium structure in a second horizontal direction perpendicular to the first horizontal direction. Upper boundaries of the two additional elevated regions are substantially coplanar with upper boundaries of the two elevated regions. The at least one microelectronic device structure further comprises at least one dielectric slot structure within horizontal boundaries of the stadium structure in the first horizontal direction. The at least one dielectric slot structure horizontally and vertically extends through each of the two additional elevated regions of each of the at least two blocks of the stack structure.The structures, devices, and methods of the disclosure advantageously facilitate one or more of improved microelectronic device performance, reduced costs (e.g., manufacturing costs, material costs), increased miniaturization of components, and greater packaging density as compared to conventional structures, conventional devices, and conventional methods. The structures, devices, and methods of the disclosure may also improve scalability, efficiency, and simplicity as compared to conventional structures, conventional devices, and conventional methods.Additional, non-limiting example embodiments of the disclosure are set forth below:Embodiment 1 : A microelectronic device, comprising: a stack structure comprising a vertically alternating sequence of conductive material and insulative material arranged in tiers, the stack structure having blocks separated from one another by first dielectric slot structures, each of the blocks comprising: two crest regions; a stadium structure interposed between the two crest regions in a first horizontal direction and comprising opposing staircase structures each having steps comprising edges of the tiers of the stack structure; two bridge regions neighboring opposing sides of the stadium structure in a second horizontal direction orthogonal to the first horizontal direction and having upper surfaces substantially coplanar with upper surfaces of the two crest regions; and at least one second dielectric slot structure within horizontal boundaries of the stadium structure in the first horizontal direction and partially vertically extending through and segmenting each of the two bridge regions.
Embodiment 2: The microelectronic device of Embodiment 1, further comprising a filled trench vertically overlying within horizontal boundaries of the stadium structure, the filled opening comprising: a first dielectric material on the opposing staircase structures of the stadium structure and on inner sidewalls of the two bridge regions; a second dielectric material on the first dielectric material and having a different material composition than the first dielectric material; and a third dielectric material on the second dielectric material and having a different material composition than the second dielectric material.Embodiment 3: The microelectronic device of Embodiment 2, wherein: the first dielectric material comprises a dielectric oxide material; and the second dielectric material comprises a dielectric nitride material.Embodiment 4: The microelectronic device of one of Embodiments 2 and 3, wherein a portion of the at least one second dielectric slot structure is positioned within horizontal boundaries of the filled trench, the portion of at least one second dielectric slot structure partially vertically extending through the filled trench.Embodiment 5 : The microelectronic device of Embodiment 4, wherein, for each of the blocks of the stack structure: the portion of the at least one second dielectric slot structure continuously extends in the second horizontal direction from a first of the two bridge regions to a second of the two bridge regions; an additional portion of the at least one second dielectric slot structure continuously extends in the second horizontal direction through the first of the two bridge regions and to a first of the first dielectric slot structures; and a further portion of the at least one second dielectric slot structure continuously extends in the second horizontal direction through the second of the two bridge regions and to a second of the first dielectric slot structures.Embodiment 6: The microelectronic device of any one of Embodiments 1 through 5, wherein the at least one second dielectric slot structure comprises only one second dielectric slot structure continuously extending in the second horizontal direction across more than one of the blocks of the stack structure and across at least one of the first dielectric slot structures interposed between the more than one of the blocks of the stack structure.Embodiment 7: The microelectronic device of any one of Embodiments 1 through 6, wherein the at least one second dielectric slot structure is within boundaries in the first horizontal direction of a central portion of the stadium structure interposed between the opposing staircase structures of the stadium structure.
Embodiment 8: The microelectronic device of any one of Embodiments 1 through 5, wherein the at least one second dielectric slot structure comprises at least two second dielectric slot structures extending in parallel in the second horizontal direction.Embodiment 9: The microelectronic device of any one of Embodiments 1 through 5, wherein the at least one second dielectric slot structure comprises at least two second dielectric slot structures extending in series in the second horizontal direction.Embodiment 10: The microelectronic device of any one of Embodiments 1 through 9, further comprising third dielectric slot structures with a horizontal area of each of the blocks of the stack structure, the third dielectric slot structures partially vertically extending through each of the blocks of the stack structure and horizontally extending in the first horizontal direction through one of the two crest regions of each of the blocks of the stack structure and into one of the opposing staircase structures of the stadium structure of each of the blocks of the stack structure.Embodiment 11 : The microelectronic device of Embodiment 10, wherein each of the third dielectric slot structures is completely horizontally offset from the at least one second dielectric slot structure in the first horizontal direction.Embodiment 12: A method of forming a microelectronic device, comprising: forming a preliminary stack structure comprising a vertically alternating sequence of sacrificial material and insulative material arranged in tiers, the preliminary stack structure having blocks separated from one another by slots, each of the blocks comprising: two crest regions; two bridge regions horizontally extending in parallel from and between the two crest regions and having upper boundaries substantially coplanar with upper boundaries of the two crest regions; and a stadium structure interposed between the two crest regions in a first horizontal direction and interposed between the two bridge regions in a second horizontal direction orthogonal to the first horizontal direction, the stadium structure comprising opposing staircase structures each having steps comprising edges of the tiers of the preliminary stack structure; replacing the sacrificial material of the preliminary stack structure with conductive material to form a stack structure comprising a vertically alternating sequence of the conductive material and the insulative material arranged in the tiers, the stack structure having the blocks separated from one another by the slots; filling the slots with dielectric material to form first dielectric slot structures; and forming at least one second dielectric slot structure within horizontal boundaries of the stadium structure in the first horizontal direction and partially vertically extending through and segmenting each of the two bridge regions.
Embodiment 13: The method of Embodiment 12, further comprising, prior to replacing the sacrificial material of the preliminary stack structure with conductive material: forming a first dielectric material on surfaces of the two crest regions, the two bridge regions, and the opposing staircase structures of the stadium structure; forming a second dielectric material on the first dielectric material, the second dielectric material having a different material composition than the first dielectric material; and forming a third dielectric material on the second dielectric material, the third dielectric material having a different material composition than the second dielectric material.

Embodiment 14: The method of Embodiment 13, further comprising: selecting the first dielectric material to comprise silicon dioxide; selecting the second dielectric material to comprise silicon nitride; and selecting the third dielectric material to comprise additional silicon dioxide.

Embodiment 15: The method of one of Embodiments 13 and 14, wherein forming at least one second dielectric slot structure further comprises forming the at least one second dielectric slot structure to extend in the second horizontal direction through portions of the first dielectric material, the second dielectric material, the third dielectric material, and the two bridge regions.

Embodiment 16: The method of any one of Embodiments 12 through 15, wherein forming at least one second dielectric slot structure further comprises forming the at least one second dielectric slot structure to extend in the second horizontal direction through pairs of the first dielectric slot structures neighboring opposing sides of each of the blocks of the stack structure.

Embodiment 17: The method of any one of Embodiments 12 through 15, wherein forming at least one second dielectric slot structure comprises forming at least two second dielectric slot structures positioned in series with one another in the second horizontal direction, a first of the at least two second dielectric slot structures horizontally and vertically extending through a first of the two bridge regions of one of the blocks of the stack structure, and a second of the at least two second dielectric slot structures horizontally and vertically extending through a second of the two bridge regions of the one of the blocks of the stack structure.

Embodiment 18: The method of any one of Embodiments 12 through 15, wherein forming at least one second dielectric slot structure comprises forming at least two second dielectric slot structures positioned in parallel with one another in the second horizontal direction, each of the at least two second dielectric slot structures horizontally and vertically extending through each of the two bridge regions of at least one of the blocks of the stack structure.

Embodiment 19: The method of any one of Embodiments 12 through 18, further comprising forming third dielectric slot structures within a horizontal area of each of the blocks of the stack structure, the third dielectric slot structures completely offset from the at least one second dielectric slot structure in the first horizontal direction and extending in the first horizontal direction through one of the two crest regions of each of the blocks of the stack structure and terminating within a horizontal area of one of the opposing staircase structures of each of the blocks of the stack structure.

Embodiment 20: The method of Embodiment 19, further comprising forming lower boundaries of the third dielectric slot structures to be substantially coplanar with lower boundaries of the at least one second dielectric slot structure.

Embodiment 21: A memory device, comprising: a stack structure comprising tiers each comprising a conductive material and an insulative material vertically neighboring the conductive material, the stack structure divided into blocks extending in parallel in a first direction and separated from one another in a second direction by dielectric slot structures, each of the blocks comprising: a stadium structure comprising: opposing staircase structures individually having steps comprising horizontal ends of at least some of the tiers of the stack structure; and a central portion between the opposing staircase structures in the first direction; first elevated regions neighboring opposing ends of the stadium structure in the first direction; and second elevated regions neighboring opposing sides of the stadium structure in the second direction, uppermost surfaces of the second elevated regions substantially coplanar with uppermost surfaces of the first elevated regions; at least one additional dielectric slot structure within horizontal boundaries in the first direction of the central portion of the stadium structure of each of the blocks, and horizontally and vertically extending through the second elevated regions of each of the blocks; and strings of memory cells vertically extending through a portion of each of the blocks neighboring the stadium structure in the first direction.

Embodiment 22: The memory device of Embodiment 21, further comprising, within each of the blocks, a filled trench vertically overlying and within a horizontal area of the stadium structure, the filled trench comprising: a dielectric oxide liner material on the opposing staircase structures and the central portion of the stadium structure, and on inner side surfaces of the bridge regions; a dielectric nitride liner material on the dielectric oxide liner material; and a dielectric fill material on the dielectric nitride liner material.

Embodiment 23: The memory device of one of Embodiments 21 and 22, further comprising, within each of the blocks, further dielectric slot structures extending in parallel with one another in the first direction and completely horizontally offset from the at least one additional dielectric slot structure in the first direction.

Embodiment 24: The memory device of any one of Embodiments 21 through 23, further comprising: digit lines overlying the stack structure and electrically coupled to the strings of memory cells; a source structure underlying the stack structure and electrically coupled to the strings of memory cells; conductive contact structures on at least some of the steps of the opposing staircase structures of the stadium structure; conductive routing structures coupled to the conductive contact structures; and control logic devices coupled to the source structure, the digit lines, and the conductive routing structures.

Embodiment 25: An electronic system, comprising: an input device; an output device; a processor device operably coupled to the input device and the output device; and a memory device operably coupled to the processor device and comprising at least one microelectronic device structure comprising: a stack structure comprising a vertically alternating sequence of conductive material and insulative material arranged in tiers, the stack structure comprising at least two blocks separated by at least one intervening dielectric structure, each of the at least two blocks comprising: two elevated regions; a stadium structure interposed between the two elevated regions in a first horizontal direction and comprising staircase structures opposing one another in the first horizontal direction, the staircase structures each having steps comprising horizontal ends of the tiers of the stack structure; two additional elevated regions neighboring opposing sides of the stadium structure in a second horizontal direction perpendicular to the first horizontal direction, upper boundaries of the two additional elevated regions substantially coplanar with upper boundaries of the two elevated regions; and at least one dielectric slot structure within horizontal boundaries of the stadium structure in the first horizontal direction, the at least one dielectric slot structure horizontally and vertically extending through each of the two additional elevated regions of each of the at least two blocks of the stack structure.

While the disclosure is susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and have been described in detail herein. However, the disclosure is not limited to the particular forms disclosed. Rather, the disclosure is to cover all modifications, equivalents, and alternatives falling within the scope of the following appended claims and their legal equivalents. For example, elements and features disclosed in relation to one embodiment of the disclosure may be combined with elements and features disclosed in relation to other embodiments of the disclosure. |
PROBLEM TO BE SOLVED: To provide a memory system that includes a nonvolatile (NV) memory device with an asymmetry between the intrinsic read operation delay and the intrinsic write operation delay, where the mismatched delays can create unusable gaps on the command bus, the DQ bus, or both, and where, in workloads that mix read and write commands, such gaps limit the maximum achievable bandwidth.

SOLUTION: A system 100 can select to perform memory access operations with an NV memory device having the asymmetry inherent in an array 140, in which case write operations have a lower delay than read operations. Alternatively, the system can select to perform memory access operations with the NV memory device configured with a write operation delay that matches the read operation delay.

SELECTED DRAWING: Figure 1 |
1. A non-volatile (NV) memory device comprising: a register to store a value to select between two modes of write operation delay; and an array of memory cells having an asymmetry between the intrinsic read operation delay and the intrinsic write operation delay; wherein a first mode has a first write operation delay that does not match the read operation delay, and a second mode has a second write operation delay that matches the read operation delay.

2. The NV memory device according to claim 1, wherein the NV memory device is set to the first mode by default.

3. The NV memory device according to claim 1, wherein the NV memory device is set to the second mode by default.

4. The NV memory device according to any one of claims 1 to 3, wherein the register is dynamically configurable during runtime of the NV memory device.

5. The NV memory device according to any one of claims 1 to 4, wherein the array of memory cells includes an array of three-dimensional crosspoint (3DXP) memory cells.

6. A controller comprising: a hardware interface to couple to a plurality of non-volatile (NV) memory devices having an asymmetry between the intrinsic read operation delay and the intrinsic write operation delay; and a scheduler to schedule a command to write a register value of the NV memory device, the value to select between two modes of write operation delay, wherein a first mode has a first write operation delay that does not match the read operation delay, and a second mode has a second write operation delay that matches the read operation delay.

7. The controller according to claim 6, wherein the NV memory device is set to the first mode by default.

8. The controller according to claim 6, wherein the NV memory device is set to the second mode by default.

9. The controller according to any one of claims 6 to 8, wherein the scheduler is to schedule the command to write the register to select the first mode when the scheduler has predominantly write operations to transmit to the NV memory device.

10. The controller according to any one of claims 6 to 8, wherein the scheduler is to schedule the command to write the register to select the second mode when the scheduler has a mixture of write and read operations to transmit to the NV memory device.

11. The controller according to claim 10, wherein, when the second mode is selected, the scheduler is to schedule commands for write operations and commands for read operations in any order.

12. The controller according to any one of claims 6 to 11, wherein the scheduler is to schedule the command to dynamically write the value of the register during runtime of the NV memory device.

13. The controller according to claim 6, wherein the hardware interface is to couple to multiple ranks of the NV memory devices.

14. The controller according to claim 13, wherein the scheduler is to switch command transmission between different ranks during the write and read operation delays.

15. The controller according to any one of claims 6 to 14, wherein the NV memory device includes a three-dimensional crosspoint (3DXP) memory device.

16. A method for configuring write operation delay, comprising: receiving a first command to set a value of a register to select between two modes of write operation delay for a non-volatile (NV) memory device, wherein a first mode has a first write operation delay that does not match the read operation delay, a second mode has a second write operation delay that matches the read operation delay, and the NV memory device has an asymmetry between the intrinsic read operation delay and the intrinsic write operation delay; and receiving a second command to trigger a write operation, wherein the write operation is performed with the write operation delay of the selected first mode or second mode.

17. The method of claim 16, further comprising setting the first mode as a default.

18. The method of claim 16, further comprising setting the second mode as a default.

19. The method according to any one of claims 16 to 18, wherein receiving the first command comprises receiving the first command during runtime of the NV memory device, to dynamically configure the register during runtime.

20. The method of any one of claims 16 to 19, wherein the NV memory device comprises a three-dimensional crosspoint (3DXP) memory device. |
Configurable write command delay in non-volatile memory

The description generally relates to memory devices, and a more specific description relates to a configurable write command delay in memory that has different write command delays and read command delays.

Conventional memory devices, such as dynamic random access memory (DRAM), have matching latencies between read and write operations. Thus, the controller can schedule commands in any order, with a fixed latency between sending commands on the command bus and using the data bus.

Emerging three-dimensional (3D) crosspoint (3DXP) media have consistent command and data bus utilization but non-uniform read and write latencies. Read and write commands require the same number of clock cycles (tCK) to send on the command bus, and the data requires the same number of clock cycles for read and write commands. However, the delay between a write command and the data on the data (DQ) bus is significantly shorter than the delay between a read command and the data on the DQ bus.

The delay between the write command and the data on the DQ bus refers to the time it takes for the controller to drive the data onto the bus after sending the write command. Write commands can effectively run as a background asynchronous process, because the controller can simply send the command, drive the data immediately after the command is sent, and then be free to perform other work. The delay between the read command and the data on the DQ bus refers to the time it takes for the memory device to access the data from the storage medium and drive it onto the bus. Mismatched delays can result in unusable gaps on the command bus or DQ bus, or on both the command and DQ buses. For workloads with a mix of read and write commands, such gaps limit the maximum achievable bandwidth.

One traditional approach to addressing the bandwidth inefficiency resulting from the latency mismatch is to modify the command interface to have twice the bandwidth of the data interface. However, such an approach requires increased controller and media power, due to higher bit rates or an increase in the number of command signal lines on the command bus.

The following description includes a discussion of figures in which illustrations are given as examples of implementation. The drawings should be understood as examples, not as limitations. As used herein, reference to one or more examples should be understood as describing a particular feature, structure, or property contained in at least one implementation of the invention. Terms such as "in one example" or "in an alternative" appearing herein provide examples of implementations of the invention, and do not necessarily all refer to the same implementation. However, they are also not necessarily mutually exclusive.

FIG. 1 is a block diagram of an example of a system that has an inherent difference between read delay and write delay and can select between a non-matching write delay mode and a matching write delay mode.
FIG. 2A is a block diagram of an example of a single-rank memory system in which a non-matching or matching write delay mode can be selected.
FIG. 2B is a block diagram of an example of a two-rank memory system in which a non-matching or matching write delay mode can be selected.
FIG. 3 is a timing diagram of an example of a system having an inherent read delay and write delay mismatch, with selectable write delays.
FIG. 4 is a table of an example of bus utilization data illustrating the improved utilization of a system with an inherent read delay and write delay mismatch that implements selectable write delays.
FIG. 5 is a flow diagram of an example of a process for selectable write delays in a system with an inherent read delay and write delay mismatch.
FIG. 6 is a block diagram of an example of a memory subsystem in which a selectable write delay can be implemented.
FIG. 7 is a block diagram of an example of a computing system in which a selectable write delay can be implemented.
FIG. 8 is a block diagram of an example of a mobile device in which a selectable write delay can be implemented.

Descriptions of certain details and implementations follow, including non-limiting descriptions of the figures, which can illustrate some or all of the examples, as well as other potential implementations.

As described herein, a memory system includes a non-volatile (NV) memory device that has an asymmetry between the intrinsic read operation delay and the intrinsic write operation delay. The system can choose to perform memory access operations with the asymmetric NV memory device, in which case the write operation may have a longer delay to completion than the read operation, but the write operation has a lower delay than the read operation between the command and the use of the data bus. Alternatively, the system can choose to perform memory access operations with the NV memory device configured with a write operation delay that matches the read operation delay.

In one example, the memory device includes a configurable or selectable mode of the storage medium to either program the write latency to the component minimum or program it to match the read latency. When the write latency is at the component minimum, the write latency is based on the architecture of the storage medium itself, which defines the minimum delay required by the medium between the command and the reception of write data. The value to match the read latency causes the memory device to add delay to match the write delay to the read delay. In one example, the memory device matches the write latency to the read latency by having the internal storage controller internally add a delay to the write command, matching the data bus usage of the write command to the inherent delay of the read command. Such an approach may allow the memory device to handle write commands in the same way with respect to media access and data bus usage, simply processing the command after the added delay.

By programming or setting the write latency to match the read latency, the host controller may be able to issue write or read commands based on command bus utilization, because the DQ utilization is the same for both commands. Thus, by selectively matching the write and read latencies for particular workloads, DQ bus utilization can be increased, which improves the bandwidth of the NV memory device. In one example, the NV memory device is non-volatile storage with a three-dimensional crosspoint (3DXP) memory array. As one example, the NV memory device can be an Optane product available from Intel Corporation.
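The two delay modes can be pictured with a brief sketch. The following C fragment is illustrative only: the type and field names are hypothetical, and an actual device realizes the selection in hardware through a configuration register rather than in software.

#include <stdint.h>

/* Two write-delay modes, per the description: non-matching keeps the
 * intrinsic media minimum; matching pads the write delay to equal the
 * read delay. */
enum wr_delay_mode {
    WR_DELAY_NONMATCHING,
    WR_DELAY_MATCHING,
};

struct nv_media {
    uint32_t intrinsic_wr_delay_tck; /* command-to-DQ delay for writes */
    uint32_t intrinsic_rd_delay_tck; /* command-to-DQ delay for reads */
    enum wr_delay_mode mode;         /* value held in a config register */
};

/* Delay the device applies between a write command and DQ bus use. */
static uint32_t applied_wr_delay(const struct nv_media *m)
{
    if (m->mode == WR_DELAY_MATCHING)
        return m->intrinsic_rd_delay_tck; /* padded by (RD - WR) */
    return m->intrinsic_wr_delay_tck;     /* media minimum */
}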
It has been observed that selectively matching the write and read latencies improves data bus utilization and increases the overall bandwidth of a 3DXP device by approximately 10%.

The host controller or system controller manages access to the memory devices. By allowing the host controller to selectively apply matching write and read latencies, the controller scheduler design can be simplified with respect to tracking when a read or write command can be issued to a memory device. Matching the write/read latencies can increase the bandwidth, or achievable performance, available for customer use. In one example, the host controller can issue any command at any time by recognizing that the read and write latencies are the same, rather than mismatched based on the intrinsic latencies.

Some memory technologies, such as DRAM (dynamic random access memory), have symmetric latencies between reads and writes. Memory technologies with such symmetry do not have the inefficiency of the latency mismatch. A programmable write delay takes an asymmetric memory technology and allows it to operate as a symmetric memory technology. The programmability allows the system to choose between mismatched or matching latencies, depending on the workload. In one example, if the workload has a high percentage of writes, the system can choose the non-matching mode to allow the memory device to use the minimum latency. In one example, if the workload has a mix of reads and writes, the system can select the matching latency mode to make the read and write latencies symmetric.

FIG. 1 is a block diagram of an example of a system that has an inherent difference between read delay and write delay and can select between a non-matching write delay mode and a matching write delay mode. The system 100 includes a host 110 having a memory controller 120 coupled to the memory device 130.

The memory device 130 includes a memory array 140 that represents an array of memory cells or storage cells. A memory cell stores a bit of data, or multiple bits for a multilevel cell. In one example, the array 140 is divided as a bank of memory or another subset of memory. In one example, the memory device 130 is part of a group of memory devices, where one or more memory devices are organized as ranks of memory. A memory rank is a group of memory resources that share a chip select or enable signal and are therefore accessed in parallel.

In one example, the array 140 includes non-volatile memory cells. Non-volatile (NV) memory or an NV memory device maintains its state even when power to the memory is interrupted. Volatile memory has an indeterminate state when power to the memory is interrupted. In one example, the NV medium of the array 140 is a 3DXP medium. The array 140 has a read-command-to-DQ-bus delay that does not match the write-command-to-DQ-bus delay. In one example, the array 140 has consistent command and data utilization with non-uniform read/write latencies. Due to the consistent command and data usage, the read and write commands require the same number of tCKs or clock cycles, and the data requires the same number of tCKs regardless of whether it is sent from the memory device 130 in response to a read or sent to the memory device 130 in connection with a write.

In one example, the array 140 has an NV medium with a command-to-DQ-bus-utilization write (WR) delay 142 that is significantly shorter than the read (RD) delay 144. The WR delay 142 and the RD delay 144 represent the intrinsic minimums.
The minimum is a value that must be observed to ensure proper operation of the device. A delay can be considered an intrinsic delay if it is a delay related to the physical and operational characteristics of the medium itself. The difference between the WR delay 142 and the RD delay 144 results in unusable gaps on the command bus or DQ bus, or on both the command and DQ buses. For workloads with a mix of read and write commands, these gaps limit the maximum achievable bandwidth between the memory device 130 and the host 110 or memory controller 120.

The host 110 represents the computing platform to which the memory device 130 is coupled. For example, the host 110 can be or include a computer or other computing device. The memory controller 120 represents a controller to manage access to the memory device 130. In one example, the memory controller 120 is part of a host processor (not specifically shown) of the host 110. Alternatively, depending on the connection of the memory device 130, the memory controller 120 can be considered a storage controller. In one example, the non-volatile memory of the memory device 130 can be coupled to a storage bus, such as a peripheral component interconnect express (PCIe) bus. In one example, the non-volatile memory of the memory device 130 is non-volatile, but is also byte addressable and randomly accessible, and can be coupled to a system memory bus, such as a double data rate (DDR) memory bus.

The memory controller 120 includes a scheduler 122 to manage the scheduling of a sequence of commands and their transmission to the memory device 130. The scheduler 122 includes logic to determine the order of commands and the command timing requirements. The memory controller 120 determines which commands to transmit, and in what order. The scheduler 122 determines the command order to ensure compliance with the timing requirements. In one example, the scheduler 122 determines in what order to schedule commands to the memory device 130 based on whether the memory device 130 is configured with the WR delay 142 matching the RD delay 144, or with the WR delay 142 not matching the RD delay 144.

The memory controller 120 includes command logic 124 that generates commands to send to the memory device 130. The commands can include write commands and read commands. The memory controller 120 sends a read command over a command bus (not specifically shown), which may also be referred to as a command and address bus, and after a delay period the memory device 130 will drive data onto the data bus (not specifically shown). The memory controller 120 sends a write command on the command bus and then sends the data to the memory device on the data bus.

In one example, the memory controller 120 includes a WR delay mode 126 that indicates the delay mode of the memory device 130. While the memory controller 120 can set the delay mode for the memory device 130, the memory controller 120 can also track which delay mode the memory device 130 is applying. The memory controller 120 needs to know which delay mode is applied, to determine how the scheduler 122 schedules commands and when the data bus will be in use.

The memory device 130 includes a controller 132, which represents logic in the memory device to receive and decode commands from the memory controller 120. The controller 132 represents control logic within the memory device 130 and is separate from the memory controller 120 of the host 110.
The controller 132 can trigger operations within the memory device 130 to execute a command sent by the memory controller 120.

The memory device 130 includes one or more registers 134, which represent storage locations to store configuration information or values related to the operation of the memory device 130. In one example, the registers 134 include one or more mode registers. In one example, the registers 134 include configuration information to control the write delay mode of the memory device 130. The WR delay mode 136 represents the write delay mode of the memory device 130.

In one example, the WR delay mode 136 includes two modes: a non-matching mode, which can be referred to as a first mode, and a matching mode, which can be referred to as a second mode. The labels of first and second mode can be swapped in different implementations. The non-matching mode refers to a WR delay mode 136 in which the WR delay is different from the RD delay. The matching mode refers to a WR delay mode 136 with additional delay added so that the WR delay matches the RD delay. In one example, the WR delay mode can be configured dynamically during runtime of the memory device 130. For example, the write mode can be set or configured dynamically by setting the register 134 during memory operation.

In one example, when the WR delay mode 136 indicates the matching mode and the controller 132 receives a write command, the controller 132 delays the processing of the write command by the difference, or approximately the difference, between the intrinsic RD delay 144 and the intrinsic WR delay 142. By delaying by the difference between the two delays, the applied WR delay is extended to match the RD delay. Thus, the WR delay mode 136 can selectively remove the asymmetry between the WR delay 142 and the RD delay 144 in the matching mode, and can maintain the mismatched delays in the non-matching mode. In one example, the WR delay mode 136 defaults to the non-matching mode. In one example, the WR delay mode 136 defaults to the matching mode.

In one example, the memory controller 120 determines which mode to set for the memory device 130 based on the mix of read and write commands scheduled by the scheduler 122. In one example, if the scheduler has mostly or primarily write operations to send to the memory device 130, the memory controller 120 sends a command to set the register 134 to select the non-matching mode. The non-matching mode allows the scheduler to schedule commands closer together, because the delay to DQ bus utilization is shorter. In one example, if the scheduler has a mix of read and write commands to send to the memory device 130, the memory controller 120 sends a command to set the register 134 to select the matching mode. The matching mode allows the scheduler to send read and write commands in any order, which can result in improved DQ bus utilization. In either case, it will be understood that the memory controller does not need to send a command to change the WR delay mode if the write delay mode is already set to the best mode for the scheduled workload.
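As a rough illustration of this selection policy, the following C sketch (continuing the earlier fragment) chooses a mode from the queued command mix; the 90% write threshold is an assumed value for illustration and is not specified by the description.

/* Illustrative host-side policy: pick the write-delay mode from the
 * mix of commands the scheduler has queued. Threshold is assumed. */
static enum wr_delay_mode choose_wr_delay_mode(unsigned queued_writes,
                                               unsigned queued_reads)
{
    unsigned total = queued_writes + queued_reads;

    /* Mostly writes: non-matching mode keeps the shorter intrinsic
     * write delay, so writes stream with a minimal command-to-data gap. */
    if (total == 0 || queued_writes * 10 >= total * 9)
        return WR_DELAY_NONMATCHING;

    /* Mixed traffic: matching mode gives uniform DQ timing, so the
     * scheduler can issue reads and writes in any order. */
    return WR_DELAY_MATCHING;
}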
FIG. 2A is a block diagram of an example of a single-rank memory system in which a non-matching or matching write delay mode can be selected. System 202 represents elements of a computing system. The system 202 can be considered to have a memory subsystem that includes a memory controller 220 and a memory 230. The host 210 represents a hardware platform that controls the memory subsystem. The host 210 includes one or more processors (e.g., a central processing unit (CPU) or a graphics processing unit (GPU)) that generate requests for data stored in the memory 230.

The host 210 includes a memory controller 220, which can be integrated onto the processor device. The memory controller 220 includes I/O (input/output) 212 to connect to the memory 230. The I/O includes connectors, signal lines, drivers, and other hardware to interconnect the memory devices to the host 210. The I/O 212 can include a command I/O represented by the command (CMD) bus 242, and a data I/O represented by the DQ (data) bus 244. The CMD bus 242 includes command signal lines that allow the memory controller 220 to send commands to the memory 230. The DQ bus 244 includes multiple data signal lines. For an N-bit interface, the DQ bus 244 will include DQ[0:N-1].

The memory controller 220 includes command (CMD) logic 224 to generate commands for the memory 230. The commands can be commands for data access (e.g., read and write), or commands for configuration (e.g., mode register commands). The memory controller 220 includes a scheduler 222 to schedule when to send commands in a sequence of operations. The scheduler 222 can control the timing of the I/O in accordance with known timing, to improve the likelihood that the I/O will be error free. The timing is set through training, and can be adjusted in accordance with the write delay mode of the memory 230.

The memory 230 can include individual memory devices, or can represent a memory module. System 202 illustrates a single rank of memory devices in the memory 230. Rank refers to a collection or group of memory devices that share a select line; thus, memory devices in a rank will perform operations in parallel. Rank[0] is illustrated as including N memory dies (die[(N-1):0]), where N can be any integer of one or more for one or more memory dies.

With a single rank, the system 202 cannot interleave access between ranks. Even without the ability to interleave access, the system 202 can benefit from the ability to select between matching and non-matching write delays. System 202 can implement write delay selection in accordance with any example of system 100.

FIG. 2B is a block diagram of an example of a two-rank memory system in which a non-matching or matching write delay mode can be selected. System 204 represents an example of a system in accordance with system 202. The host 210, memory controller 220, scheduler 222, command logic 224, I/O 212, CMD bus 242, and DQ bus 244 can be as described above for system 202.

The memory 230 can include individual memory devices, or can represent a memory module. System 204 illustrates two ranks of memory devices in the memory 230. Rank[0] is illustrated as including N memory dies (die[(N-1):0]), where N can be any integer of one or more for one or more memory dies. Rank[1] is likewise illustrated as including N memory dies (die[(N-1):0]).

The two ranks allow the system 204 to interleave access between the ranks. Although a two-rank system is illustrated as an example, it will be understood that systems with more than two ranks can also benefit from interleaving, and are expected to have similar benefits from the matching write delay mode. Interleaving access between ranks means switching access from one rank to another.
Thus, the memory controller 220 can switch the transmission of commands between ranks during the write operation delay and the read operation delay. If the memory system has at least two devices per channel, the memory controller 220 can write to one of the ranks and read from the other. A one-rank system has a single device, a single die, or a single set of devices per channel. The ability to interleave access between different ranks allows the system 204 to benefit from the ability to select between matching and non-matching write delays, and also improves bandwidth utilization. System 204 can implement write delay selection in accordance with any example of system 100.

FIG. 3 is a timing diagram of an example of a system having an inherent read delay and write delay mismatch, with selectable write delays. Diagram 310 illustrates a timing diagram for a scenario in which there is a latency mismatch between the WR delay and the RD delay. Diagram 310 can illustrate the selection of the non-matching WR latency mode in accordance with an example of system 100. Diagram 320 illustrates a timing diagram for a scenario in which the latencies match between the WR delay and the RD delay. Diagram 320 can illustrate the selection of the matching WR latency mode in accordance with an example of system 100.

Diagrams 310 and 320 illustrate specific examples for a specific system configuration. It will be understood that different timing can apply for different system configurations. Additionally, different devices can have different characteristics, which will result in different system behavior.

In diagrams 310 and 320, each segment of the timing diagram represents a clock cycle. In the example illustrated, a read or write command requires 8 clock cycles (8 tCK) to issue. Additionally, 8 clock cycles are needed for the data cycles of a write or read command. While the data cycles could be different from 8 tCK, it will be understood that diagrams 310 and 320 provide 8 clock cycles as an example.

The reads and writes are labeled sequentially to identify the timing flow from a command to the data associated with that specific command on the DQ bus. In diagram 310, R0 indicates rank[0] and R1 indicates rank[1]. It will thus be understood that diagram 310 represents a two-rank system, with the commands on the command bus indicating switching between the two ranks. Additionally, there is DQ[R0] representing the data bus for rank[0] and DQ[R1] representing the data bus for rank[1].

Starting at the left of the diagram, the first command is a write to rank[0] (command WR0), followed by WR1 to rank[1], WR2 to rank[0], and WR3 to rank[1]. The "L" in front of WR2 indicates the write-to-write delay required by the system. The delay can be different for different systems, and is illustrated only as an example.

In diagram 310, D0, corresponding to WR0, appears on DQ[R0] after approximately 19 tCK, which represents the minimum WR delay of the medium. D1 follows on DQ[R1], followed by D2 on DQ[R0] and D3 on DQ[R1]. D1, D2, and D3 follow their write commands with a similar delay.

Diagram 310 also illustrates RD4 to rank[0], RD5 to rank[1], RD6 to rank[0], RD7 to rank[1], RD8 to rank[0], and RD9 to rank[1]. In one example, the access can include an identification signal or other such signal that precedes the read commands for the write-to-read transition. Such signaling is not included in the diagram, as it is not required for all implementations.
The white blocks across the DQ bus, labeled with the WR-RD latency difference, indicate the long delay between a read command and the read data on the DQ bus. It will be observed that this delay is much longer than the minimum delay for the write data. In one example tested, the latency difference was 73 tCK at 2400 MT/s, 97 tCK at 3200 MT/s, and 126 tCK at 4000 MT/s.

After the WR-RD latency difference blocks, there is a block labeled "L" on the data bus, which represents the intrinsic latency in the system when switching between writes and reads. This delay can be different in different systems. The purpose of illustrating the delay in diagram 310 is to show that there can be intrinsic system delays that exist even when the WR delay is matched to the RD delay. Not all delay or signaling is illustrated in diagram 310.

After the delay, D4, corresponding to RD4, appears on DQ[R0], followed by D5 on DQ[R1], then D6 on DQ[R0], and then D7 on DQ[R1]. D5, D6, and D7 follow their read commands with a similar delay. Data D8 and D9 are not shown in diagram 310, but follow similarly.

Diagram 310 illustrates an example for a 3DXP memory in which the delay between the write command and the write data is significantly shorter than the delay between the read command and the read data on the DQ bus. The delay asymmetry or mismatch can lead to scenarios in which the memory controller must guarantee that data from a previous read command does not conflict with the data bus time needed for a write command, even though the command or CA bus is available to issue the command. To avoid such a conflict, the memory controller traditionally inserts an idle state. The boxes showing the latency difference are an example of the DQ idle state.
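The idle time the controller must insert in the non-matching mode follows from the two intrinsic delays. A minimal sketch of that computation, reusing the tCK figures from the tested example above:

/* Idle gap (in tCK) inserted before a write that follows a read in the
 * non-matching mode, so the two do not collide on the DQ bus. With the
 * tested numbers above: 73 tCK at 2400 MT/s, 97 tCK at 3200 MT/s, and
 * 126 tCK at 4000 MT/s. */
static uint32_t wr_after_rd_idle_tck(uint32_t rd_delay_tck,
                                     uint32_t wr_delay_tck)
{
    return rd_delay_tck > wr_delay_tck ? rd_delay_tck - wr_delay_tck : 0;
}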
In diagram 320, the sequence of write and read commands is identical to the sequence illustrated in diagram 310. Starting at the left of the diagram, the first command is a write to rank[0] (command WR0), followed by WR1 to rank[1], WR2 to rank[0], and WR3 to rank[1]. In diagram 320, D0, corresponding to WR0, appears on DQ[R0] after a time that matches the read delay. The diagram may not be perfectly accurate due to additional signals and delays that can be added, but the delay is shown to be on the order of 100 tCK in the example of diagram 320. In addition to signaling that may not be shown, the delay can vary due to the transfer rate or other factors. Regardless of the exact number of cycles, diagram 320 illustrates that the delay from WR command to WR data matches the delay from RD command to RD data. D1 follows on DQ[R1], followed by D2 on DQ[R0] and D3 on DQ[R1]. D1, D2, and D3 follow their write commands with a similar delay.

Diagram 320 also illustrates RD4 to rank[0], RD5 to rank[1], RD6 to rank[0], RD7 to rank[1], RD8 to rank[0], and RD9 to rank[1]. In one example, the media controller (not shown) adds a delay to eliminate the WR-RD latency difference illustrated in diagram 310. Thus, the read and write delays are the same in diagram 320. In one example, the memory controller selectively places the memory devices in the non-matching mode, in which case the read and write timing can be similar to what is illustrated in diagram 310. The memory controller can selectively place the memory devices in the matching mode, in which case the read and write timing can be similar to what is illustrated in diagram 320.

In diagram 320, after the matching delay is applied from write command to write data, the read data appears on the DQ bus after the read commands. Thus, D4, corresponding to RD4, appears on DQ[R0], followed by D5 on DQ[R1], then D6 on DQ[R0], and then D7 on DQ[R1]. D4, D5, D6, and D7 follow their read commands with the same delay that D0, D1, D2, and D3 follow their respective write commands. Data D8 and D9 are not shown in diagram 320, but follow similarly.

It will be observed that completion of a write operation (receipt and execution of the write command) takes longer in the example of diagram 320 than in the example of diagram 310. Despite the longer time to write completion, the bus utilization of the system will increase due to the latency matching when there is a mix of writes and reads. In one example, the system can switch between the non-matching delay mode and the matching delay mode based on the types of operations scheduled. The minimum write latency can be an advantage when the majority of the operations are write commands.

FIG. 4 is a table of an example of bus utilization data illustrating the improved utilization of a system with an inherent read delay and write delay mismatch that implements selectable write delays. Table 400 illustrates an example of test results for a system in accordance with system 100, including an NV memory device.

Four criteria were evaluated: transfer rate in megatransfers per second (MT/s), raw bandwidth (BW) in gigabytes per second (GB/s), 2:1 efficiency for two reads per write, and effective BW in GB/s. The first two columns of results illustrate results for a one-rank (1R) system with a 2:1 read-to-write ratio and a 256-byte workload. The darker shaded columns illustrate the results for the minimum WR latency, or non-matching delay mode, and the lighter columns illustrate the results for the WR/RD latency matching mode.

For a one-rank system operating at 2400 MT/s, the raw BW was 19.2 GB/s for both the non-matching and matching cases. The 2:1 efficiency for the non-matching latency was 57%, while the 2:1 efficiency for the matching latency was 68%. Changing to the matching latency improved the effective bandwidth from 10.9 GB/s in the non-matching mode to 13.1 GB/s in the matching mode.

For a two-rank system operating at 2400 MT/s, the raw BW was 19.2 GB/s for both the non-matching and matching cases. The 2:1 efficiency for the non-matching latency was 77%, while the 2:1 efficiency for the matching latency was 87%. The ability to switch between ranks improved bandwidth utilization relative to the single-rank system. For the two-rank system, changing to the matching latency improved the effective bandwidth from 14.7 GB/s in the non-matching mode to 16.6 GB/s in the matching mode.
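The effective bandwidth entries in table 400 follow directly from the raw bandwidth and the efficiency. A minimal check of that arithmetic, using the 2400 MT/s one-rank numbers above:

/* Effective BW = raw BW x efficiency. Checking the 2400 MT/s one-rank
 * row: 19.2 GB/s x 0.57 = 10.9 GB/s (non-matching) and
 * 19.2 GB/s x 0.68 = 13.1 GB/s (matching), as in table 400. */
static double effective_bw_gbs(double raw_bw_gbs, double efficiency)
{
    return raw_bw_gbs * efficiency;
}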
For a one-rank system operating at 2800 MT/s, the raw BW was 22.4 GB/s for both the non-matching and matching cases. The 2:1 efficiency for the non-matching latency was 52%, while the 2:1 efficiency for the matching latency was 64%. Changing to the matching latency improved the effective bandwidth from 11.8 GB/s in the non-matching mode to 14.4 GB/s in the matching mode.

For a two-rank system operating at 2800 MT/s, the raw BW was 22.4 GB/s for both the non-matching and matching cases. The 2:1 efficiency for the non-matching latency was 73%, while the 2:1 efficiency for the matching latency was 83%. For the two-rank system, changing to the matching latency improved the effective bandwidth from 16.3 GB/s in the non-matching mode to 18.6 GB/s in the matching mode. Thus, for both the 1-rank and 2-rank systems, the higher raw bandwidth reduced the 2:1 efficiency, but improved the effective bandwidth.

For a one-rank system operating at 3200 MT/s, the raw BW was 25.6 GB/s for both the non-matching and matching cases. The 2:1 efficiency for the non-matching latency was 50%, while the 2:1 efficiency for the matching latency was 62%. Changing to the matching latency improved the effective bandwidth from 12.7 GB/s in the non-matching mode to 15.8 GB/s in the matching mode.

For a two-rank system operating at 3200 MT/s, the raw BW was 25.6 GB/s for both the non-matching and matching cases. The 2:1 efficiency for the non-matching latency was 70%, while the 2:1 efficiency for the matching latency was 81%. For the two-rank system, changing to the matching latency improved the effective bandwidth from 17.9 GB/s in the non-matching mode to 20.8 GB/s in the matching mode.

For a one-rank system operating at 4000 MT/s, the raw BW was 32.0 GB/s for both the non-matching and matching cases. The 2:1 efficiency for the non-matching latency was 43%, while the 2:1 efficiency for the matching latency was 55%. Changing to the matching latency improved the effective bandwidth from 13.8 GB/s in the non-matching mode to 17.7 GB/s in the matching mode.

For a two-rank system operating at 4000 MT/s, the raw BW was 32.0 GB/s for both the non-matching and matching cases. The 2:1 efficiency for the non-matching latency was 63%, while the 2:1 efficiency for the matching latency was 75%. For the two-rank system, changing to the matching latency improved the effective bandwidth from 20.3 GB/s in the non-matching mode to 24.1 GB/s in the matching mode.

In each test case, the two-rank system provided improvement over the equivalent one-rank system. Additionally, in both the 1-rank and 2-rank systems, the ability to select the WR delay to match the RD delay also significantly improved system performance relative to comparable systems with mismatched WR/RD latencies.

FIG. 5 is a flow diagram of an example of a process for selectable write delays in a system having an inherent read delay and write delay mismatch. Process 500 represents a process that can be applied by a memory system in accordance with any example herein. As specific examples, process 500 for dynamically selecting the write delay can be applied by system 100 of FIG. 1, system 202 of FIG. 2A, or system 204 of FIG. 2B.

The operations illustrated to the left of the dashed line can be performed by the host, such as by a host controller or memory controller. The operations illustrated to the right of the dashed line can be performed by the memory device itself.

In one example, the host identifies the traffic pattern for memory accesses to be scheduled (block 502). The host can identify the write delay state of the memory to be accessed (block 504). If the write delay state is not the desired write delay for the upcoming memory accesses, the system can determine to change the write delay mode of the memory device.
In one example, the host makes the determination based on which write delay mode the memory device is currently applying, and which write delay mode would be preferred for the upcoming traffic pattern.

If the host will change the write (WR) delay mode (block 506: "YES" branch), in one example, the host sends a command to set the write delay mode, or to cause the memory to set the write delay mode (block 508). In one example, the host sends a mode register write command or other command to change a configuration setting of the memory device.

The memory receives and processes the command (block 510). The memory device can determine whether to set the write delay mode to the non-matching delay mode or the matching delay mode, depending on the mode selected (block 512). The non-matching delay mode refers to a mode in which the write and read delays remain asymmetric. The matching delay mode refers to a mode in which the write delay is set equal to the read delay.

If the selected mode is the non-matching mode, in one example, the memory can set a configuration register to set the write delay to the minimum delay of the storage medium, which does not match the read delay (block 514). If the selected mode is the matching mode, in one example, the memory can set a configuration register to set the write delay to match the read delay (block 516).

The host can send a write command (block 518). In response to the write command, the memory can receive and process the write command (block 520). After sending the write command (block 518), if the selected mode is the non-matching mode (block 522: "non-matching" branch), in one example, the host sends the write data after the media minimum delay (block 524). The minimum delay can refer to the delay inherent in the storage medium. Rather than waiting to put the data on the data bus for the write, the host can send the data as soon as the minimum timing is satisfied. The memory receives the command, decodes it, and expects the data on the DQ bus after the minimum WR delay (block 526). The host can send other write commands with the lower delay between command and data (block 528), but this will require additional scheduling complexity due to the data bus utilization mismatch relative to read commands. Such complexity can limit the scheduler in scheduling reads and writes.

If the selected mode is the matching mode (block 522: "matching" branch), in one example, the host sends the write data after the read delay (block 530). The read delay is longer than the delay inherent in the storage medium for the write. Rather than sending the data as soon as the minimum timing is satisfied, the host waits until the later time to send the data, making the data bus utilization have the same delay for read and write commands. The memory receives the command, decodes it, and expects the data on the DQ bus after the read delay (block 532).

Thus, in one example, the host has a guarantee that use of the data bus will occur at the same delay after a command, whether the host is accessing the data bus to receive data in response to a read, or accessing the bus after a write command to send data to the memory. The host can send other access commands with less complexity, allowing the scheduler to schedule the commands in any order (block 534).
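The host side of process 500 can be outlined in code. The sketch below is a non-authoritative illustration that builds on the earlier C fragments; the transport helpers it calls are invented placeholders, not a real API.

/* Hypothetical transport helpers; these names are invented for the
 * sketch and do not correspond to a real interface. */
void send_mode_register_write(enum wr_delay_mode mode);
void send_write_command(void);
void wait_tck(uint32_t tck);
void send_write_data(void);

/* Host side of process 500, blocks 502 through 532, in outline. */
void host_issue_write(struct nv_media *m,
                      unsigned queued_writes, unsigned queued_reads)
{
    /* Blocks 502-504: inspect traffic and the current delay state. */
    enum wr_delay_mode want = choose_wr_delay_mode(queued_writes,
                                                   queued_reads);

    /* Blocks 506-510: reconfigure the device only when needed. */
    if (want != m->mode) {
        send_mode_register_write(want);
        m->mode = want;
    }

    /* Blocks 518-532: send the command, then drive the data after the
     * delay the selected mode requires (media minimum in the
     * non-matching mode, read delay in the matching mode). */
    send_write_command();
    wait_tck(applied_wr_delay(m));
    send_write_data();
}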
FIG. 6 is a block diagram of an example of a memory subsystem in which a selectable write delay can be implemented. System 600 includes elements of a processor and memory subsystem in a computing device. System 600 provides an example of a system in accordance with system 100 of FIG. 1, system 202 of FIG. 2A, or system 204 of FIG. 2B.

In one example, the memory array 660 represents a memory medium that has an asymmetry between the read command delay to data bus use (read (RD) delay 664) and the write command delay to data bus use (write (WR) delay 662). In one example, the write delay 662 can be selected either to not match the read delay 664 or to match the read delay 664. The selection and application of the write delay 662 can be in accordance with any example herein. In one example, the scheduler 626 of the memory controller 620 includes a delay timer 628 to apply timing delays to the scheduling of commands and data based on the write delay mode selected for the memory device 640. The delay logic 654 can represent logic in the memory device 640 to apply the write delay mode, adding delay to write commands to comply with the selected mode.

The processor 610 represents a processing unit of a computing platform that can execute an operating system (OS) and applications, which can collectively be referred to as the host or the user of the memory. The OS and applications execute operations that result in memory accesses. The processor 610 can include one or more separate processors. Each separate processor can include a single processing unit, a multicore processing unit, or a combination. The processing unit can be a primary processor such as a CPU (central processing unit), a peripheral processor such as a GPU (graphics processing unit), or a combination. Memory accesses can also be initiated by devices such as a network controller or hard disk controller. Such devices can be integrated with the processor in some systems, or attached to the processor via a bus (e.g., PCI Express), or a combination. System 600 can be implemented as an SOC (system on a chip), or be implemented with standalone components.

In one example, reference to memory devices can refer to non-volatile memory devices whose state is maintained even if power is interrupted to the device. In one example, the non-volatile memory device is a block addressable memory device, such as NAND or NOR technologies. Thus, a memory device can also include future generation non-volatile devices, such as three-dimensional crosspoint memory devices, or other byte addressable non-volatile memory devices. A memory device can include a non-volatile, byte addressable medium that stores data based on a resistive state of the memory cell or a phase of the memory cell. In one example, the memory device can use chalcogenide phase change material (e.g., chalcogenide glass). In one example, the memory device can be or include multi-threshold level NAND flash memory, NOR flash memory, single or multi-level phase change memory (PCM) or a switchable phase change memory, resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), magnetoresistive random access memory (MRAM) incorporating memristor technology, or spin transfer torque (STT) MRAM, or a combination of any of the above, or other memory.

Memory controller 620 represents one or more memory controller circuits or devices for system 600. The memory controller 620 represents control logic that generates memory access commands in response to the execution of operations by the processor 610. The memory controller 620 accesses one or more memory devices 640.
The memory devices 640 can be memory devices in accordance with any of those mentioned above. In one example, the memory devices 640 are organized and managed as different channels, where each channel couples to buses and signal lines that couple to multiple memory devices in parallel. Each channel is independently operable. Thus, each channel is independently accessed and controlled, and the timing, data transfer, command and address exchanges, and other operations are separate for each channel. Coupling can refer to an electrical coupling, communicative coupling, physical coupling, or a combination of these. Physical coupling can include direct contact. Electrical coupling includes an interface or interconnection that allows electrical flow between components, or allows signaling between components, or both. Communicative coupling includes connections, including wired or wireless, that enable components to exchange data.

In one example, settings for each channel are controlled by separate mode registers or other register settings. In one example, each memory controller 620 manages a separate memory channel, although system 600 can be configured to have multiple channels managed by a single controller, or to have multiple controllers on a single channel. In one example, memory controller 620 is part of the host processor 610, such as logic implemented on the same die or in the same package space as the processor.

Memory controller 620 includes I/O interface logic 622 to couple to a memory bus, such as a memory channel as referred to above. I/O interface logic 622 (as well as I/O interface logic 642 of memory device 640) can include pins, pads, connectors, signal lines, traces, or wires, or other hardware to connect the devices, or a combination of these. I/O interface logic 622 can include a hardware interface. As illustrated, I/O interface logic 622 includes at least drivers/transceivers for signal lines. Commonly, wires within an integrated circuit interface couple with a pad, pin, or connector to interface signal lines or traces or other wires between devices. I/O interface logic 622 can include drivers, receivers, transceivers, or termination, or other circuitry or combinations of circuitry to exchange signals on the signal lines between the devices. The exchange of signals includes at least one of transmit or receive. While shown as coupling I/O 622 from memory controller 620 to I/O 642 of memory device 640, it will be understood that in an implementation of system 600 where groups of memory devices 640 are accessed in parallel, multiple memory devices can include I/O interfaces to the same interface of memory controller 620. In an implementation of system 600 including one or more memory modules 670, I/O 642 can include interface hardware of the memory module in addition to interface hardware on the memory devices themselves. Other memory controllers 620 will include separate interfaces to other memory devices 640.

The bus between memory controller 620 and memory devices 640 can be implemented as multiple signal lines coupling memory controller 620 to memory devices 640. The bus can typically include at least clock (CLK) 632, command/address (CMD) 634, and write data (DQ) and read data (DQ) 636, and zero or more other signal lines 638. In one example, a bus or connection between memory controller 620 and memory can be referred to as a memory bus.
In one example, the memory bus is a multidrop bus. The signal lines for CMD can be referred to as a "C/A bus" (or ADD/CMD bus, or some other designation indicating the transfer of commands (C or CMD) and address (A or ADD) information), and the signal lines for write and read DQ can be referred to as a "data bus". In one example, independent channels have different clock signals, C/A buses, data buses, and other signal lines. Thus, system 600 can be considered to have multiple "buses", in the sense that an independent interface path can be considered a separate bus. It will be understood that in addition to the lines explicitly shown, a bus can include at least one of strobe signaling lines, alert lines, auxiliary lines, or other signal lines, or a combination. It will also be understood that serial bus technologies can be used for the connection between memory controller 620 and memory devices 640. An example of a serial bus technology is 8B10B encoding and transmission of high-speed data with an embedded clock over a single differential pair of signals in each direction. In one example, CMD 634 represents signal lines shared in parallel with multiple memory devices. In one example, multiple memory devices share encoded command signal lines of CMD 634, and each has a separate chip select (CS_n) signal line to select individual memory devices.

It will be understood that in the example of system 600, the bus between memory controller 620 and memory devices 640 includes a subsidiary command bus CMD 634 and a subsidiary bus DQ 636 to carry the write and read data. In one example, the data bus can include bidirectional lines for read data and for write/command data. In another example, the subsidiary bus DQ 636 can include unidirectional write signal lines for write data from the host to memory, and can include unidirectional lines for read data from the memory to the host. In accordance with the chosen memory technology and system design, other signals 638 can accompany a bus or subbus, such as strobe lines DQS. Based on the design of system 600, or implementation if a design supports multiple implementations, the data bus can have more or less bandwidth per memory device 640. For example, the data bus can support memory devices that have either a x4 interface, a x8 interface, a x16 interface, or another interface. The convention "xW", where W is an integer, refers to the interface size or width of the interface of memory device 640, representing the number of signal lines to exchange data with memory controller 620. The interface size of the memory devices is a controlling factor on how many memory devices can be used concurrently per channel in system 600, or coupled in parallel to the same signal lines. In one example, high bandwidth memory devices, wide interface devices, or stacked memory configurations, or combinations, can enable wider interfaces, such as a x128 interface, a x256 interface, a x512 interface, a x1024 interface, or other data bus interface widths.

In one example, memory devices 640 and memory controller 620 exchange data over the data bus in a burst, or sequence of consecutive data transfers. The burst corresponds to a number of transfer cycles, which is related to the bus frequency. In one example, the transfer cycle can be a whole clock cycle for transfers occurring on a same clock or strobe signal edge (e.g., on the rising edge). In one example, every clock cycle, referring to a cycle of the system clock, is separated into multiple unit intervals (UIs), where each UI is a transfer cycle. For example, double data rate transfers trigger on both edges of the clock signal (e.g., rising and falling). A burst can last for a configured number of UIs, which can be a configuration stored in a register, or triggered on the fly. For example, a sequence of eight consecutive transfer periods can be considered a burst length of eight (BL8), and each memory device 640 can transfer data on each UI. Thus, a x8 memory device operating on BL8 can transfer 64 bits of data (8 data signal lines times 8 data bits transferred per line over the burst). It will be understood that this simple example is merely an illustration and is not limiting.
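As a check on that burst arithmetic, a trivial sketch:

/* Bits moved in one burst: interface width (xW) times burst length in
 * UIs. For a x8 device at BL8: 8 lines x 8 UIs = 64 bits. */
static unsigned burst_bits(unsigned width_w, unsigned burst_len_ui)
{
    return width_w * burst_len_ui;
}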
The memory devices 640 represent memory resources for the system 600. In one example, each memory device 640 is a separate memory die. In one example, each memory device 640 can interface with multiple (e.g., two) channels per device or die. Each memory device 640 includes I/O interface logic 642, which has a bandwidth determined by the implementation of the device (e.g., x16 or x8 or some other interface bandwidth). The I/O interface logic 642 enables the memory device to interface with the memory controller 620. The I/O interface logic 642 can include a hardware interface, and can be in accordance with the I/O 622 of the memory controller, but at the memory device end. In one example, multiple memory devices 640 are connected in parallel to the same command and data buses. In another example, multiple memory devices 640 are connected in parallel to the same command bus, and are connected to different data buses. For example, the system 600 can be configured with multiple memory devices 640 coupled in parallel, with each memory device responding to a command and accessing memory resources 660 internal to each. For a write operation, an individual memory device 640 can write a portion of an overall data word, and for a read operation, an individual memory device 640 can fetch a portion of the overall data word; the remaining bits of the word will be provided or received by other memory devices in parallel.

In one example, the memory devices 640 are disposed directly on a motherboard or host system platform of a computing device (e.g., a PCB (printed circuit board) on which the processor 610 is disposed). In one example, the memory devices 640 can be organized into memory modules 670. In one example, the memory modules 670 represent dual inline memory modules (DIMMs). In one example, the memory modules 670 represent another organization of multiple memory devices to share at least a portion of access or control circuitry, which can be a separate circuit, a separate device, or a separate board from the host system platform. The memory modules 670 can include multiple memory devices 640, and can include support for multiple separate channels to the included memory devices disposed on them. In another example, the memory devices 640 can be incorporated into the same package as the memory controller 620, such as by techniques such as multi-chip-module (MCM), package-on-package, through-silicon via (TSV), or other techniques or combinations of these. Similarly, in one example, multiple memory devices 640 can be incorporated into the memory modules 670, which themselves can be incorporated into the same package as the memory controller 620. It will be understood that for these and other implementations, the memory controller 620 can be part of the host processor 610.
Each memory device 640 includes one or more memory arrays 660. The memory array 660 represents addressable memory locations or storage locations for data. Typically, the memory array 660 is managed as rows of data, accessed via wordline (rows) and bitline (individual bits within a row) control. The memory array 660 can be organized as separate channels, ranks, banks, and partitions of memory. Channels can refer to independent control paths to storage locations within the memory devices 640. Ranks can refer to common locations across multiple memory devices arranged in parallel (e.g., same row addresses within different devices). Banks can refer to subarrays of memory locations within a memory device 640. In one example, banks of memory are divided into subbanks with at least a portion of shared circuitry (e.g., drivers, signal lines, control logic) for the subbanks, allowing separate addressing and access. It will be understood that channels, ranks, banks, subbanks, bank groups, or other organizations of memory locations, and combinations of the organizations, can overlap in their application to physical resources. For example, the same physical memory locations can be accessed over a specific channel as a specific bank, which can also belong to a rank. Thus, the organization of memory resources will be understood in an inclusive, rather than exclusive, manner.

In one example, the memory devices 640 include one or more registers 644. The register 644 represents one or more storage devices or storage locations that provide configuration or settings for the operation of the memory device. In one example, the register 644 can provide a storage location for the memory device 640 to store data for access by the memory controller 620 as part of a control or management operation. In one example, the register 644 includes one or more mode registers. In one example, the register 644 includes one or more multipurpose registers. The configuration of locations within the register 644 can configure the memory device 640 to operate in different "modes", where command information can trigger different operations within the memory device 640 based on the mode. Additionally or alternatively, different modes can also trigger different operations from address information or other signal lines, depending on the mode. Settings of the register 644 can indicate configuration for I/O settings (e.g., timing, termination or ODT (on-die termination) 646, driver configuration, or other I/O settings).

In one example, the memory device 640 includes ODT 646 as part of the interface hardware associated with I/O 642. ODT 646 can be configured as mentioned above, and can provide settings for the impedance to be applied to the interface on specified signal lines. In one example, ODT 646 is applied to DQ signal lines. In one example, ODT 646 is applied to command signal lines. In one example, ODT 646 is applied to address signal lines. In one example, ODT 646 can be applied to any combination of the preceding. The ODT settings can be changed based on whether a memory device is a selected target of an access operation or a non-target device. ODT 646 settings can affect the timing and reflections of signaling on the terminated lines. Careful control over ODT 646 can enable higher-speed operation with improved matching of applied impedance and loading. ODT 646 can be applied to specific signal lines of the I/O interfaces 642, 622 (e.g., ODT for the DQ lines or ODT for the CA lines), and is not necessarily applied to all signal lines.
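As a rough software analogy for the register-driven configuration described for register 644 and ODT 646, consider the sketch below. The field names, values, and derived settings are invented for illustration; they are not taken from the patent or from any memory specification.

from dataclasses import dataclass

@dataclass
class ModeRegister:
    # Invented fields standing in for mode register settings of register 644.
    burst_length: int = 8              # e.g., BL8
    odt_ohms: int = 60                 # termination value; 0 means ODT disabled
    write_delay_matched: bool = False  # False: native delay; True: match read

def io_settings(reg: ModeRegister) -> dict:
    """Derive I/O behavior from register settings, as controller 650 might."""
    return {
        "termination": "off" if reg.odt_ohms == 0 else f"{reg.odt_ohms} ohm",
        "burst_ui": reg.burst_length,
        "write_delay": "matched" if reg.write_delay_matched else "native",
    }

# Target and non-target devices can apply different termination for an access:
print(io_settings(ModeRegister(odt_ohms=60)))   # selected target device
print(io_settings(ModeRegister(odt_ohms=120)))  # non-target device, shared bus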
The memory device 640 includes a controller 650, which represents control logic within the memory device to control internal operations within the memory device. For example, the controller 650 decodes commands sent by the memory controller 620 and generates internal operations to execute or satisfy the commands. The controller 650 can be referred to as an internal controller, and is separate from the memory controller 620 of the host. The controller 650 can determine what mode is selected based on the register 644, and configure the internal execution of operations for access to the memory resources 660, or other operations, based on the selected mode. The controller 650 generates control signals to control the routing of bits within the memory device 640, to provide a proper interface for the selected mode, and to direct a command to the proper memory locations or addresses. The controller 650 includes command logic 652, which can decode command encoding received on command and address signal lines. Thus, the command logic 652 can be or include a command decoder. With the command logic 652, the memory device can identify commands and generate internal operations to execute requested commands.

Referring again to the memory controller 620, the memory controller 620 includes command (CMD) logic 624, which represents logic or circuitry to generate commands to send to the memory devices 640. The generation of the commands can refer to preparing commands prior to scheduled transmission, or preparing queued commands ready to be sent. Generally, the signaling in memory subsystems includes address information within or accompanying the command, to indicate or select one or more memory locations where the memory devices should execute the command. In response to scheduling of transactions for the memory device 640, the memory controller 620 can issue commands via I/O 622 to cause the memory device 640 to execute the commands. In one example, the controller 650 of the memory device 640 receives and decodes command and address information received via I/O 642 from the memory controller 620. Based on the received command and address information, the controller 650 can control the timing of operations of the logic and circuitry within the memory device 640 to execute the commands. The controller 650 is responsible for compliance with standards or specifications within the memory device 640, such as timing and signaling requirements. The memory controller 620 can implement compliance with the standards or specifications by access scheduling and control.

The memory controller 620 includes a scheduler 626, which represents logic or circuitry to generate and order transactions to send to the memory devices 640. From one perspective, the primary function of the memory controller 620 could be said to be scheduling memory access and other transactions to the memory devices 640. Such scheduling can include generating the transactions themselves to implement requests for data by the processor 610 and to maintain integrity of the data (e.g., such as with commands related to refresh). Transactions can include one or more commands, and can result in the transfer of commands or data, or both, over one or more timing cycles such as clock cycles or unit intervals. Transactions can be for access, such as read or write, or related commands, or a combination of these, and other transactions can include memory management commands for configuration, settings, data integrity, or other commands, or a combination of these.
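The decode role of command logic 652 described above can be sketched as follows. The opcode values and bit layout are invented for illustration; actual command/address encodings are device-specific and are not given in the text.

from enum import Enum

class Cmd(Enum):
    READ = 0
    WRITE = 1
    ACTIVATE = 2
    REFRESH = 3
    MODE_REGISTER_WRITE = 4

def decode(ca_word: int) -> tuple:
    """Split a C/A word into an opcode and an address field.

    Assumes a 3-bit opcode above a 17-bit address; purely illustrative.
    """
    opcode = (ca_word >> 17) & 0b111
    address = ca_word & 0x1FFFF
    return Cmd(opcode), address

cmd, addr = decode((1 << 17) | 0xABC)  # a WRITE targeting address 0xABC
assert cmd is Cmd.WRITE and addr == 0xABC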
The memory controller 620 typically includes logic such as the scheduler 626 to allow selection and ordering of transactions to improve performance of the system 600. Thus, the memory controller 620 can select which of the outstanding transactions should be sent to the memory devices 640 in which order, which is typically achieved with logic much more complex than a simple first-in first-out algorithm. The memory controller 620 manages the transmission of the transactions to the memory devices 640, and manages the timing associated with the transactions. In one example, transactions have deterministic timing, which can be managed by the memory controller 620 and used in determining how to schedule the transactions with the scheduler 626.

FIG. 7 is a block diagram of an example of a computing system in which selectable write delay can be implemented. System 700 represents a computing device in accordance with any example herein, and can be a laptop computer, a desktop computer, a tablet computer, a server, a gaming or entertainment control system, an embedded computing device, or another electronic device. System 700 provides an example of a system in accordance with system 100 of FIG. 1, system 202 of FIG. 2A, or system 204 of FIG. 2B.

In one example, memory 730 represents a memory that has an asymmetry between the delay from a read command to use of the data bus (read delay) and the delay from a write command to use of the data bus (write delay). RD-DLY represents the read delay and WR-DLY represents the write delay. RD-DLY/WR-DLY 792 represents the application of delay for access to memory 730. In one example, the write delay can be selected either to not match the read delay or to match the read delay. The selection and application of the write delay can be in accordance with any example herein. Delay logic 790 represents logic within memory subsystem 720 to select the write delay mode and apply the selected write delay to memory access transactions. Delay logic 790 can represent logic within memory controller 722 to apply the write delay for scheduling commands and determining when to send write data on the data bus. Delay logic 790 can represent logic within memory 730 to apply the write delay mode to add delay to write commands to comply with the selected mode.
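The selectable write delay managed by delay logic 790 can be reduced to a few lines. This is a hedged sketch: the cycle counts are invented, and the patent text does not prescribe an implementation.

# Mode 1 keeps the shorter intrinsic write delay; mode 2 pads the write delay
# to equal the read delay, giving reads and writes uniform bus timing.
READ_DELAY_UI = 20          # intrinsic read delay in unit intervals (assumed)
NATIVE_WRITE_DELAY_UI = 8   # intrinsic write delay (assumed; asymmetric)

def effective_write_delay(matched_mode: bool) -> int:
    """Write delay the device applies under the selected mode."""
    return READ_DELAY_UI if matched_mode else NATIVE_WRITE_DELAY_UI

# Mode 1 gives lower per-write latency; mode 2 trades latency for uniform
# timing, which simplifies interleaving reads and writes on a shared bus.
assert effective_write_delay(False) < effective_write_delay(True)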
System 700 includes processor 710, which can be any type of microprocessor, central processing unit (CPU), graphics processing unit (GPU), processing core, or other processing hardware, or a combination of these, to provide processing or execution of instructions for system 700. Processor 710 controls the overall operation of system 700, and can be or include one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), or a combination of such devices.

In one example, system 700 includes interface 712 coupled to processor 710, which can represent a higher-speed or higher-throughput interface for system components that need higher-bandwidth connections, such as memory subsystem 720 or graphics interface component 740. Interface 712 represents an interface circuit, which can be a standalone component or can be integrated onto a processor die. Interface 712 can be integrated as a circuit onto the processor die or integrated as a component on a system on a chip. Where present, graphics interface 740 interfaces to graphics components to provide a visual display to a user of system 700. Graphics interface 740 can be a standalone component or can be integrated onto the processor die or a system on a chip. In one example, graphics interface 740 can drive a high definition (HD) display or an ultra high definition (UHD) display that provides an output to a user. In one example, the display can include a touchscreen display. In one example, graphics interface 740 generates a display based on data stored in memory 730, or based on operations executed by processor 710, or both.

Memory subsystem 720 represents the main memory of system 700, and provides storage for code to be executed by processor 710, or for data values to be used in executing routines. Memory subsystem 720 can include one or more memory devices 730, such as read-only memory (ROM), flash memory, one or more varieties of random access memory (RAM) such as DRAM, 3DXP (three-dimensional crosspoint), or other memory devices, or a combination of such devices. Memory 730 stores and hosts, among other things, operating system (OS) 732 to provide a software platform for execution of instructions in system 700. Additionally, applications 734 can execute on the software platform of OS 732 from memory 730. Applications 734 represent programs that have their own operational logic to perform execution of one or more functions. Processes 736 represent agents or routines that provide auxiliary functions to OS 732, or to one or more applications 734, or a combination of these. OS 732, applications 734, and processes 736 provide software logic to provide functions for system 700. In one example, memory subsystem 720 includes memory controller 722, which is a memory controller to generate and issue commands to memory 730. It will be understood that memory controller 722 could be a physical part of processor 710 or a physical part of interface 712. For example, memory controller 722 can be an integrated memory controller, integrated onto a circuit with processor 710, such as integrated onto the processor die or a system on a chip.

While not specifically illustrated, it will be understood that system 700 can include one or more buses or bus systems between devices, such as a memory bus, a graphics bus, interface buses, or others. Buses or other signal lines can communicatively or electrically couple components together, or both communicatively and electrically couple the components. Buses can include physical communication lines, point-to-point connections, bridges, adapters, controllers, or other circuitry, or a combination of these. Buses can include, for example, one or more of a system bus, a Peripheral Component Interconnect (PCI) bus, a HyperTransport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), or other buses, or a combination of these.

In one example, system 700 includes interface 714, which can be coupled to interface 712. Interface 714 can be a lower-speed interface than interface 712. In one example, interface 714 represents an interface circuit, which can include standalone components and integrated circuitry.
In one example, multiple user interface components or peripheral components, or both, couple to interface 714. Network interface 750 provides system 700 the ability to communicate with remote devices (e.g., servers or other computing devices) over one or more networks. Network interface 750 can include an Ethernet® adapter, wireless interconnection components, cellular network interconnection components, USB (universal serial bus), or other wired or wireless standards-based or proprietary interfaces. Network interface 750 can exchange data with a remote device, which can include sending data stored in memory or receiving data to be stored in memory.

In one example, system 700 includes one or more input/output (I/O) interfaces 760. I/O interface 760 can include one or more interface components through which a user interacts with system 700 (e.g., audio, alphanumeric, tactile/touch, or other interfacing). Peripheral interface 770 can include any hardware interface not specifically mentioned above. Peripherals refer generally to devices that connect dependently to system 700. A dependent connection is one where system 700 provides the software platform or hardware platform, or both, on which an operation executes and with which a user interacts.

In one example, system 700 includes storage subsystem 780 to store data in a nonvolatile manner. In one example, in certain system implementations, at least certain components of storage 780 can overlap with components of memory subsystem 720. Storage subsystem 780 includes storage device 784, which can be or include any conventional medium for storing large amounts of data in a nonvolatile manner, such as one or more magnetic, solid-state, 3DXP, or optical-based disks, or a combination of these. Storage 784 holds code or instructions and data 786 in a persistent state (i.e., the value is retained despite interruption of power to system 700). Storage 784 can be generically considered to be a "memory", although memory 730 is typically the execution or operating memory to provide instructions to processor 710. Whereas storage 784 is nonvolatile, memory 730 can include volatile memory (i.e., the value or state of the data is indeterminate if power is interrupted to system 700). In one example, storage subsystem 780 includes controller 782 to interface with storage 784. In one example, controller 782 can be a physical part of interface 714 or processor 710, or can include circuits or logic in both processor 710 and interface 714.

Power source 702 provides power to the components of system 700. More specifically, power source 702 typically interfaces to one or multiple power supplies 704 in system 700 to provide power to the components of system 700. In one example, power supply 704 includes an AC to DC (alternating current to direct current) adapter to plug into a wall outlet. Such AC power can be a renewable energy (e.g., solar power) power source 702. In one example, power source 702 includes a DC power source, such as an external AC to DC converter. In one example, power source 702 or power supply 704 includes wireless charging hardware to charge via proximity to a charging field. In one example, power source 702 can include an internal battery or fuel cell source.
FIG. 8 is a block diagram of an example of a mobile device in which selectable write delay can be implemented. System 800 represents a mobile computing device, such as a computing tablet, a mobile phone or smartphone, a wearable computing device or other mobile device, or an embedded computing device. It will be understood that certain of the components are shown generally, and not all components of such a device are shown in system 800. System 800 provides an example of a system in accordance with system 100 of FIG. 1, system 202 of FIG. 2A, or system 204 of FIG. 2B.

In one example, memory 862 represents a memory that has an asymmetry between the delay from a read command to use of the data bus (read delay) and the delay from a write command to use of the data bus (write delay). RD-DLY represents the read delay and WR-DLY represents the write delay. RD-DLY/WR-DLY 892 represents the application of delay for access to memory 862. In one example, the write delay can be selected either to not match the read delay or to match the read delay. The selection and application of the write delay can be in accordance with any example herein. Delay logic 890 represents logic within memory subsystem 860 to select the write delay mode and apply the selected write delay to memory access transactions. Delay logic 890 can represent logic within memory controller 864 to apply the write delay for scheduling commands and determining when to send write data on the data bus. Delay logic 890 can represent logic within memory 862 to apply the write delay mode to add delay to write commands to comply with the selected mode.

System 800 includes processor 810, which performs the primary processing operations of system 800. Processor 810 can include one or more physical devices, such as microprocessors, application processors, microcontrollers, programmable logic devices, or other processing means. The processing operations performed by processor 810 include the execution of an operating platform or operating system on which applications and device functions are executed. The processing operations include operations related to I/O (input/output) with a human user or with other devices, operations related to power management, operations related to connecting system 800 to another device, or a combination of these. The processing operations can also include operations related to audio I/O, display I/O, or other interfacing, or a combination of these. Processor 810 can execute data stored in memory, and can write or edit data stored in memory.

In one example, system 800 includes one or more sensors 812. Sensors 812 represent embedded sensors or interfaces to external sensors, or a combination of these. Sensors 812 enable system 800 to monitor or detect one or more conditions of an environment or a device in which system 800 is implemented. Sensors 812 can include environmental sensors (such as temperature sensors, motion detectors, light detectors, cameras, chemical sensors (e.g., carbon monoxide, carbon dioxide, or other chemical sensors), pressure sensors, accelerometers, and gyroscopes), medical or physiology sensors (e.g., biosensors, heart rate monitors, or other sensors to detect physiological attributes), or other sensors, or a combination of these. Sensors 812 can also include sensors for biometric systems, such as fingerprint recognition systems, face detection or recognition systems, or other systems that detect or recognize user features.
Sensors 812 should be understood broadly, and are not limiting of the many different types of sensors that could be implemented with system 800. In one example, one or more sensors 812 couple to processor 810 via a frontend circuit integrated with processor 810. In one example, one or more sensors 812 couple to processor 810 via another component of system 800.

In one example, system 800 includes audio subsystem 820, which represents hardware (e.g., audio hardware and audio circuits) and software (e.g., drivers, codecs) components associated with providing audio functions to the computing device. Audio functions can include speaker or headphone output, as well as microphone input. Devices for such functions can be integrated into system 800, or connected to system 800. In one example, a user interacts with system 800 by providing audio commands that are received and processed by processor 810.

Display subsystem 830 represents hardware (e.g., display devices) and software components (e.g., drivers) that provide a visual display for presentation to a user. In one example, the display includes tactile components or touchscreen elements for a user to interact with the computing device. Display subsystem 830 includes display interface 832, which includes the particular screen or hardware device used to provide a display to a user. In one example, display interface 832 includes logic separate from processor 810 (such as a graphics processor) to perform at least some processing related to the display. In one example, display subsystem 830 includes a touchscreen device that provides both output and input to a user. In one example, display subsystem 830 includes a high definition (HD) or ultra high definition (UHD) display that provides an output to a user. In one example, the display subsystem includes or drives a touchscreen display. In one example, display subsystem 830 generates display information based on data stored in memory, or based on operations executed by processor 810, or both.

I/O controller 840 represents hardware devices and software components related to interaction with a user. I/O controller 840 can operate to manage hardware that is part of audio subsystem 820, or display subsystem 830, or both. Additionally, I/O controller 840 illustrates a connection point for additional devices that connect to system 800, through which a user might interact with the system. For example, devices that can be attached to system 800 might include microphone devices, speaker or stereo systems, video systems or other display devices, keyboard or keypad devices, buttons/switches, or other I/O devices for use with specific applications such as card readers, or other devices.

As mentioned above, I/O controller 840 can interact with audio subsystem 820, or display subsystem 830, or both. For example, input through a microphone or other audio device can provide input or commands for one or more applications or functions of system 800. Additionally, audio output can be provided instead of or in addition to display output. In another example, if the display subsystem includes a touchscreen, the display device also acts as an input device, which can be managed, at least in part, by I/O controller 840.
There can also be additional buttons or switches on system 800 to provide I/O functions managed by I/O controller 840.

In one example, I/O controller 840 manages devices such as accelerometers, cameras, light sensors or other environmental sensors, gyroscopes, global positioning system (GPS), or other hardware or sensors 812 that can be included in system 800. The input can be part of direct user interaction, and can also provide environmental input to the system to influence its operations (such as filtering for noise, adjusting displays for brightness detection, applying a flash for a camera, or other features).

In one example, system 800 includes power management 850, which manages battery power usage, charging of the battery, and features related to power saving operation. Power management 850 manages power from power source 852, which provides power to the components of system 800. In one example, power source 852 includes an AC to DC (alternating current to direct current) adapter to plug into a wall outlet. Such AC power can be renewable energy (e.g., solar power, motion-based power). In one example, power source 852 includes only DC power, which can be provided by a DC power source such as an external AC to DC converter. In one example, power source 852 includes wireless charging hardware to charge via proximity to a charging field. In one example, power source 852 can include an internal battery or fuel cell source.

Memory subsystem 860 includes memory devices 862 for storing information in system 800. Memory subsystem 860 can include nonvolatile memory devices (whose state does not change if power to the memory device is interrupted) or volatile memory devices (whose state is indeterminate if power to the memory device is interrupted), or a combination of these. Memory 860 can store application data, user data, music, photos, documents, or other data, as well as system data (whether long-term or temporary) related to the execution of the applications and functions of system 800. In one example, memory subsystem 860 includes memory controller 864, which can be considered part of the control of system 800, and can potentially be considered part of processor 810. Memory controller 864 includes a scheduler to generate and issue commands to control access to memory device 862.

Connectivity 870 includes hardware devices (e.g., wireless or wired connectors and communication hardware, or a combination of wired and wireless hardware) and software components (e.g., drivers, protocol stacks) to enable system 800 to communicate with external devices. The external devices could be other computing devices, separate devices such as wireless access points or base stations, and peripherals such as headsets, printers, or other devices. In one example, system 800 exchanges data with an external device for storage in memory or for display on a display device. The exchanged data can include data to be stored in memory, or data already stored in memory, to read, write, or edit data.

Connectivity 870 can include multiple different types of connectivity. To generalize, system 800 is illustrated with cellular connectivity 872 and wireless connectivity 874.
Cellular connectivity 872 refers generally to cellular network connectivity provided by wireless carriers, such as provided via GSM® (Global System for Mobile Communications) or variations or derivatives, CDMA (Code Division Multiple Access) or variations or derivatives, TDM (time division multiplexing) or variations or derivatives, LTE (Long Term Evolution, also referred to as "4G" or "5G"), or other cellular service standards. Wireless connectivity 874 refers to non-cellular wireless connectivity, and can include personal area networks (such as Bluetooth®), local area networks (such as WiFi®), wide area networks (such as WiMAX), or other wireless communication, or a combination of these. Wireless communication refers to the transfer of data through the use of modulated electromagnetic radiation through a non-solid medium. Wired communication occurs through a solid communication medium.

Peripheral connections 880 include hardware interfaces and connectors, as well as software components (e.g., drivers, protocol stacks), to make peripheral connections. It will be understood that system 800 can both be a peripheral device ("to" 882) to other computing devices, and can have peripheral devices ("from" 884) connected to it. System 800 commonly has a "docking" connector to connect to other computing devices for purposes such as managing (e.g., downloading, uploading, changing, synchronizing) content on system 800. Additionally, a docking connector can allow system 800 to connect to certain peripherals that allow system 800 to control content output, for example, to audiovisual or other systems.

In addition to proprietary docking connectors or other proprietary connection hardware, system 800 can make peripheral connections 880 via common or standards-based connectors. Common types can include a Universal Serial Bus (USB) connector (which can include any of a number of different hardware interfaces), a Mini DisplayPort (MDP), a High Definition Multimedia Interface (HDMI®), a DisplayPort, or other types.

In general with respect to the descriptions herein, in one example a nonvolatile (NV) memory device includes: an array of memory cells having an asymmetry between an intrinsic read operation delay and an intrinsic write operation delay; and a register to store a value to select between two modes of write operation delay, wherein a first mode has a first write operation delay that does not match the read operation delay, and a second mode has a second write operation delay that matches the read operation delay.

In one example, the NV memory device is set to the first mode by default. In one example, the NV memory device is set to the second mode by default. In one example, the register can be dynamically configured during runtime of the NV memory device. In one example, the array of memory cells includes an array of three-dimensional crosspoint (3DXP) memory cells.

In general with respect to the descriptions herein, in one example a controller includes: a hardware interface to couple to multiple nonvolatile (NV) memory devices having an asymmetry between an intrinsic read operation delay and an intrinsic write operation delay; and a scheduler to schedule a command to write a register value of the NV memory devices, the value to select between two modes of write operation delay, wherein a first mode has a first write operation delay that does not match the read operation delay,
and a second mode has a second write operation delay that matches the read operation delay.

In one example, the NV memory devices are set to the first mode by default. In one example, the NV memory devices are set to the second mode by default. In one example, the scheduler schedules a command to write the register value to select the first mode when the scheduler has mostly write operations to send to the NV memory devices. In one example, the scheduler schedules a command to write the register to select the second mode when the scheduler has a mix of write and read operations to send to the NV memory devices. In one example, when the second mode is selected, the scheduler schedules commands for write operations and read operations in any order. In one example, the scheduler schedules commands to write the register value dynamically during runtime of the NV memory devices. In one example, the NV memory devices are organized as multiple ranks of memory devices, and the scheduler schedules commands to select the write operation delay per rank. In one example, the scheduler interleaves sending commands to different ranks during the write and read operation delays. In one example, the NV memory devices include three-dimensional crosspoint (3DXP) memory devices.
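One possible reading of the scheduler policy in the examples above, as a sketch: select the first (unmatched) mode for write-heavy traffic and the second (matched) mode for mixed traffic. The 90% threshold is an invented tuning parameter, not something the text specifies.

def select_write_delay_mode(pending_ops: list) -> int:
    """Return 1 (native write delay) or 2 (write delay matched to read)."""
    if not pending_ops:
        return 2
    write_fraction = pending_ops.count("write") / len(pending_ops)
    return 1 if write_fraction > 0.9 else 2

assert select_write_delay_mode(["write"] * 20) == 1            # mostly writes
assert select_write_delay_mode(["write", "read"] * 10) == 2    # mixed traffic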
In general with respect to the descriptions herein, in one example a method includes: receiving a first command to set a value of a register to select between two modes of write operation delay for a nonvolatile (NV) memory device, wherein a first mode has a first write operation delay that does not match a read operation delay, a second mode has a second write operation delay that matches the read operation delay, and the NV memory device has an asymmetry between an intrinsic read operation delay and an intrinsic write operation delay; and receiving a second command to trigger a write operation, wherein the write operation is performed with the write operation delay of the selected first mode or second mode.

In one example, the method further includes defaulting to the first mode. In one example, the method further includes defaulting to the second mode. In one example, receiving the first command includes receiving the first command during runtime of the NV memory device, to dynamically configure the register during the runtime. In one example, the NV memory device includes a three-dimensional crosspoint (3DXP) memory device.

Flow diagrams as illustrated herein provide examples of sequences of various process operations. The flow diagrams can indicate operations to be executed by a software or firmware routine, as well as physical operations. A flow diagram can illustrate an example of the implementation of states of a finite state machine (FSM), which can be implemented in hardware, software, or a combination of these. Although shown in a particular sequence or order, unless otherwise specified, the order of the operations can be modified. Thus, the illustrated diagrams should be understood only as examples: the process can be performed in a different order, and some operations can be performed in parallel. Additionally, one or more operations can be omitted; thus, not all implementations will perform all operations.

To the extent various operations or functions are described herein, they can be described or defined as software code, instructions, configuration, and/or data. The content can be directly executable ("object" or "executable" form), source code, or difference code ("delta" or "patch" code). The software content described herein can be provided via an article of manufacture with the content stored on it, or via a method of operating a communication interface to send data via the communication interface. A machine-readable storage medium can cause a machine to perform the functions or operations described, and includes any mechanism that stores information in a form accessible by a machine (e.g., a computing device, an electronic system, etc.), such as recordable/non-recordable media (e.g., read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.). A communication interface includes any mechanism that interfaces to any of a hardwired, wireless, optical, or other medium to communicate to another device, such as a memory bus interface, a processor bus interface, an internet connection, a disk controller, etc. The communication interface can be configured by providing configuration parameters or sending signals, or both, to prepare the communication interface to provide a data signal describing the software content. The communication interface can be accessed via one or more commands or signals sent to the communication interface.

Various components described herein can be means for performing the operations or functions described. Each component described herein includes software, hardware, or a combination of these. The components can be implemented as software modules, hardware modules, special-purpose hardware (e.g., application specific integrated circuits (ASICs), digital signal processors (DSPs), etc.), embedded controllers, hardwired circuitry, etc.

In addition to what is described herein, various modifications can be made to the disclosed implementations of the invention without departing from their scope. Therefore, the illustrations and examples herein should be construed in an illustrative, and not a restrictive, sense. The scope of the invention should be measured solely by reference to the claims that follow.

[Other possible items]
[Item 1] A nonvolatile (NV) memory device comprising: an array of memory cells having an asymmetry between an intrinsic read operation delay and an intrinsic write operation delay; and a register to store a value to select between two modes of write operation delay, wherein a first mode has a first write operation delay that does not match the read operation delay, and a second mode has a second write operation delay that matches the read operation delay.
[Item 2] The NV memory device of item 1, wherein the NV memory device is set to the first mode by default.
[Item 3] The NV memory device of item 1, wherein the NV memory device is set to the second mode by default.
[Item 4] The NV memory device of item 1, wherein the register can be dynamically configured during runtime of the NV memory device.
[Item 5] The NV memory device of item 1, wherein the array of memory cells includes an array of three-dimensional crosspoint (3DXP) memory cells.
[Item 6] A controller comprising: a hardware interface to couple to multiple nonvolatile (NV) memory devices having an asymmetry between an intrinsic read operation delay and an intrinsic write operation delay; and a scheduler to schedule a command to write a value of a register of the NV memory devices, the value to select between two modes of write operation delay, wherein a first mode has a first write operation delay that does not match the read operation delay, and a second mode has a second write operation delay that matches the read operation delay.
[Item 7] The controller of item 6, wherein the NV memory devices are set to the first mode by default.
[Item 8] The controller of item 6, wherein the NV memory devices are set to the second mode by default.
[Item 9] The controller of item 6, wherein the scheduler is to schedule the command to write the value of the register to select the first mode when the scheduler has mostly write operations to send to the NV memory devices.
[Item 10] The controller of item 6, wherein the scheduler is to schedule the command to write the register to select the second mode when the scheduler has both write operations and read operations to send to the NV memory devices.
[Item 11] The controller of item 10, wherein, when the second mode is selected, the scheduler is to schedule commands for write operations and commands for read operations in any order.
[Item 12] The controller of item 6, wherein the scheduler is to schedule the command to write the value of the register dynamically during runtime of the NV memory devices.
[Item 13] The controller of item 6, wherein the NV memory devices are organized as multiple ranks of memory devices, and wherein the scheduler is to schedule the command to select the write operation delay per rank.
[Item 14] The controller of item 13, wherein the scheduler is to interleave sending commands to different ranks during the write operation delay and the read operation delay.
[Item 15] The controller of item 6, wherein the NV memory devices include three-dimensional crosspoint (3DXP) memory devices.
[Item 16] A method comprising: receiving a first command to set a value of a register to select between two modes of write operation delay for a nonvolatile (NV) memory device, wherein a first mode has a first write operation delay that does not match a read operation delay, a second mode has a second write operation delay that matches the read operation delay, and the NV memory device has an asymmetry between an intrinsic read operation delay and an intrinsic write operation delay; and receiving a second command to trigger a write operation, wherein the write operation is performed with the write operation delay of the selected first mode or second mode.
[Item 17] The method of item 16, further comprising defaulting to the first mode.
[Item 18] The method of item 16, further comprising defaulting to the second mode.
[Item 19] The method of item 16, wherein receiving the first command includes receiving the first command during runtime of the NV memory device, to dynamically configure the register during the runtime.
[Item 20]
The method of item 16, wherein the NV memory device includes a three-dimensional crosspoint (3DXP) memory device. |
Some embodiments of the present invention are directed to OLED materials useful in display devices and processes for making such OLED materials. The OLED materials may comprise polar compounds integrated with one or more substrates. When the polar compounds are simultaneously cured and exposed to an applied voltage or electric field, the polar compounds may be oriented in the direction of the voltage. Such orientation may result in the light emitted from the OLED material radiating in a single direction. Additional embodiments are directed to a system comprising a display device having a polar light-emitting layer whose dipoles are oriented in a single direction. |
1. A process for preparing an organic light emitting diode (OLED) structure, comprising:
a. coating a substrate with a conductive material to form an anode;
b. coating the anode with a hole transport material to form a coated substrate;
c. optionally applying friction to the coated substrate to form an irregular surface alignment layer;
d. laying a polar organic compound on the coated substrate, and optionally allowing the polar organic compound to fill the irregular surface alignment layer formed in (c), to form a treated coated substrate; and
e. curing the treated coated substrate while exposing the treated coated substrate to an electric field.
2. The process of claim 1, wherein the treated coated substrate is exposed to an electric field of less than 5 volts during the curing.
3. The process of claim 1, comprising:
a. coating a substrate with a conductive material to form an anode;
b. coating the anode with a polyimide material to form a coated substrate;
c. applying friction to the coated substrate to form an irregular surface alignment layer;
d. laying a polar organic compound on the surface of the coated substrate, and allowing the polar organic compound to fill the grooves formed in (c), to form a treated coated substrate; and
e. curing the treated coated substrate while exposing the treated coated substrate to an electric field.
4. The process of one of claims 1 or 3, wherein the coated substrate is exposed to an electric field to align the polar organic compound in a single orientation.
5. The process of one of claims 1 or 3, wherein the electric field is between about 1 volt and 7 volts.
6. A device including an organic light emitting diode structure, comprising:
a. an anode integrated into an anode substrate and connected to a power source;
b. a conductive layer coated on the anode;
c. a hole transport material coated on the anode to form a coated substrate;
d. an optional irregular surface alignment layer formed on the coated substrate;
e. a polar organic compound applied to the surface of the coated substrate, the compound optionally filling the irregular surface alignment layer formed in (c) to form a treated coated substrate;
f. an electron transport layer disposed on the polar organic compound;
g. a cathode disposed on the electron transport layer and supported by a cathode substrate; and
h.
a power source connected to the anode and the cathode, wherein when a voltage from the power source is applied to the anode and the cathode, the dipoles in the polar organic compound are oriented in the same direction.
7. The device of claim 6, wherein the anode is coated with a polyimide material to form a coated substrate.
8. The device of claim 6, wherein the anode substrate and the cathode substrate are selected from the group consisting of glass, plastic, quartz, plastic film, metal, ceramic, and polymer.
9. The device of claim 6, wherein the conductive layer is selected from the group consisting of indium tin oxide, indium zinc oxide, aluminum-doped zinc oxide, indium-doped zinc oxide, magnesium indium oxide, nickel tungsten oxide, gallium nitride, zinc selenide, and zinc sulfide.
10. The device of claim 6, wherein the hole transport material is selected from the group consisting of monoarylamines, diarylamines, triarylamines, polyarylamines, poly-N-vinylcarbazole, polythiophene, polypyrrole, polyaniline, and copolymers thereof.
11. The device of claim 6, wherein the polar organic compound is selected from the group consisting of fluorescent dyes, phosphorescent compounds, transition metal complexes, iridium complexes of phenylpyridine, coumarin, polyfluorene, and polyarylene vinylene.
12. The device of claim 6, wherein the electron transport layer is a metal-chelated 8-hydroxyquinoline compound.
13. A system including:
a central processing unit to execute at least one set of machine-readable instructions;
a storage device to store the machine-readable instructions; and
a display including an OLED structure, the OLED structure including at least one polar light-emitting layer containing dipoles oriented in a single direction, wherein the display device is configured to display an image in response to the set of machine-readable instructions.
14. The system of claim 13, wherein the OLED structure comprises:
a. an anode integrated into an anode substrate and connected to a power source;
b. a conductive layer coated on the anode;
c. a hole transport material coated on the anode to form a coated substrate;
d. an optional irregular surface alignment layer formed on the coated substrate;
e. a polar organic compound applied to the surface of the coated substrate, the compound optionally filling the irregular surface alignment layer formed in (c) to form a treated coated substrate;
f. an electron transport layer disposed on the polar organic compound;
g. a cathode disposed on the electron transport layer and supported by a cathode substrate; and
h.
a power source connected to the anode and the cathode, wherein when a voltage from the power source is applied to the anode and the cathode, the dipoles in the polar organic compound are oriented in the same direction.
15. The system of claim 14, wherein the anode is coated with a polyimide material to form a coated substrate.
16. The system of claim 14, wherein the anode substrate and the cathode substrate are selected from the group consisting of glass, plastic, quartz, plastic film, metal, ceramic, and polymer.
17. The system of claim 14, wherein the conductive layer is selected from the group consisting of indium tin oxide, indium zinc oxide, aluminum-doped zinc oxide, indium-doped zinc oxide, magnesium indium oxide, nickel tungsten oxide, gallium nitride, zinc selenide, and zinc sulfide.
18. The system of claim 14, wherein the hole transport material is selected from the group consisting of monoarylamines, diarylamines, triarylamines, polyarylamines, poly-N-vinylcarbazole, polythiophene, polypyrrole, polyaniline, and copolymers thereof.
19. The system of claim 14, wherein the polar organic compound is selected from the group consisting of fluorescent dyes, phosphorescent compounds, transition metal complexes, iridium complexes of phenylpyridine, coumarin, polyfluorene, and polyarylene vinylene.
20. The system of claim 14, wherein the electron transport layer is a metal-chelated 8-hydroxyquinoline compound. |
Low-power OLED materials for display applications
Background technique
[00001] Liquid crystal displays (LCDs) are commonly used in flat-panel displays such as laptop computers, personal digital assistants, and cellular phones. Displays made with LCDs often use cold cathode fluorescent lamps (CCFLs) or similar devices as the backlight source for the LCD display in order to provide an optical image to users. CCFLs and similar devices are made of fragile and relatively inefficient materials that require converters and consume substantial power (up to 35% of the power of a laptop computer system). The use of CCFLs (made of glass or other rigid materials) makes the display module fragile, difficult to produce and maintain, and expensive to repair after damage. The requirements of these materials also result in larger displays and increase the weight of systems that integrate such displays. Because these displays are commonly used in portable devices, users demand lighter and more durable devices.
[00002] In a move to reduce the weight of displays and increase their durability, some manufacturers use organic light emitting diode (OLED) materials as light sources for mobile devices. An OLED is a thin-film material that emits light when excited by an electric current. Because OLEDs emit light of different colors, they can be used to make displays. As a result, displays made with OLED materials do not require an additional backlight source, thereby eliminating the need for CCFLs made of fragile glass and eliminating factors that make display modules bulky. OLEDs are generally lighter and can operate efficiently at relatively low voltages, thereby consuming less system power. The versatility of light-emitting OLED materials has convinced some manufacturers that OLEDs will replace LCDs in mobile display devices in the near future.
[00003] Although OLEDs can generate light efficiently, more than half of the light is trapped inside the device, so that this light does not contribute to the function of the device. Because the light emitted by an OLED has no preferred emission direction, that is, it emits light equally in all directions, some light is emitted forward toward the viewer, some light is emitted toward the rear of the device, where it may be reflected back into the viewing environment or absorbed, and some light is emitted toward the sides of the device, where it is intercepted and absorbed by the various layers that constitute the device. In general, up to 80% of the light generated by OLED materials may be lost within the system and never reach the viewer.
[00004] Therefore, there is a need for an organic light emitting diode structure that avoids the above problems and improves the efficiency of displays, especially displays in portable devices. The invention relates to a new method for improving the power efficiency of an organic light emitting diode display by changing the device manufacturing process for the OLED material.
BRIEF DESCRIPTION OF THE DRAWINGS
[00005] Figure 1 shows an OLED structure.
[00006] Figure 2 shows an OLED structure with a grooved substrate.
[00007] Figure 3 shows an OLED structure integrated with a display device.
Detailed description
[00008] Some embodiments of the invention relate to OLED structures used in display devices and to processes for manufacturing these OLED structures. The OLED structure may include polar compounds that have dielectric anisotropy and that may be aligned with respect to one or more substrates of the display unit.
When these polar compounds are exposed to an applied voltage or electric field, the compounds respond, and their molecules align in a particular orientation relative to the direction of the electric field or voltage. The orientation can be calibrated in such a way that the light emitted from the OLED material radiates in a dominant direction.
[00009] Throughout the specification, "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearance of the phrases "in one embodiment" or "in an embodiment" in various places throughout the specification does not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[00010] An exemplary embodiment of the present invention includes an OLED material that includes polar functional groups and entities as its molecular constituents; when exposed to an electric field, these constituents are oriented in a dominant direction determined by the electric field, thereby orienting the emitted light in a single, specific direction.
[0011] Referring now to the drawings, in which like elements are designated by like reference numerals, the structure of an OLED material 10 is shown in FIG. 1 (not to scale) in accordance with certain embodiments of the present invention. In the OLED material structure 10, an anode conductive layer 20 is integrated on a substrate 30. A hole transport layer 40 is stacked on the anode coating. A polar light-emitting material layer 50 is provided on the hole transport layer 40. An electron transport layer 60 is provided on the light-emitting layer 50. Finally, a cathode 70, comprising a conductive film supported by a substrate 90, is provided on the electron transport layer 60. The anode 20 and the cathode 70 are connected to a power source 80. When the power is turned on, holes are injected from the anode 20 into the hole transport layer 40; in the light-emitting layer 50, the holes combine with electrons from the cathode 70 and generate visible light.
[0012] Substrates 30 and 90 can be made of any material that can support the anode 20 and the cathode 70 in the form of a conductive coating, and these substrates can be either flexible or rigid. Examples of such materials include, but are not limited to, plastic, glass, quartz, plastic film, metal, ceramic, polymer, and the like. Non-limiting examples of flexible plastic films and plastics include films or sheets made of polyethylene terephthalate (PET), polyethylene naphthalate (PEN), polyethersulfone (PES), polyetherimide, polyetheretherketone, polyphenylene sulfide, polyaryl compounds, polyimide, polycarbonate (PC), cellulose triacetate (TAC), and cellulose acetate propionate. In addition, the substrate material 30 is transparent or light-transmitting, so that light generated by the OLED material can pass through the device and be seen as visible light.
[0013] The anode conductive layer 20 may optionally be formed by coating the substrate with a transparent conductive coating material.
For example, but not limited to, transparent conductive coating materials include indium tin oxide (ITO), indium zinc oxide (IZO), and other conductive oxides (such as, but not limited to, aluminum- or indium-doped zinc oxide, magnesium indium oxide, and nickel tungsten oxide), metal nitrides (such as, but not limited to, gallium nitride), metal selenides (such as, but not limited to, zinc selenide), and metal sulfides (such as, but not limited to, zinc sulfide).

[0014] Above the anode conductive layer 20 is a hole transport material 40. The hole transport material may include an amine (such as, but not limited to, an aromatic tertiary amine). In one form, the aromatic tertiary amine may be an aryl compound such as, but not limited to, a monoarylamine, a diarylamine, a triarylamine, or a polyarylamine. In addition, polymeric hole transport materials include poly-N-vinylcarbazole (PVK), polythiophene, polypyrrole, polyaniline, and copolymers such as poly(3,4-ethylenedioxythiophene)/poly(styrenesulfonic acid) (also known as PEDOT/PSS).

[0015] A polar light emitting layer 50 is formed on the hole transport layer 40. The layer includes a polar fluorescent and/or phosphorescent material; electron-hole pairs recombine in this region, producing electroluminescence from the material. The polar light emitting layer 50 may be composed of a single material or of a host material doped with one or more guest compounds, in which case light emission originates mainly from the dopant and may be of any color. In one illustrative example, the light emitting layer emits white light. As described below, the host material in the polar light emitting layer 50 may be an electron transporting material, a hole transporting material (as described above), or another material or combination of materials that supports electron-hole recombination. Dopants can be selected from highly fluorescent dyes, but phosphorescent compounds such as transition metal complexes are also useful. Iridium complexes of phenylpyridine and their derivatives are particularly useful luminescent dopants. The polar light emitting layer 50 may include a dye such as a coumarin, or may be polymeric in nature. Polymeric materials such as polyfluorenes and polyarylenes (for example, poly(p-phenylene vinylene) (PPV)) can also be used as host materials. At the molecular level, small-molecule dopants can be dispersed into the polymeric host material; alternatively, the dopants can be added by copolymerizing auxiliary components into the host polymer. Any polar light-emitting dopants known to those skilled in the art can be used herein.

[0016] An electron transport layer 60 is formed on the polar light emitting layer 50. The electron transporting material may be any material known to those skilled in the art for this purpose. These compounds help inject and transport electrons, exhibit high performance, and are easily formed into thin films. Examples include, but are not limited to, metal-chelated 8-hydroxyquinoline compounds, including chelates of 8-hydroxyquinoline itself (also commonly referred to as oxine).

[0017] Finally, a cathode 70 is provided on the electron transport layer 60, and the cathode 70 is supported by the substrate 90. The cathode may be made of a transparent or light-transmitting, opaque, or reflective material, and may include almost any conductive material. A suitable cathode material has good film-forming characteristics to ensure good contact with the organic layer below it.
It promotes electron injection at low voltages and has good stability. The cathode materials used usually include metals with a low work function (<4.0 eV) or metal alloys.

[0018] As described above, the substrate 90 may be composed of any material capable of supporting the cathode conductive coating 70, and it may be either flexible or rigid. Examples of such substrates include, but are not limited to, plastic, glass, quartz, plastic films, metals, ceramics, polymers, and the like. Non-limiting examples of flexible plastic films and plastics include films or sheets made of polyethylene terephthalate (PET), polyethylene naphthalate (PEN), polyethersulfone (PES), polyetherimide, polyether ether ketone, polyphenylene sulfide, polyaryl compounds, polyimide, polycarbonate (PC), cellulose triacetate (TAC), and cellulose acetate propionate. In addition, the substrate material 90 may be transparent or light-transmitting, opaque, reflective, or a combination of the above.

[0019] When a potential (i.e., a voltage) is applied to the device from the power source 80, electrons are injected from the electron transport layer 60 into the light emitting layer 50, where they recombine with holes to produce light emission. The cathode 70 reflects the generated light back toward the organic layers. By using multi-color OLED panels known to those skilled in the art, white light or an image including some or all of the colors can be formed using field sequential color technology.

[0020] Exemplary OLED materials of the present invention include a polar light emitting layer material. By exposing the material of the polar light emitting layer to an electric field or an applied voltage, the polar light emitting layer can be polarized (that is, aligned along the direction of the electric field). Such polarization orients the polar material in a particular direction and directs the light emitted by the light emitting layer in the same direction, thereby optimizing the emitted light and reducing problems related to light scattering and channel effects. The polarity of the material originates from the organic light emitting material itself (the dopant host material or the dopant). Compounds that can be used as luminescent materials, dopant host materials, or dopants include those described above and those known to those skilled in the art. Non-limiting examples of organic light emitting materials include amines (including aromatic tertiary amines and arylamines such as, but not limited to, monoarylamines, diarylamines, triarylamines, and polyarylamines), polyimides, thiophene-based polymers, poly-N-vinylcarbazole (PVK), polypyrrole, polyaniline, copolymers (e.g., poly(3,4-ethylenedioxythiophene)/poly(styrenesulfonic acid), also known as PEDOT/PSS), and the other amines mentioned above.

[0021] Another exemplary embodiment of the invention is shown in Figure 2 (not to scale). In the OLED structure 10, the anode conductive layer 20 may be integrated on a substrate 30 having an irregular, non-smooth surface 35 (also referred to as an alignment layer). The alignment layer 35 may provide an irregular, non-smooth surface for subsequent layers. A hole transport layer 40 is provided on the anode coating layer 20 and the alignment layer 35. A polar light emitting material layer 50 is provided on the hole transport layer 40. An electron transport layer 60 is provided on the light emitting layer 50.
Finally, a cathode including a conductive film and supported by the substrate 90 is provided on the electron transport layer 60. The irregular, non-smooth surface of the alignment layer 35 can persist through the deposition process and be present in all the layers of the OLED structure. For example, the light emitting layer 50 may fill a part of the irregular surface of the alignment layer 35. In one embodiment, the polar luminescent compound may fill the alignment layer with a portion of its molecules extending below the surface of the alignment layer and a portion extending above it. The anode 20 and the cathode 70 may be connected to a power source 80 that generates an applied voltage. When the power is turned on, holes are injected from the anode 20 into the hole transport layer 40 and, in the light emitting layer 50, recombine with electrons from the cathode 70 to generate visible light. Because the molecules of the light emitting layer are polar, applying a voltage aligns the dipoles of these molecules in the same direction; for example, during the curing process all the positive ends of the molecules are anchored to the surface of the alignment layer while all the negative ends point away from that surface, or vice versa.

[0022] Once the chemical materials are disposed on the alignment layer 35 or the substrate 30, they undergo a curing process. During curing, a voltage is simultaneously applied to the OLED material, which aligns the polar luminescent compounds in all layers of the OLED material. Throughout the curing cycle, the voltage promotes dipole alignment of the light emitting layer inside the material.

[0023] The applied voltage used to orient the light-emitting dipoles is typically less than about 7 volts. In one embodiment, the voltage is between about 1 volt and about 7 volts. In another embodiment, the voltage is between about 3 volts and about 5 volts.

[00024] The irregular, non-smooth surface of the alignment layer 35 may be formed on the substrate 30 by any technique known in the art. Non-limiting examples of techniques for forming the irregular, non-smooth surface of the alignment layer 35 include a rubbing process and friction transfer. In friction transfer, a solid structure made of an alignment material (such as, but not limited to, a sheet, strip, block, or rod) is pressed against the substrate; under sufficient pressure, the solid alignment material is drawn across the substrate in a selected direction, transferring a thin layer of the alignment material onto the substrate. The selected direction of the friction transfer provides the orienting direction for the subsequent alignment of the layers. Optionally, the substrate may be heated to optimize the behavior of the alignment layer.

[00025] The thickness of the alignment layer is sufficient to provide alignment for subsequent layers, yet can be thin enough that the layer is not fully insulating. An exemplary thickness of the alignment layer of the present invention is 0.1 to 20 microns.
One embodiment of the present invention provides an alignment layer with a thickness between 1 and 10 microns, while another embodiment provides an alignment layer with a thickness between 5 and 7 microns.

[00026] The thickness of the polar luminescent material is between 100 Angstroms and 2000 Angstroms. In one embodiment of the present invention, the thickness of the polar light emitting layer is between 300 and 2000 Angstroms. In another embodiment, the thickness of the polar light emitting layer is between 800 and 2000 Angstroms.

[00027] At room temperature or above, the polar luminescent compound 50 may be applied to the irregular, non-smooth surface of the alignment layer 35 (whose topology is carried through the layers 20 and 40) or to the surface of the substrate 30, to enhance the uniformity of the light emitting compound layer.

[00028] Other embodiments of the invention include a process for preparing an OLED material for use in a display device. FIG. 2 illustrates an exemplary process that includes: coating a substrate 30 with a conductive layer 20 and/or a hole transport layer 40 to form a coated substrate; rubbing the coated substrate to form grooves or other irregular surface features in the alignment layer; applying the polar luminescent compound 50 to the irregular surface of the coated substrate so that the compound fills the grooves or irregular structures formed by rubbing; and then curing the coated substrate while exposing it to an electric field.

[00029] Another exemplary embodiment of the present invention includes: coating the substrate 30 with the conductive layer 20 and/or the hole transport layer 40 to form a coated substrate; applying a polar light emitting compound 50 to the surface of the coated substrate; and then curing the coated substrate while exposing it to an electric field.

[00030] Another exemplary embodiment of the present invention includes an OLED material integrated into a display device. Figure 3 (not to scale) illustrates this exemplary embodiment. When a voltage is applied from the power source 80 to the OLED structure, the light 300 emitted by the OLED structure 10 is transmitted to the display 100 in the direction set by the applied voltage. Because more of the light emitted by the OLED structure 10 can be transmitted to the user, the display 100 can operate at a lower power than displays currently known in the industry.

[00031] The display device may include a light distribution device such as a lens, a polarizer, or an optical viewing element. With the OLED material of the present invention integrated, the display 100 may be any element that sends light from the OLED to a viewer. The display 100 may also include other elements, such as, but not limited to, processors, memory, power supplies, or other peripheral devices, individually or in combination.

[00032] Those skilled in the art should understand that other light distribution devices can also be used, such as, but not limited to, optical waveguides, prisms, lenses, Fresnel lenses, diffusers, interferometers, or any other optics that can evenly and efficiently distribute white light to the display device. Moreover, other optical elements (such as, but not limited to, a polarizer, a refractive element, a diffractive element, a band-pass filter, etc.) may conveniently be disposed outside or near the OLED structure 10.
By using multiple OLED panels as the light source, the size of the OLED structure 10 can be further reduced and the required electric power minimized. By using multi-color OLED panels, white light or an image having some or all of the colors can be formed using field sequential color technology. Alternatively, light may be passed through a light distribution device that scatters the light to uniformly illuminate the display device 100.

[00033] Those skilled in the art will further understand that, optionally, the OLED structure 10 of the present invention may be provided in the display device 100 together with other OLED structures. The OLED structures 10 may be arranged randomly or in a pattern; they may be arranged in a stacked or serial manner, or adjacent to one another. The arrangement of the OLED structures depends on several factors including, but not limited to, the size of the display, the lighting requirements of the display, color, and so on. In addition, those skilled in the art will understand that the OLED material may take the form of, for example, but not limited to, a strip, a film, a block, and the like.

[00034] The light emitted from the OLED structure 10 of the present invention can be controlled by the construction of the OLED structure 10 itself, which can emit white light or colored light. OLEDs that emit colored light and OLEDs that emit white light may be combined, and both integrated into the display device 100.

[00035] In embodiments of the present invention, the brightness of the light and of the color sent to the display device 100 can be changed by adjusting the current and driving voltage applied to the OLED structure 10. A proportional change in current may be applied to each layer in the stack, or to each OLED structure 10 in series, in order to alternatively change the color perceived by the viewer.

[00036] The voltage required to display the light from the OLED structure 10 on the display device 100 may be less than about 15 volts. In one embodiment of the invention, the voltage required to display light from the OLED structure 10 is between about 1 volt and about 12 volts. By changing the voltage applied to the OLED structure 10, the brightness of the displayed light from the OLED structure 10 can be changed.

[00037] The OLED structure 10 of the present invention can be integrated into any system that benefits from an image display. The OLED structure 10 may be integrated into displays in addition to (or in lieu of) the LCD displays and other displays known in the industry. Systems that include display devices include, but are not limited to, those used in laptops, personal digital assistants, cellular phones, and the like.

[00038] In addition to the display device 100, the system also includes (but is not limited to) a processing unit, a system memory, and a system bus that connects the various system devices, including the system memory, to the processing unit. The system bus can be any of several bus structures, including a memory bus or memory controller, a peripheral bus, and a local bus using any bus architecture.
For example, but not limited to, such architectures include the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus (also called the Mezzanine bus).

[00039] System memory includes computer storage media in the form of volatile and/or non-volatile memory, such as read-only memory (ROM) and random access memory (RAM). A basic input/output system, containing the basic routines that facilitate the transfer of information between elements within the system during startup, is usually stored in ROM. The RAM typically contains data, program modules, and/or computer-executable instructions that are directly accessible to the processing unit and/or currently being executed by the processing unit.

[00040] Although the invention has been particularly shown and described in connection with exemplary embodiments, those skilled in the art will understand that the foregoing and other changes in form and detail can be made to these embodiments without departing from the scope and spirit of the invention. Accordingly, the invention is not limited to the precise forms described and shown, but falls within the scope of the appended claims. |
An apparatus and method are described for performing SIMD reduction operations. For example, one embodiment of a processor comprises: a value vector register containing a plurality of data element values to be reduced; an index vector register to store a plurality of index values indicating which values in the value vector register are associated with one another; single instruction multiple data (SIMD) reduction logic to perform reduction operations on the data element values within the value vector register by combining data element values from the value vector register which are associated with one another as indicated by the index values in the index vector register; and an accumulation vector register to store results of the reduction operations generated by the SIMD reduction logic. |
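As a rough, scalar illustration of the semantics this abstract describes (a hedged sketch only; the function and variable names are invented, and a real implementation would use SIMD hardware rather than loops), each value-vector element is combined with every other element that shares its index-vector entry, and each combined result is written to the accumulator lane of the most significant matching position:

    #include <stdio.h>

    #define VLEN 8  /* illustrative SIMD width */

    /* Scalar model of the index-directed reduction: values whose index
     * entries match are summed, and each sum lands in the accumulator
     * lane of the most significant (highest-numbered) matching lane. */
    static void reduce_by_index(const int values[VLEN], const int indices[VLEN],
                                int accum[VLEN], unsigned char valid[VLEN]) {
        for (int lane = 0; lane < VLEN; ++lane) {
            accum[lane] = 0;
            valid[lane] = 0;
        }
        for (int lane = 0; lane < VLEN; ++lane) {
            int dest = lane;                 /* find the highest lane with this index */
            for (int j = lane + 1; j < VLEN; ++j)
                if (indices[j] == indices[lane]) dest = j;
            accum[dest] += values[lane];
            valid[dest] = 1;
        }
    }

    int main(void) {
        int values[VLEN]  = {1, 2, 3, 4, 5, 6, 7, 8};
        int indices[VLEN] = {0, 1, 0, 2, 1, 0, 2, 3};
        int accum[VLEN];
        unsigned char valid[VLEN];
        reduce_by_index(values, indices, accum, valid);
        for (int lane = 0; lane < VLEN; ++lane)
            if (valid[lane])
                printf("lane %d (index %d): sum = %d\n", lane, indices[lane], accum[lane]);
        return 0;
    }

Here lanes 0, 2, and 5 share index 0, so their values 1 + 3 + 6 = 10 accumulate into lane 5; the claims that follow also allow the least significant matching lane to be chosen instead.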
CLAIMSWhat is claimed is:1. A processor comprising:a value vector register to store a plurality of data element values to be reduced; an index vector register to store a plurality of index values indicating which values in the value vector register are associated with one another;single instruction multiple data (SIMD) reduction logic to perform reduction operations on the data element values within the value vector register by combining data element values from the value vector register which are associated with one another as indicated by the index values in the index vector register; andan accumulation vector register to store results of the reduction operations generated by the SIMD reduction logic.2. The processor as in claim 1 wherein to perform the reduction operations the SIMD reduction logic is to determine groups of data element values which have the same index value and to combine the data elements having the same index values to generate a plurality of results, each result of the plurality comprising an arithmetic combination of a group of data element values sharing the same index value.3. The processor as in claim 2 wherein the SIMD reduction logic is to store each result within a specified data element location of the accumulation vector register.4. The processor as in claim 3 wherein the SIMD reduction logic is to perform the reduction operations by performing a plurality of combination iterations on element values sharing the same index value, each of the combination iterations combining pairs of data element values until a final result is reached in a final iteration.5. The processor as in claim 3 wherein each specified data element location in the accumulation register comprises a location corresponding to a location of an associated index value having a most significant location relative to others of the same index value in the index vector register or a location corresponding to a location of an associated index value having a least significant location relative to others of the same index value in the index vector register.6. The processor as in claim 1 wherein each of the data element values within the value vector register is associated with a SIMD lane in the processor and wherein performing the reduction operations further comprises:calculating conflicts across each of the lanes to generate conflict results and storing the conflict results in a conflict destination register.7. The processor as in claim 6 wherein performing the reduction operations further comprises:marking each lane with the same index value as left and right children in their respective reduction trees to generate a bit sequence.8. The processor as in claim 7 wherein performing the reduction operations further comprises:using the bit sequence as a mask which marks the left children as active or which marks the right children as active.9. The processor as in claim 8 wherein the reduction operations further comprise, for each lane, calculating a bit-index of a most significant 1 indicating a leftmost lane with an equal index value to the right if the mask marks the left children as active or indicating a rightmost lane with an equal index value to the left if the mask marks the right children as active.10.
The processor as in claim 9 wherein the reduction operations further comprise moving right children into alignment with left children if the mask marks the left children as active or moving left children into alignment with right children if the mask marks the right children as active to generate a temporary result and placing the temporary result in a temporary location.11. The processor as in claim 10 further comprising applying a reduction operation to the temporary result with original data to combine left and right children to generate a new result, and placing the new result in the lane associated with the left child if the mask marks the left children as active or placing the new result in the lane associated with the right child if the mask marks the right children as active.12. The processor as in claim 10 wherein performing the reduction operations further comprises:performing a bitwise AND operation of the mask and the conflict results, thereby clearing bits in the conflicts destination register associated with one or more right children and removing those right children from consideration in future iterations if the mask marks the left children as active or performing a bitwise AND operation of the mask and the conflict results, thereby clearing bits in the conflicts destination register associated with one or more left children and removing those left children from consideration in future iterations if the mask marks the right children as active.13. The processor as in claim 2 wherein the SIMD reduction logic is to determine groups of data element values which have the same index value and to combine the data elements by adding the data elements having the same index values to generate a plurality of results, each result of the plurality comprising a sum of a group of data element values sharing the same index value.14. A method comprising:storing a plurality of data element values to be reduced in a value vector register; storing a plurality of index values indicating which values in the value vector register are associated with one another in an index vector register;performing reduction operations on the data element values within the value vector register by combining data element values from the value vector register which are associated with one another as indicated by the index values in the index vector register; andstoring results of the reduction operations in an accumulation vector register.15. The method as in claim 14 wherein performing the reduction operations comprises determining groups of data element values which have the same index value and combining the data elements having the same index values to generate a plurality of results, each result of the plurality comprising an arithmetic combination of a group of data element values sharing the same index value.16. The method as in claim 15 further comprising storing each result within a specified data element location of the accumulation vector register.17. The method as in claim 16 further comprising performing the reduction operations by performing a plurality of combination iterations on element values sharing the same index value, each of the combination iterations combining pairs of data element values until a final result is reached in a final iteration.18.
The method as in claim 16 wherein each specified data element location in the accumulation register comprises a location corresponding to a location of an associated index value having a most significant location relative to others of the same index value in the index vector register or a location corresponding to a location of an associated index value having a least significant location relative to others of the same index value in the index vector register.19. The method as in claim 14 wherein each of the data element values within the value vector register is associated with a SIMD lane in a processor and wherein performing the reduction operations further comprises:calculating conflicts across each of the lanes to generate conflict results and storing the conflict results in a conflict destination register.20. The method as in claim 19 wherein performing the reduction operations further comprises:marking each lane with the same index value as left and right children in their respective reduction trees to generate a bit sequence.21. The method as in claim 20 wherein performing the reduction operations further comprises:using the bit sequence as a mask which marks the left children as active or which marks the right children as active.22. The method as in claim 21 wherein the reduction operations further comprise, for each lane, calculating a bit-index of a most significant 1 indicating a leftmost lane with an equal index value to the right if the mask marks the left children as active or indicating a rightmost lane with an equal index value to the left if the mask marks the right children as active.23. The method as in claim 22 wherein the reduction operations further comprise moving right children into alignment with left children if the mask marks the left children as active or moving left children into alignment with right children if the mask marks the right children as active to generate a temporary result and placing the temporary result in a temporary location.24. The method as in claim 23 further comprising applying a reduction operation to the temporary result with original data to combine left and right children to generate a new result, and placing the new result in the lane associated with the left child if the mask marks the left children as active or placing the new result in the lane associated with the right child if the mask marks the right children as active.25. The method as in claim 23 wherein performing the reduction operations further comprises:performing a bitwise AND operation of the mask and the conflict results, thereby clearing bits in the conflicts destination register associated with one or more right children and removing those right children from consideration in future iterations if the mask marks the left children as active or performing a bitwise AND operation of the mask and the conflict results, thereby clearing bits in the conflicts destination register associated with one or more left children and removing those left children from consideration in future iterations if the mask marks the right children as active. |
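The iterative, tree-style combining recited in claims 19 through 25 can be emulated in scalar C as follows. This is a hedged sketch rather than the patented hardware: conflict_mask models a vpconflict-style comparison of each lane against all lower-numbered lanes, right children are identified here by the parity of their remaining conflict bits (an assumption chosen to keep each pass's pairs disjoint, not the claims' exact mask sequence), and every name is illustrative.

    #include <stdio.h>

    #define VLEN 8

    /* vpconflict-style: bitmask of lower-numbered lanes with an equal index */
    static unsigned conflict_mask(const int idx[VLEN], int lane) {
        unsigned m = 0;
        for (int j = 0; j < lane; ++j)
            if (idx[j] == idx[lane]) m |= 1u << j;
        return m;
    }

    static int msb(unsigned x) {            /* bit-index of the most significant 1 */
        int b = -1;
        while (x) { x >>= 1; ++b; }
        return b;
    }

    static int popcount(unsigned x) {
        int c = 0;
        while (x) { c += x & 1; x >>= 1; }
        return c;
    }

    /* Each pass pairs every "right child" (odd count of remaining conflicts)
     * with the nearest lower lane of equal index, folds its value in, retires
     * it, and clears its bit from the surviving masks, halving each group of
     * equal-index lanes per pass (iterative pairing, as in claims 4 and 17). */
    static void tree_reduce(int val[VLEN], const int idx[VLEN], unsigned char root[VLEN]) {
        unsigned conf[VLEN];
        for (int i = 0; i < VLEN; ++i) { conf[i] = conflict_mask(idx, i); root[i] = 1; }
        for (int changed = 1; changed; ) {
            changed = 0;
            int partner[VLEN];
            for (int i = 0; i < VLEN; ++i)      /* right children pair leftward */
                partner[i] = (root[i] && conf[i] && (popcount(conf[i]) & 1))
                                 ? msb(conf[i]) : -1;
            for (int i = 0; i < VLEN; ++i) {
                if (partner[i] < 0) continue;
                val[partner[i]] += val[i];      /* combine right child into left */
                root[i] = 0;                    /* right child leaves the tree */
                changed = 1;
            }
            for (int i = 0; i < VLEN; ++i)      /* AND retired lanes out of the masks */
                for (int j = 0; j < VLEN; ++j)
                    if (!root[j]) conf[i] &= ~(1u << j);
        }
    }

    int main(void) {
        int val[VLEN] = {1, 2, 3, 4, 5, 6, 7, 8};
        int idx[VLEN] = {0, 1, 0, 2, 1, 0, 2, 3};
        unsigned char root[VLEN];
        tree_reduce(val, idx, root);
        for (int i = 0; i < VLEN; ++i)
            if (root[i]) printf("index %d -> sum %d (lane %d)\n", idx[i], val[i], i);
        return 0;
    }

Because left children absorb right children here, each final sum lands in the least significant lane of its group, which is one of the two placements claim 18 permits.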
METHOD AND APPARATUS FOR PERFORMING REDUCTION OPERATIONS ON A SET OF VECTOR ELEMENTS

BACKGROUND

Field of the Invention

[0001] This invention relates generally to the field of computer processors. More particularly, the invention relates to a method and apparatus for performing reduction operations on a set of vector elements.

Description of the Related Art

[0002] An instruction set, or instruction set architecture (ISA), is the part of the computer architecture related to programming, including the native data types, instructions, register architecture, addressing modes, memory architecture, interrupt and exception handling, and external input and output (I/O). It should be noted that the term "instruction" generally refers herein to macro-instructions - that is, instructions that are provided to the processor for execution - as opposed to micro-instructions or micro-ops - that is, the result of a processor's decoder decoding macro-instructions. The micro-instructions or micro-ops can be configured to instruct an execution unit on the processor to perform operations to implement the logic associated with the macro-instruction.

[0003] The ISA is distinguished from the microarchitecture, which is the set of processor design techniques used to implement the instruction set. Processors with different microarchitectures can share a common instruction set. For example, Intel® Pentium 4 processors, Intel® Core™ processors, and processors from Advanced Micro Devices, Inc. of Sunnyvale, CA implement nearly identical versions of the x86 instruction set (with some extensions that have been added with newer versions), but have different internal designs. For example, the same register architecture of the ISA may be implemented in different ways in different microarchitectures using well-known techniques, including dedicated physical registers, one or more dynamically allocated physical registers using a register renaming mechanism (e.g., the use of a Register Alias Table (RAT), a Reorder Buffer (ROB), and a retirement register file). Unless otherwise specified, the phrases register architecture, register file, and register are used herein to refer to that which is visible to the software/programmer and the manner in which instructions specify registers. Where a distinction is required, the adjective "logical," "architectural," or "software visible" will be used to indicate registers/files in the register architecture, while different adjectives will be used to designate registers in a given microarchitecture (e.g., physical register, reorder buffer, retirement register, register pool).

[0004] An instruction set includes one or more instruction formats. A given instruction format defines various fields (number of bits, location of bits) to specify, among other things, the operation to be performed and the operand(s) on which that operation is to be performed. Some instruction formats are further broken down through the definition of instruction templates (or subformats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because fewer fields are included) and/or defined to have a given field interpreted differently. A given instruction is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and specifies the operation and the operands.
An instruction stream is a specific sequence of instructions, where each instruction in the sequence is an occurrence of an instruction in an instruction format (and, if defined, a given one of the instruction templates of that instruction format).

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] A better understanding of the present invention can be obtained from the following detailed description in conjunction with the following drawings, in which:

[0006] FIGS. 1A and 1B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments of the invention;

[0007] FIGS. 2A-D are block diagrams illustrating an exemplary specific vector friendly instruction format according to embodiments of the invention;

[0008] FIG. 3 is a block diagram of a register architecture according to one embodiment of the invention;

[0009] FIG. 4A is a block diagram illustrating both an exemplary in-order fetch, decode, retire pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention;

[0010] FIG. 4B is a block diagram illustrating both an exemplary embodiment of an in-order fetch, decode, retire core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention;

[0011] FIG. 5A is a block diagram of a single processor core, along with its connection to an on-die interconnect network;

[0012] FIG. 5B illustrates an expanded view of part of the processor core in FIG. 5A according to embodiments of the invention;

[0013] FIG. 6 is a block diagram of a single core processor and a multicore processor with integrated memory controller and graphics according to embodiments of the invention;

[0014] FIG. 7 illustrates a block diagram of a system in accordance with one embodiment of the present invention;

[0015] FIG. 8 illustrates a block diagram of a second system in accordance with an embodiment of the present invention;

[0016] FIG. 9 illustrates a block diagram of a third system in accordance with an embodiment of the present invention;

[0017] FIG. 10 illustrates a block diagram of a system on a chip (SoC) in accordance with an embodiment of the present invention;

[0018] FIG. 11 illustrates a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention;

[0019] FIG. 12 illustrates how conflict detection operations may be performed in accordance with one embodiment of the invention;

[0020] FIG. 13 illustrates one embodiment of the invention for performing reduction operations on data elements within a value vector register;

[0021] FIG. 14 illustrates additional details of how conflicts are detected using index values and stored within a vector register;

[0022] FIG. 15 illustrates additional details related to the performance of reduction operations in accordance with one embodiment of the invention; and

[0023] FIG. 16 illustrates a method in accordance with one embodiment of the invention.

DETAILED DESCRIPTION

[0024] In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention described below. It will be apparent, however, to one skilled in the art that the embodiments of the invention may be practiced without some of these specific details.
In other instances, well-known structures and devices are shown in block diagram form to avoid obscuring the underlying principles of the embodiments of the invention.

EXEMPLARY PROCESSOR ARCHITECTURES AND DATA TYPES

[0025] An instruction set includes one or more instruction formats. A given instruction format defines various fields (number of bits, location of bits) to specify, among other things, the operation to be performed (opcode) and the operand(s) on which that operation is to be performed. Some instruction formats are further broken down through the definition of instruction templates (or subformats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because fewer fields are included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands. For example, an exemplary ADD instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source1/destination and source2); and an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands. A set of SIMD extensions referred to as the Advanced Vector Extensions (AVX) (AVX1 and AVX2), using the Vector Extensions (VEX) coding scheme, has been released and/or published (e.g., see Intel® 64 and IA-32 Architectures Software Developers Manual, October 2011; and Intel® Advanced Vector Extensions Programming Reference, June 2011).

Exemplary Instruction Formats

[0026] Embodiments of the instruction(s) described herein may be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

A. Generic Vector Friendly Instruction Format

[0027] A vector friendly instruction format is an instruction format that is suited for vector instructions (e.g., there are certain fields specific to vector operations). While embodiments are described in which both vector and scalar operations are supported through the vector friendly instruction format, alternative embodiments use only vector operations through the vector friendly instruction format.

[0028] Figures 1A-1B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments of the invention. Figure 1A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to embodiments of the invention, while Figure 1B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to embodiments of the invention. Specifically, a generic vector friendly instruction format 100 is shown for which class A and class B instruction templates are defined, both of which include no memory access 105 instruction templates and memory access 120 instruction templates.
The term generic in the context of the vector friendly instruction format refers to the instruction format not being tied to any specific instruction set.

[0029] While embodiments of the invention will be described in which the vector friendly instruction format supports the following: a 64 byte vector operand length (or size) with 32 bit (4 byte) or 64 bit (8 byte) data element widths (or sizes) (and thus, a 64 byte vector consists of either 16 doubleword-size elements or, alternatively, 8 quadword-size elements); a 64 byte vector operand length (or size) with 16 bit (2 byte) or 8 bit (1 byte) data element widths (or sizes); a 32 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); and a 16 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); alternative embodiments may support more, fewer, and/or different vector operand sizes (e.g., 256 byte vector operands) with more, fewer, or different data element widths (e.g., 128 bit (16 byte) data element widths).

[0030] The class A instruction templates in Figure 1A include: 1) within the no memory access 105 instruction templates there is shown a no memory access, full round control type operation 110 instruction template and a no memory access, data transform type operation 115 instruction template; and 2) within the memory access 120 instruction templates there is shown a memory access, temporal 125 instruction template and a memory access, non-temporal 130 instruction template. The class B instruction templates in Figure 1B include: 1) within the no memory access 105 instruction templates there is shown a no memory access, write mask control, partial round control type operation 112 instruction template and a no memory access, write mask control, vsize type operation 117 instruction template; and 2) within the memory access 120 instruction templates there is shown a memory access, write mask control 127 instruction template.

[0031] The generic vector friendly instruction format 100 includes the following fields listed below in the order illustrated in Figures 1A-1B.

[0032] Format field 140 - a specific value (an instruction format identifier value) in this field uniquely identifies the vector friendly instruction format, and thus occurrences of instructions in the vector friendly instruction format in instruction streams. As such, this field is optional in the sense that it is not needed for an instruction set that has only the generic vector friendly instruction format.

[0033] Base operation field 142 - its content distinguishes different base operations.

[0034] Register index field 144 - its content, directly or through address generation, specifies the locations of the source and destination operands, be they in registers or in memory. These include a sufficient number of bits to select N registers from a PxQ (e.g. 32x512, 16x128, 32x1024, 64x1024) register file.
While in one embodiment N may be up to three sources and one destination register, alternative embodiments may support more or fewer source and destination registers (e.g., may support up to two sources where one of these sources also acts as the destination, may support up to three sources where one of these sources also acts as the destination, may support up to two sources and one destination).

[0035] Modifier field 146 - its content distinguishes occurrences of instructions in the generic vector instruction format that specify memory access from those that do not; that is, between no memory access 105 instruction templates and memory access 120 instruction templates. Memory access operations read and/or write to the memory hierarchy (in some cases specifying the source and/or destination addresses using values in registers), while non-memory access operations do not (e.g., the source and destinations are registers). While in one embodiment this field also selects between three different ways to perform memory address calculations, alternative embodiments may support more, fewer, or different ways to perform memory address calculations.

[0036] Augmentation operation field 150 - its content distinguishes which one of a variety of different operations is to be performed in addition to the base operation. This field is context specific. In one embodiment of the invention, this field is divided into a class field 168, an alpha field 152, and a beta field 154. The augmentation operation field 150 allows common groups of operations to be performed in a single instruction rather than 2, 3, or 4 instructions.

[0037] Scale field 160 - its content allows for the scaling of the index field's content for memory address generation (e.g., for address generation that uses 2^scale * index + base).

[0038] Displacement Field 162A - its content is used as part of memory address generation (e.g., for address generation that uses 2^scale * index + base + displacement).

[0039] Displacement Factor Field 162B (note that the juxtaposition of displacement field 162A directly over displacement factor field 162B indicates one or the other is used) - its content is used as part of address generation; it specifies a displacement factor that is to be scaled by the size of a memory access (N) - where N is the number of bytes in the memory access (e.g., for address generation that uses 2^scale * index + base + scaled displacement). Redundant low-order bits are ignored and hence the displacement factor field's content is multiplied by the memory operand's total size (N) in order to generate the final displacement to be used in calculating an effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field 174 (described later herein) and the data manipulation field 154C. The displacement field 162A and the displacement factor field 162B are optional in the sense that they are not used for the no memory access 105 instruction templates and/or different embodiments may implement only one or none of the two.
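To make the two displacement flavors concrete, here is a small C model of the address generation just described. It is illustrative only: the function name is invented, and the rule that the full displacement equals the encoded 8-bit factor times the memory access size N is a direct reading of the paragraph above, not a complete decoder.

    #include <stdint.h>
    #include <stdio.h>

    /* effective = base + (index << scale) + displacement, where a
     * displacement *factor* is first multiplied by the memory access
     * size N, letting a one-byte encoding reach N times as far. */
    static uint64_t effective_address(uint64_t base, uint64_t index,
                                      unsigned scale,     /* 0..3 -> x1,x2,x4,x8 */
                                      int8_t disp_factor, /* compressed displacement */
                                      unsigned n)         /* memory access size in bytes */
    {
        int64_t displacement = (int64_t)disp_factor * (int64_t)n;
        return base + (index << scale) + (uint64_t)displacement;
    }

    int main(void) {
        /* e.g. a 64-byte memory access: a factor of 2 reaches byte offset 128 */
        uint64_t ea = effective_address(0x1000, 4, 3, 2, 64);
        printf("effective address = 0x%llx\n", (unsigned long long)ea);
        return 0;
    }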
[0040] Data element width field 164 - its content distinguishes which one of a number of data element widths is to be used (in some embodiments for all instructions; in other embodiments for only some of the instructions). This field is optional in the sense that it is not needed if only one data element width is supported and/or data element widths are supported using some aspect of the opcodes.

[0041] Write mask field 170 - its content controls, on a per data element position basis, whether that data element position in the destination vector operand reflects the result of the base operation and augmentation operation. Class A instruction templates support merging-writemasking, while class B instruction templates support both merging- and zeroing-writemasking. When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the augmentation operation); in another embodiment, preserving the old value of each element of the destination where the corresponding mask bit has a 0. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last one); however, it is not necessary that the elements that are modified be consecutive. Thus, the write mask field 170 allows for partial vector operations, including loads, stores, arithmetic, logical, etc. While embodiments of the invention are described in which the write mask field's 170 content selects one of a number of write mask registers that contains the write mask to be used (and thus the write mask field's 170 content indirectly identifies the masking to be performed), alternative embodiments instead or in addition allow the write mask field's 170 content to directly specify the masking to be performed.

[0042] Immediate field 172 - its content allows for the specification of an immediate. This field is optional in the sense that it is not present in an implementation of the generic vector friendly format that does not support immediates and it is not present in instructions that do not use an immediate.

[0043] Class field 168 - its content distinguishes between different classes of instructions. With reference to Figures 1A-B, the contents of this field select between class A and class B instructions. In Figures 1A-B, rounded corner squares are used to indicate that a specific value is present in a field (e.g., class A 168A and class B 168B for the class field 168, respectively, in Figures 1A-B).

Instruction Templates of Class A

[0044] In the case of the non-memory access 105 instruction templates of class A, the alpha field 152 is interpreted as an RS field 152A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 152A.1 and data transform 152A.2 are respectively specified for the no memory access, round type operation 110 and the no memory access, data transform type operation 115 instruction templates), while the beta field 154 distinguishes which of the operations of the specified type is to be performed.
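The merging versus zeroing behavior described for the write mask field above can be sketched in scalar C as below (a hedged model only; the names are invented and each loop iteration stands in for one SIMD lane):

    #include <stdio.h>

    #define VLEN 8

    /* Per-element write masking: a masked-off destination element keeps its
     * old value under merging-masking and is cleared under zeroing-masking. */
    static void masked_add(const int a[VLEN], const int b[VLEN],
                           int dst[VLEN], unsigned char mask, int zeroing) {
        for (int i = 0; i < VLEN; ++i) {
            if (mask & (1u << i))
                dst[i] = a[i] + b[i];   /* unmasked element is written */
            else if (zeroing)
                dst[i] = 0;             /* zeroing: masked element becomes 0 */
            /* merging: masked element is left untouched */
        }
    }

    int main(void) {
        int a[VLEN] = {1, 1, 1, 1, 1, 1, 1, 1}, b[VLEN] = {2, 2, 2, 2, 2, 2, 2, 2};
        int merged[VLEN] = {9, 9, 9, 9, 9, 9, 9, 9};
        int zeroed[VLEN] = {9, 9, 9, 9, 9, 9, 9, 9};
        masked_add(a, b, merged, 0x0F, 0);  /* low four lanes written; rest keep 9 */
        masked_add(a, b, zeroed, 0x0F, 1);  /* low four lanes written; rest become 0 */
        for (int i = 0; i < VLEN; ++i)
            printf("lane %d: merging=%d zeroing=%d\n", i, merged[i], zeroed[i]);
        return 0;
    }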
In the no memory access 105 instruction templates, the scale field 160, the displacement field 162A, and the displacement scale field 162B are not present.

No-Memory Access Instruction Templates - Full Round Control Type Operation

[0045] In the no memory access full round control type operation 110 instruction template, the beta field 154 is interpreted as a round control field 154A, whose content(s) provide static rounding. While in the described embodiments of the invention the round control field 154A includes a suppress all floating point exceptions (SAE) field 156 and a round operation control field 158, alternative embodiments may encode both of these concepts into the same field or only have one or the other of these concepts/fields (e.g., may have only the round operation control field 158).

[0046] SAE field 156 - its content distinguishes whether or not to disable the exception event reporting; when the SAE field's 156 content indicates suppression is enabled, a given instruction does not report any kind of floating-point exception flag and does not raise any floating point exception handler.

[0047] Round operation control field 158 - its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero and Round-to-nearest). Thus, the round operation control field 158 allows for the changing of the rounding mode on a per instruction basis. In one embodiment of the invention where a processor includes a control register for specifying rounding modes, the round operation control field's 150 content overrides that register value.

No Memory Access Instruction Templates - Data Transform Type Operation

[0048] In the no memory access data transform type operation 115 instruction template, the beta field 154 is interpreted as a data transform field 154B, whose content distinguishes which one of a number of data transforms is to be performed (e.g., no data transform, swizzle, broadcast).

[0049] In the case of a memory access 120 instruction template of class A, the alpha field 152 is interpreted as an eviction hint field 152B, whose content distinguishes which one of the eviction hints is to be used (in Figure 1A, temporal 152B.1 and non-temporal 152B.2 are respectively specified for the memory access, temporal 125 instruction template and the memory access, non-temporal 130 instruction template), while the beta field 154 is interpreted as a data manipulation field 154C, whose content distinguishes which one of a number of data manipulation operations (also known as primitives) is to be performed (e.g., no manipulation; broadcast; up conversion of a source; and down conversion of a destination). The memory access 120 instruction templates include the scale field 160, and optionally the displacement field 162A or the displacement scale field 162B.

[0050] Vector memory instructions perform vector loads from and vector stores to memory, with conversion support. As with regular vector instructions, vector memory instructions transfer data from/to memory in a data element-wise fashion, with the elements that are actually transferred dictated by the contents of the vector mask that is selected as the write mask.

Memory Access Instruction Templates - Temporal

[0051] Temporal data is data likely to be reused soon enough to benefit from caching. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.
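The per-instruction rounding override described for the round operation control field can be pictured with a library-level analogy. This sketch is an assumption-laden illustration, not the patent's mechanism: it uses the standard C99 <fenv.h> interface, the helper name is invented, and the saved global mode plays the role of the control register that the per-instruction field overrides.

    #include <fenv.h>
    #include <stdio.h>

    /* Perform one addition under a caller-chosen rounding mode, then
     * restore the ambient ("control register") mode. */
    static double add_with_rounding(double a, double b, int mode) {
        int saved = fegetround();   /* the global default */
        fesetround(mode);           /* per-operation override */
        volatile double r = a + b;  /* volatile keeps the add inside the override */
        fesetround(saved);          /* restore the default */
        return r;
    }

    int main(void) {
        double up   = add_with_rounding(1.0, 1e-20, FE_UPWARD);
        double down = add_with_rounding(1.0, 1e-20, FE_DOWNWARD);
        printf("round-up:   %.17g\nround-down: %.17g\n", up, down);
        return 0;
    }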
Memory Access Instruction Templates - Non-Temporal

[0052] Non-temporal data is data unlikely to be reused soon enough to benefit from caching in the 1st-level cache and should be given priority for eviction. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Instruction Templates of Class B

[0053] In the case of the instruction templates of class B, the alpha field 152 is interpreted as a write mask control (Z) field 152C, whose content distinguishes whether the write masking controlled by the write mask field 170 should be a merging or a zeroing.

[0054] In the case of the non-memory access 105 instruction templates of class B, part of the beta field 154 is interpreted as an RL field 157A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 157A.1 and vector length (VSIZE) 157A.2 are respectively specified for the no memory access, write mask control, partial round control type operation 112 instruction template and the no memory access, write mask control, VSIZE type operation 117 instruction template), while the rest of the beta field 154 distinguishes which of the operations of the specified type is to be performed. In the no memory access 105 instruction templates, the scale field 160, the displacement field 162A, and the displacement scale field 162B are not present.

[0055] In the no memory access, write mask control, partial round control type operation 112 instruction template, the rest of the beta field 154 is interpreted as a round operation field 159A and exception event reporting is disabled (a given instruction does not report any kind of floating-point exception flag and does not raise any floating point exception handler).

[0056] Round operation control field 159A - just as round operation control field 158, its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero and Round-to-nearest). Thus, the round operation control field 159A allows for the changing of the rounding mode on a per instruction basis. In one embodiment of the invention where a processor includes a control register for specifying rounding modes, the round operation control field's 150 content overrides that register value.

[0057] In the no memory access, write mask control, VSIZE type operation 117 instruction template, the rest of the beta field 154 is interpreted as a vector length field 159B, whose content distinguishes which one of a number of data vector lengths is to be performed on (e.g., 128, 256, or 512 byte).

[0058] In the case of a memory access 120 instruction template of class B, part of the beta field 154 is interpreted as a broadcast field 157B, whose content distinguishes whether or not the broadcast type data manipulation operation is to be performed, while the rest of the beta field 154 is interpreted as the vector length field 159B. The memory access 120 instruction templates include the scale field 160, and optionally the displacement field 162A or the displacement scale field 162B.

[0059] With regard to the generic vector friendly instruction format 100, a full opcode field 174 is shown including the format field 140, the base operation field 142, and the data element width field 164.
While one embodiment is shown where the full opcode field 174 includes all of these fields, the full opcode field 174 includes less than all of these fields in embodiments that do not support all of them. The full opcode field 174 provides the operation code (opcode).

[0060] The augmentation operation field 150, the data element width field 164, and the write mask field 170 allow these features to be specified on a per instruction basis in the generic vector friendly instruction format.

[0061] The combination of write mask field and data element width field creates typed instructions in that they allow the mask to be applied based on different data element widths.

[0062] The various instruction templates found within class A and class B are beneficial in different situations. In some embodiments of the invention, different processors or different cores within a processor may support only class A, only class B, or both classes. For instance, a high performance general purpose out-of-order core intended for general-purpose computing may support only class B, a core intended primarily for graphics and/or scientific (throughput) computing may support only class A, and a core intended for both may support both (of course, a core that has some mix of templates and instructions from both classes but not all templates and instructions from both classes is within the purview of the invention). Also, a single processor may include multiple cores, all of which support the same class or in which different cores support different classes. For instance, in a processor with separate graphics and general purpose cores, one of the graphics cores intended primarily for graphics and/or scientific computing may support only class A, while one or more of the general purpose cores may be high performance general purpose cores with out of order execution and register renaming intended for general-purpose computing that support only class B. Another processor that does not have a separate graphics core may include one or more general purpose in-order or out-of-order cores that support both class A and class B. Of course, features from one class may also be implemented in the other class in different embodiments of the invention. Programs written in a high level language would be put (e.g., just in time compiled or statically compiled) into a variety of different executable forms, including: 1) a form having only instructions of the class(es) supported by the target processor for execution; or 2) a form having alternative routines written using different combinations of the instructions of all classes and having control flow code that selects the routines to execute based on the instructions supported by the processor which is currently executing the code.
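The second executable form described above, alternative routines plus control-flow code that picks one at run time, can be sketched as follows. This is a hedged example: __builtin_cpu_init and __builtin_cpu_supports are real GCC/Clang builtins, but mapping "class B" to the AVX2 feature bit is purely an illustrative stand-in, and the kernel bodies are placeholders.

    #include <stdio.h>

    static void kernel_generic(void) { puts("fallback routine"); }
    static void kernel_class_b(void) { puts("routine built from class B instructions"); }

    int main(void) {
    #if defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__))
        __builtin_cpu_init();                  /* populate CPU feature data */
        if (__builtin_cpu_supports("avx2"))    /* stand-in for "supports class B" */
            kernel_class_b();
        else
    #endif
            kernel_generic();                  /* class-independent fallback */
        return 0;
    }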
This format remains consistent with the prefix encoding field, real opcode byte field, MOD R/M field, SIB field, displacement field, and immediate fields of the existing x86 instruction set with extensions. The fields from Figure 1 into which the fields from Figure 2 map are illustrated.

[0064] It should be understood that, although embodiments of the invention are described with reference to the specific vector friendly instruction format 200 in the context of the generic vector friendly instruction format 100 for illustrative purposes, the invention is not limited to the specific vector friendly instruction format 200 except where claimed. For example, the generic vector friendly instruction format 100 contemplates a variety of possible sizes for the various fields, while the specific vector friendly instruction format 200 is shown as having fields of specific sizes. By way of specific example, while the data element width field 164 is illustrated as a one bit field in the specific vector friendly instruction format 200, the invention is not so limited (that is, the generic vector friendly instruction format 100 contemplates other sizes of the data element width field 164).

[0065] The generic vector friendly instruction format 100 includes the following fields listed below in the order illustrated in Figure 2A.

[0066] EVEX Prefix (Bytes 0-3) 202 - is encoded in a four-byte form.

[0067] Format Field 140 (EVEX Byte 0, bits [7:0]) - the first byte (EVEX Byte 0) is the format field 140 and it contains 0x62 (the unique value used for distinguishing the vector friendly instruction format in one embodiment of the invention).

[0068] The second through fourth bytes (EVEX Bytes 1-3) include a number of bit fields providing specific capability.

[0069] REX field 205 (EVEX Byte 1, bits [7-5]) - consists of an EVEX.R bit field (EVEX Byte 1, bit [7] - R), an EVEX.X bit field (EVEX Byte 1, bit [6] - X), and an EVEX.B bit field (EVEX Byte 1, bit [5] - B). The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the corresponding VEX bit fields, and are encoded using 1s complement form, i.e. ZMM0 is encoded as 1111B, ZMM15 is encoded as 0000B. Other fields of the instructions encode the lower three bits of the register indexes as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding EVEX.R, EVEX.X, and EVEX.B.

[0070] REX' field 210 - this is the first part of the REX' field 210 and is the EVEX.R' bit field (EVEX Byte 1, bit [4] - R') that is used to encode either the upper 16 or lower 16 of the extended 32 register set. In one embodiment of the invention, this bit, along with others as indicated below, is stored in bit inverted format to distinguish (in the well-known x86 32-bit mode) from the BOUND instruction, whose real opcode byte is 62, but does not accept in the MOD R/M field (described below) the value of 11 in the MOD field; alternative embodiments of the invention do not store this and the other indicated bits below in the inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R'Rrrr is formed by combining EVEX.R', EVEX.R, and the other RRR from other fields.

[0071] Opcode map field 215 (EVEX Byte 1, bits [3:0] - mmmm) - its content encodes an implied leading opcode byte (0F, 0F 38, or 0F 3A).

[0072] Data element width field 164 (EVEX Byte 2, bit [7] - W) - is represented by the notation EVEX.W.
EVEX.W is used to define the granularity (size) of the datatype (either 32-bit data elements or 64-bit data elements).

[0073] EVEX.vvvv 220 (EVEX Byte 2, bits [6:3] - vvvv) - the role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first source register operand, specified in inverted (1s complement) form and is valid for instructions with 2 or more source operands; 2) EVEX.vvvv encodes the destination register operand, specified in 1s complement form for certain vector shifts; or 3) EVEX.vvvv does not encode any operand, the field is reserved and should contain 1111b. Thus, EVEX.vvvv field 220 encodes the 4 low-order bits of the first source register specifier stored in inverted (1s complement) form. Depending on the instruction, an extra different EVEX bit field is used to extend the specifier size to 32 registers.

[0074] EVEX.U 168 Class field (EVEX Byte 2, bit [2] - U) - If EVEX.U = 0, it indicates class A or EVEX.U0; if EVEX.U = 1, it indicates class B or EVEX.U1.

[0075] Prefix encoding field 225 (EVEX Byte 2, bits [1:0] - pp) - provides additional bits for the base operation field. In addition to providing support for the legacy SSE instructions in the EVEX prefix format, this also has the benefit of compacting the SIMD prefix (rather than requiring a byte to express the SIMD prefix, the EVEX prefix requires only 2 bits). In one embodiment, to support legacy SSE instructions that use a SIMD prefix (66H, F2H, F3H) in both the legacy format and in the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field; and at runtime are expanded into the legacy SIMD prefix prior to being provided to the decoder's PLA (so the PLA can execute both the legacy and EVEX format of these legacy instructions without modification). Although newer instructions could use the EVEX prefix encoding field's content directly as an opcode extension, certain embodiments expand in a similar fashion for consistency but allow for different meanings to be specified by these legacy SIMD prefixes. An alternative embodiment may redesign the PLA to support the 2 bit SIMD prefix encodings, and thus not require the expansion.

[0076] Alpha field 152 (EVEX Byte 3, bit [7] - EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.write mask control, and EVEX.N; also illustrated with α) - as previously described, this field is context specific.

[0077] Beta field 154 (EVEX Byte 3, bits [6:4] - SSS, also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also illustrated with βββ) - as previously described, this field is context specific.

[0078] REX' field 210 - this is the remainder of the REX' field and is the EVEX.V' bit field (EVEX Byte 3, bit [3] - V') that may be used to encode either the upper 16 or lower 16 of the extended 32 register set. This bit is stored in bit inverted format. A value of 1 is used to encode the lower 16 registers. In other words, V'VVVV is formed by combining EVEX.V' and EVEX.vvvv.

[0079] Write mask field 170 (EVEX Byte 3, bits [2:0] - kkk) - its content specifies the index of a register in the write mask registers as previously described. In one embodiment of the invention, the specific value EVEX.kkk=000 has a special behavior implying no write mask is used for the particular instruction (this may be implemented in a variety of ways including the use of a write mask hardwired to all ones or hardware that bypasses the masking hardware).
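The byte and bit assignments in paragraphs [0066] through [0079] can be summarized with a small decoder sketch. This is an illustration of the field layout only; the struct and function names are invented for the example, the inverted fields are returned un-inverted, and it is not an implementation of a claimed decoder:

    #include <cstdint>

    struct EvexFields {
        uint8_t R, X, B, Rp;  // register-extension bits (stored inverted in the prefix)
        uint8_t mmmm;         // opcode map field 215: implied 0F / 0F 38 / 0F 3A
        uint8_t W;            // data element width field 164
        uint8_t vvvv;         // EVEX.vvvv 220 (stored inverted)
        uint8_t U;            // class field 168: 0 = class A, 1 = class B
        uint8_t pp;           // prefix encoding field 225
        uint8_t alpha;        // alpha field 152 (context specific)
        uint8_t beta;         // beta field 154 (context specific)
        uint8_t Vp;           // EVEX.V' (stored inverted)
        uint8_t kkk;          // write mask field 170
    };

    // Decode a 4-byte EVEX prefix; p[0] must be the format byte 0x62.
    bool decode_evex(const uint8_t p[4], EvexFields &f) {
        if (p[0] != 0x62) return false;          // format field 140
        f.R    = ((p[1] >> 7) & 1) ^ 1;          // byte 1, bit 7 (inverted)
        f.X    = ((p[1] >> 6) & 1) ^ 1;          // byte 1, bit 6 (inverted)
        f.B    = ((p[1] >> 5) & 1) ^ 1;          // byte 1, bit 5 (inverted)
        f.Rp   = ((p[1] >> 4) & 1) ^ 1;          // byte 1, bit 4, R' (inverted)
        f.mmmm =  p[1] & 0x0F;                   // byte 1, bits 3:0
        f.W    = (p[2] >> 7) & 1;                // byte 2, bit 7
        f.vvvv = ((p[2] >> 3) & 0x0F) ^ 0x0F;    // byte 2, bits 6:3 (inverted)
        f.U    = (p[2] >> 2) & 1;                // byte 2, bit 2
        f.pp   =  p[2] & 0x03;                   // byte 2, bits 1:0
        f.alpha = (p[3] >> 7) & 1;               // byte 3, bit 7
        f.beta  = (p[3] >> 4) & 0x07;            // byte 3, bits 6:4
        f.Vp    = ((p[3] >> 3) & 1) ^ 1;         // byte 3, bit 3, V' (inverted)
        f.kkk   =  p[3] & 0x07;                  // byte 3, bits 2:0
        return true;
    }

A full 5-bit source register specifier can then be formed as V'vvvv = (Vp << 4) | vvvv, mirroring the combination described in [0078].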
[0080] Real Opcode Field 230 (Byte 4) is also known as the opcode byte. Part of the opcode is specified in this field.

[0081] MOD R/M Field 240 (Byte 5) includes MOD field 242, Reg field 244, and R/M field 246. As previously described, the MOD field's 242 content distinguishes between memory access and non-memory access operations. The role of Reg field 244 can be summarized to two situations: encoding either the destination register operand or a source register operand, or being treated as an opcode extension and not used to encode any instruction operand. The role of R/M field 246 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.

[0082] Scale, Index, Base (SIB) Byte (Byte 6) - As previously described, the scale field's 160 content is used for memory address generation. SIB.xxx 254 and SIB.bbb 256 - the contents of these fields have been previously referred to with regard to the register indexes Xxxx and Bbbb.

[0083] Displacement field 162A (Bytes 7-10) - when MOD field 242 contains 10, bytes 7-10 are the displacement field 162A, and it works the same as the legacy 32-bit displacement (disp32) and works at byte granularity.

[0084] Displacement factor field 162B (Byte 7) - when MOD field 242 contains 01, byte 7 is the displacement factor field 162B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity. Since disp8 is sign extended, it can only address between -128 and 127 byte offsets; in terms of 64 byte cache lines, disp8 uses 8 bits that can be set to only four really useful values -128, -64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field 162B is a reinterpretation of disp8; when using the displacement factor field 162B, the actual displacement is determined by the content of the displacement factor field multiplied by the size of the memory operand access (N). This type of displacement is referred to as disp8*N. This reduces the average instruction length (a single byte is used for the displacement but with a much greater range). Such compressed displacement is based on the assumption that the effective displacement is a multiple of the granularity of the memory access, and hence, the redundant low-order bits of the address offset do not need to be encoded. In other words, the displacement factor field 162B substitutes for the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 162B is encoded the same way as an x86 instruction set 8-bit displacement (so no changes in the ModRM/SIB encoding rules) with the only exception that disp8 is overloaded to disp8*N. In other words, there are no changes in the encoding rules or encoding lengths but only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset).

[0085] Immediate field 172 operates as previously described.
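A short sketch of the disp8*N interpretation described in [0084] (illustrative only; the helper name is invented for the example):

    #include <cstdint>
    #include <cstdio>

    // Compressed-displacement interpretation: the encoded 8-bit value is
    // scaled by the memory operand access size N to form the byte offset.
    int64_t disp8xN(int8_t encoded_disp8, int64_t n) {
        return static_cast<int64_t>(encoded_disp8) * n;
    }

    int main() {
        // For a 64-byte (full ZMM) memory access, one signed byte now
        // spans -8192..8128 bytes instead of disp8's -128..127.
        printf("%lld\n", (long long)disp8xN(1, 64));     // 64
        printf("%lld\n", (long long)disp8xN(-128, 64));  // -8192
        printf("%lld\n", (long long)disp8xN(127, 64));   // 8128
        return 0;
    }

As the paragraph notes, only the hardware's interpretation changes; the encoded byte itself is unchanged from a legacy disp8.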
Full Opcode Field

[0086] Figure 2B is a block diagram illustrating the fields of the specific vector friendly instruction format 200 that make up the full opcode field 174 according to one embodiment of the invention. Specifically, the full opcode field 174 includes the format field 140, the base operation field 142, and the data element width (W) field 164. The base operation field 142 includes the prefix encoding field 225, the opcode map field 215, and the real opcode field 230.

Register Index Field

[0087] Figure 2C is a block diagram illustrating the fields of the specific vector friendly instruction format 200 that make up the register index field 144 according to one embodiment of the invention. Specifically, the register index field 144 includes the REX field 205, the REX' field 210, the MODR/M.reg field 244, the MODR/M.r/m field 246, the VVVV field 220, the xxx field 254, and the bbb field 256.

Augmentation Operation Field

[0088] Figure 2D is a block diagram illustrating the fields of the specific vector friendly instruction format 200 that make up the augmentation operation field 150 according to one embodiment of the invention. When the class (U) field 168 contains 0, it signifies EVEX.U0 (class A 168A); when it contains 1, it signifies EVEX.U1 (class B 168B). When U=0 and the MOD field 242 contains 11 (signifying a no memory access operation), the alpha field 152 (EVEX byte 3, bit [7] - EH) is interpreted as the rs field 152A. When the rs field 152A contains a 1 (round 152A.1), the beta field 154 (EVEX byte 3, bits [6:4] - SSS) is interpreted as the round control field 154A. The round control field 154A includes a one bit SAE field 156 and a two bit round operation field 158. When the rs field 152A contains a 0 (data transform 152A.2), the beta field 154 (EVEX byte 3, bits [6:4] - SSS) is interpreted as a three bit data transform field 154B. When U=0 and the MOD field 242 contains 00, 01, or 10 (signifying a memory access operation), the alpha field 152 (EVEX byte 3, bit [7] - EH) is interpreted as the eviction hint (EH) field 152B and the beta field 154 (EVEX byte 3, bits [6:4] - SSS) is interpreted as a three bit data manipulation field 154C.

[0089] When U=1, the alpha field 152 (EVEX byte 3, bit [7] - EH) is interpreted as the write mask control (Z) field 152C. When U=1 and the MOD field 242 contains 11 (signifying a no memory access operation), part of the beta field 154 (EVEX byte 3, bit [4] - S0) is interpreted as the RL field 157A; when it contains a 1 (round 157A.1) the rest of the beta field 154 (EVEX byte 3, bits [6-5] - S2-1) is interpreted as the round operation field 159A, while when the RL field 157A contains a 0 (VSIZE 157A.2) the rest of the beta field 154 (EVEX byte 3, bits [6-5] - S2-1) is interpreted as the vector length field 159B (EVEX byte 3, bits [6-5] - L1-0). When U=1 and the MOD field 242 contains 00, 01, or 10 (signifying a memory access operation), the beta field 154 (EVEX byte 3, bits [6:4] - SSS) is interpreted as the vector length field 159B (EVEX byte 3, bits [6-5] - L1-0) and the broadcast field 157B (EVEX byte 3, bit [4] - B).

C. Exemplary Register Architecture

[0090] Figure 3 is a block diagram of a register architecture 300 according to one embodiment of the invention. In the embodiment illustrated, there are 32 vector registers 310 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15.
The specific vector friendly instruction format 200 operates on this overlaid register file as illustrated in the below tables.

[0091] In other words, the vector length field 159B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length; and instruction templates without the vector length field 159B operate on the maximum vector length. Further, in one embodiment, the class B instruction templates of the specific vector friendly instruction format 200 operate on packed or scalar single/double-precision floating point data and packed or scalar integer data. Scalar operations are operations performed on the lowest order data element position in a zmm/ymm/xmm register; the higher order data element positions are either left the same as they were prior to the instruction or zeroed depending on the embodiment.

[0092] Write mask registers 315 - in the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size. In an alternate embodiment, the write mask registers 315 are 16 bits in size. As previously described, in one embodiment of the invention, the vector mask register k0 cannot be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of 0xFFFF, effectively disabling write masking for that instruction.

[0093] General-purpose registers 325 - in the embodiment illustrated, there are sixteen 64-bit general-purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.

[0094] Scalar floating point stack register file (x87 stack) 345, on which is aliased the MMX packed integer flat register file 350 - in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating point data using the x87 instruction set extension; while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.

[0095] Alternative embodiments of the invention may use wider or narrower registers. Additionally, alternative embodiments of the invention may use more, fewer, or different register files and registers.

D. Exemplary Core Architectures, Processors, and Computer Architectures

[0096] Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing.
Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.

[0097] Figure 4A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. Figure 4B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in Figures 4A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

[0098] In Figure 4A, a processor pipeline 400 includes a fetch stage 402, a length decode stage 404, a decode stage 406, an allocation stage 408, a renaming stage 410, a scheduling (also known as a dispatch or issue) stage 412, a register read/memory read stage 414, an execute stage 416, a write back/memory write stage 418, an exception handling stage 422, and a commit stage 424.

[0099] Figure 4B shows processor core 490 including a front end unit 430 coupled to an execution engine unit 450, and both are coupled to a memory unit 470. The core 490 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 490 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.

[00100] The front end unit 430 includes a branch prediction unit 432 coupled to an instruction cache unit 434, which is coupled to an instruction translation lookaside buffer (TLB) 436, which is coupled to an instruction fetch unit 438, which is coupled to a decode unit 440. The decode unit 440 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 440 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc.
In one embodiment, the core 490 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 440 or otherwise within the front end unit 430). The decode unit 440 is coupled to a rename/allocator unit 452 in the execution engine unit 450.

[00101] The execution engine unit 450 includes the rename/allocator unit 452 coupled to a retirement unit 454 and a set of one or more scheduler unit(s) 456. The scheduler unit(s) 456 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 456 is coupled to the physical register file(s) unit(s) 458. Each of the physical register file(s) units 458 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 458 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s) 458 is overlapped by the retirement unit 454 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 454 and the physical register file(s) unit(s) 458 are coupled to the execution cluster(s) 460. The execution cluster(s) 460 includes a set of one or more execution units 462 and a set of one or more memory access units 464. The execution units 462 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 456, physical register file(s) unit(s) 458, and execution cluster(s) 460 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 464). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

[00102] The set of memory access units 464 is coupled to the memory unit 470, which includes a data TLB unit 472 coupled to a data cache unit 474 coupled to a level 2 (L2) cache unit 476.
In one exemplary embodiment, the memory access units 464 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 472 in the memory unit 470. The instruction cache unit 434 is further coupled to a level 2 (L2) cache unit 476 in the memory unit 470. The L2 cache unit 476 is coupled to one or more other levels of cache and eventually to a main memory.

[00103] By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 400 as follows: 1) the instruction fetch 438 performs the fetch and length decoding stages 402 and 404; 2) the decode unit 440 performs the decode stage 406; 3) the rename/allocator unit 452 performs the allocation stage 408 and renaming stage 410; 4) the scheduler unit(s) 456 performs the schedule stage 412; 5) the physical register file(s) unit(s) 458 and the memory unit 470 perform the register read/memory read stage 414, and the execution cluster 460 performs the execute stage 416; 6) the memory unit 470 and the physical register file(s) unit(s) 458 perform the write back/memory write stage 418; 7) various units may be involved in the exception handling stage 422; and 8) the retirement unit 454 and the physical register file(s) unit(s) 458 perform the commit stage 424.

[00104] The core 490 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein. In one embodiment, the core 490 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

[00105] It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).

[00106] While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 434/474 and a shared L2 cache unit 476, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

[00107] Figures 5A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip.
The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.

[00108] Figure 5A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 502 and with its local subset of the Level 2 (L2) cache 504, according to embodiments of the invention. In one embodiment, an instruction decoder 500 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 506 allows low-latency accesses to cache memory into the scalar and vector units. While in one embodiment (to simplify the design), a scalar unit 508 and a vector unit 510 use separate register sets (respectively, scalar registers 512 and vector registers 514) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 506, alternative embodiments of the invention may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).

[00109] The local subset of the L2 cache 504 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 504. Data read by a processor core is stored in its L2 cache subset 504 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 504 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches and other logic blocks to communicate with each other within the chip. Each ring data-path is 1012-bits wide per direction.

[00110] Figure 5B is an expanded view of part of the processor core in Figure 5A according to embodiments of the invention. Figure 5B includes an L1 data cache 506A, part of the L1 cache 506, as well as more detail regarding the vector unit 510 and the vector registers 514. Specifically, the vector unit 510 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 528), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 520, numeric conversion with numeric convert units 522A-B, and replication with replication unit 524 on the memory input. Write mask registers 526 allow predicating resulting vector writes.

[00111] Figure 6 is a block diagram of a processor 600 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention. The solid lined boxes in Figure 6 illustrate a processor 600 with a single core 602A, a system agent 610, and a set of one or more bus controller units 616, while the optional addition of the dashed lined boxes illustrates an alternative processor 600 with multiple cores 602A-N, a set of one or more integrated memory controller unit(s) 614 in the system agent unit 610, and special purpose logic 608.
[00112] Thus, different implementations of the processor 600 may include: 1) a CPU with the special purpose logic 608 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 602A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 602A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput) computing; and 3) a coprocessor with the cores 602A-N being a large number of general purpose in-order cores. Thus, the processor 600 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips. The processor 600 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

[00113] The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 606, and external memory (not shown) coupled to the set of integrated memory controller units 614. The set of shared cache units 606 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 612 interconnects the integrated graphics logic 608, the set of shared cache units 606, and the system agent unit 610/integrated memory controller unit(s) 614, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 606 and cores 602A-N.

[00114] In some embodiments, one or more of the cores 602A-N are capable of multi-threading. The system agent 610 includes those components coordinating and operating cores 602A-N. The system agent unit 610 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 602A-N and the integrated graphics logic 608. The display unit is for driving one or more externally connected displays.

[00115] The cores 602A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 602A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

[00116] Figures 7-10 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices, are also suitable.
In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are suitable.

[00117] Referring now to Figure 7, shown is a block diagram of a system 700 in accordance with one embodiment of the present invention. The system 700 may include one or more processors 710, 715, which are coupled to a controller hub 720. In one embodiment, the controller hub 720 includes a graphics memory controller hub (GMCH) 790 and an Input/Output Hub (IOH) 750 (which may be on separate chips); the GMCH 790 includes memory and graphics controllers to which are coupled memory 740 and a coprocessor 745; the IOH 750 couples input/output (I/O) devices 760 to the GMCH 790. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 740 and the coprocessor 745 are coupled directly to the processor 710, and the controller hub 720 is in a single chip with the IOH 750.

[00118] The optional nature of additional processors 715 is denoted in Figure 7 with broken lines. Each processor 710, 715 may include one or more of the processing cores described herein and may be some version of the processor 600.

[00119] The memory 740 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 720 communicates with the processor(s) 710, 715 via a multi-drop bus, such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or similar connection 795.

[00120] In one embodiment, the coprocessor 745 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 720 may include an integrated graphics accelerator.

[00121] There can be a variety of differences between the physical resources 710, 715 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.

[00122] In one embodiment, the processor 710 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 710 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 745. Accordingly, the processor 710 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect, to coprocessor 745. Coprocessor(s) 745 accept and execute the received coprocessor instructions.

[00123] Referring now to Figure 8, shown is a block diagram of a first more specific exemplary system 800 in accordance with an embodiment of the present invention. As shown in Figure 8, multiprocessor system 800 is a point-to-point interconnect system, and includes a first processor 870 and a second processor 880 coupled via a point-to-point interconnect 850. Each of processors 870 and 880 may be some version of the processor 600. In one embodiment of the invention, processors 870 and 880 are respectively processors 710 and 715, while coprocessor 838 is coprocessor 745.
In another embodiment, processors 870 and 880 are respectively processor 710 and coprocessor 745.

[00124] Processors 870 and 880 are shown including integrated memory controller (IMC) units 872 and 882, respectively. Processor 870 also includes as part of its bus controller units point-to-point (P-P) interfaces 876 and 878; similarly, second processor 880 includes P-P interfaces 886 and 888. Processors 870, 880 may exchange information via a point-to-point (P-P) interface 850 using P-P interface circuits 878, 888. As shown in Figure 8, IMCs 872 and 882 couple the processors to respective memories, namely a memory 832 and a memory 834, which may be portions of main memory locally attached to the respective processors.

[00125] Processors 870, 880 may each exchange information with a chipset 890 via individual P-P interfaces 852, 854 using point-to-point interface circuits 876, 894, 886, 898. Chipset 890 may optionally exchange information with the coprocessor 838 via a high-performance interface 839. In one embodiment, the coprocessor 838 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.

[00126] A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

[00127] Chipset 890 may be coupled to a first bus 816 via an interface 896. In one embodiment, first bus 816 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.

[00128] As shown in Figure 8, various I/O devices 814 may be coupled to first bus 816, along with a bus bridge 818 which couples first bus 816 to a second bus 820. In one embodiment, one or more additional processor(s) 815, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 816. In one embodiment, second bus 820 may be a low pin count (LPC) bus. Various devices may be coupled to a second bus 820 including, for example, a keyboard and/or mouse 822, communication devices 827 and a storage unit 828 such as a disk drive or other mass storage device which may include instructions/code and data 830, in one embodiment. Further, an audio I/O 824 may be coupled to the second bus 820. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 8, a system may implement a multi-drop bus or other such architecture.

[00129] Referring now to Figure 9, shown is a block diagram of a second more specific exemplary system 900 in accordance with an embodiment of the present invention. Like elements in Figures 8 and 9 bear like reference numerals, and certain aspects of Figure 8 have been omitted from Figure 9 in order to avoid obscuring other aspects of Figure 9.

[00130] Figure 9 illustrates that the processors 870, 880 may include integrated memory and I/O control logic ("CL") 872 and 882, respectively. Thus, the CL 872, 882 include integrated memory controller units and include I/O control logic.
Figure 9 illustrates that not only are the memories 832, 834 coupled to the CL 872, 882, but also that I/O devices 914 are also coupled to the control logic 872, 882. Legacy I/O devices 915 are coupled to the chipset 890.

[00131] Referring now to Figure 10, shown is a block diagram of a SoC 1000 in accordance with an embodiment of the present invention. Similar elements in Figure 6 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In Figure 10, an interconnect unit(s) 1002 is coupled to: an application processor 1010 which includes a set of one or more cores 602A-N and shared cache unit(s) 606; a system agent unit 610; a bus controller unit(s) 616; an integrated memory controller unit(s) 614; a set of one or more coprocessors 1020 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1030; a direct memory access (DMA) unit 1032; and a display unit 1040 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 1020 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.

[00132] Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

[00133] Program code, such as code 830 illustrated in Figure 8, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

[00134] The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

[00135] One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.
[00136] Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.

[00137] Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.

[00138] In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

[00139] Figure 11 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 11 shows that a program in a high level language 1102 may be compiled using an x86 compiler 1104 to generate x86 binary code 1106 that may be natively executed by a processor with at least one x86 instruction set core 1116. The processor with at least one x86 instruction set core 1116 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 1104 represents a compiler that is operable to generate x86 binary code 1106 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 1116.
Similarly, Figure 11 shows that the program in the high level language 1102 may be compiled using an alternative instruction set compiler 1108 to generate alternative instruction set binary code 1110 that may be natively executed by a processor without at least one x86 instruction set core 1114 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 1112 is used to convert the x86 binary code 1106 into code that may be natively executed by the processor without an x86 instruction set core 1114. This converted code is not likely to be the same as the alternative instruction set binary code 1110 because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1112 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1106.

METHOD AND APPARATUS FOR PERFORMING REDUCTION OPERATIONS ON A SET OF VECTOR ELEMENTS

[00140] "Sparse updates" are an important algorithmic pattern for which vectorization would be beneficial. Here, a read-modify-write operation may be performed on an indirectly addressed memory location (e.g., load A[B[i]], add something to it, and store the value back in A[B[i]]). Vectorizing this type of operation involves performing a gather-modify-scatter operation. By way of example, such an operation may involve performing 16 indirect loads of the form A[B[i]] for 16 consecutive values of i via a gather operation, performing a single instruction multiple data (SIMD) computation, and scattering the new values back to memory. However, this vectorization assumes that a single gather/scatter instruction will access each memory location no more than once. If, for example, two consecutive values of B[i] are the same, then the read-modify-write for the second one is dependent on the first. As such, doing these simultaneously in a SIMD fashion violates these dependencies and may result in an incorrect result.

[00141] One embodiment of the invention utilizes a conflict detection instruction such as VPCONFLICT that compares the elements within a vector register to detect duplicates. In particular, the instruction may test each element of its vector register input for equality with all earlier elements of that input (e.g., all elements closer to the least significant bit (LSB)), and outputs the results of these comparisons as a set of bit vectors. The conflict detection instruction provides a way to determine whether or not an element has a data dependence that involves other elements within the same SIMD register.

[00142] Figure 12 illustrates an example with an input vector register 1220 comprising a set of data elements 1200-1203 and an output register 1230 to store the results 1210-1213 of the conflict detection instruction. In operation, the conflict detection instruction compares each of the data elements 1200-1203 to the data elements that precede it. The first element 1200 is not compared with another element (because no elements precede it) and the result is stored as 0000 in the first element in the output vector register 1230, indicating no conflict.
The second element 1201 is compared with the first element 1200. Because the elements are not equal, the result is also 0000 (no conflicts), stored in the second location 1211 of the output vector register 1230. Because the third element 1202 is equal to the first element 1200, the result of 0001 is stored in the third output location 1212 of the output vector register 1230. In one embodiment, the 0001 is a binary value and the 1 in the first position of the result indicates that the third element 1202 is equal to the first element 1200 of the input vector register 1220. Finally, because the fourth element 1203 is equal to both the first element 1200 and the third element 1202, a value of 0101 is set in the fourth location 1213 of the output vector register 1230 (with the first 1 in the first bit position indicating the equality with the first data element 1200 and the second 1 in the third bit position indicating the equality with the third data element 1202).

[00143] The ability to identify duplicate values within separate elements of a SIMD register allows scalar code to be vectorized in cases where possible data dependencies across SIMD register elements might otherwise prevent vectorization. For example, dependencies can be enforced by determining a subset of elements with unique indices, computing those in SIMD fashion, and then looping back to retry the remaining elements, thus serializing the computation on elements with the same index. In the above example, the first two elements would be computed simultaneously, followed by the third element by itself (retrieving the input value from the output value of the first element), and the last element by itself (retrieving the input value from the output value of the third element). This approach is represented in the following example loop which performs an operation ("Compute") on an array of N data elements and is vectorized to operate on SIMD_WIDTH elements per iteration:

for (i=0; i<N; i+=SIMD_WIDTH) {
    indices = vload(&B[i]);
    comparisons = vpconflict(indices);
    permute_indices = vsub(all_31s, vplzcnt(comparisons));
    // do gather + compute on all elements, assuming no dependences
    data = Gather_Compute(indices);
    // generate mask with '1' for elements with any dependence
    elements_left_mask = vptestm(comparisons, all_ones);
    // if any dependences, recompute corresponding elements of "data"
    while (elements_left_mask != 0) {
        do_these = Compute_Mask_of_Unique_Remaining_Indices(comparisons, elements_left_mask);
        data = vperm(data, permute_indices, do_these);
        data = Compute(data, do_these);
        elements_left_mask ^= do_these;
    }
    scatter(indices, data);
}

[00144] A discussion of the Compute_Mask_of_Unique_Remaining_Indices function has been omitted for brevity.

[00145] While the above code example is vectorized, the vectorized version of the loop can sometimes result in lower performance than its scalar equivalent, making it hard to predict whether vectorization will be beneficial or not. In particular, the performance boost provided by vectorization is dependent on how many elements in the index SIMD register ('indices') have duplicate values. This approach works well when there are a few instances of any given index - i.e., when the common case is to have a few iterations of the while loop. However, when there are many instances of the same index, the execution time may be worse than scalar execution because the maximum number of 'while' loop iterations is equal to the SIMD width.
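As a reference for the behavior described in [00141]-[00142], the following scalar model (an illustration for exposition; not the claimed hardware) reproduces the conflict bit vectors for the four-element example of Figure 12:

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Scalar model of a VPCONFLICT-style operation: output lane k gets one
    // bit per earlier lane j (j < k), set when in[j] == in[k].
    std::vector<uint32_t> conflict(const std::vector<uint32_t> &in) {
        std::vector<uint32_t> out(in.size(), 0);
        for (size_t k = 0; k < in.size(); ++k)
            for (size_t j = 0; j < k; ++j)
                if (in[j] == in[k]) out[k] |= 1u << j;
        return out;
    }

    int main() {
        // Figure 12 example: elements 0, 2, and 3 hold the same value.
        std::vector<uint32_t> v = {7, 3, 7, 7};
        for (uint32_t m : conflict(v)) {
            for (int b = 3; b >= 0; --b) putchar(((m >> b) & 1) ? '1' : '0');
            putchar(' ');
        }
        putchar('\n');  // prints: 0000 0000 0001 0101
        return 0;
    }

The printed masks match the 0000, 0000, 0001, 0101 values walked through above.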
[00146] To address these issues, the embodiments of the invention described below include techniques to perform multiple tree reductions in parallel, one reduction per unique index value, on the elements within a SIMD register. This approach has at most log2(SIMD_WIDTH) compute steps. In particular, certain embodiments of the invention are capable of performing an arbitrary number of binary tree reductions in parallel across a set of values having an arbitrary ordering within a SIMD register. The information-rich output of conflict detection instructions such as VPCONFLICT may be used to iteratively identify and combine the partial results from pairs of SIMD elements with the same index. A new instruction, VPOPCNT, may be used for this approach, as it allows each of the elements that share an index to be ordered. One embodiment of the VPOPCNT instruction counts the number of set bits (i.e., 1's) in each SIMD element.

[00147] Within a single SIMD register, there may be multiple values that need to be combined via one or more reduction patterns. For example, an application may have a set of values { a0, b0, a1, a2, b1, a3, a4, b2 } within a single SIMD register that need to be combined so that all of the 'a' values are summed and all of the 'b' values are summed, yielding just two values { a0+a1+a2+a3+a4, b0+b1+b2 }. While there are multiple ways to do this, the most efficient way given a reduction operation with only two inputs (e.g., an add instruction in a processor) is to perform multiple binary tree reductions in parallel across the elements of the SIMD register.

[00148] The embodiments of the invention address the problem of performing multiple in-register reductions across the lanes of a vector register without having to either (A) serialize the reduction operations for each of the independent reductions or (B) count the number of instances of each unique index value within an associated "index" vector. This may be accomplished by generating a first output which identifies the independent reductions and generating a second output which may be used to identify left vs. right children in the binary reduction trees, as described in detail below. In one embodiment, the first output is generated using the VPCONFLICT instruction and the second output is generated using the VPOPCNT instruction.

[00149] As shown in Figure 13, one embodiment of the SIMD tree reduction logic 1305 takes two vector registers as input: a "value" vector register 1302 containing the values to be reduced (e.g. summed) and an "index" vector register 1301 indicating which values (or lanes) in the "value" vector are associated with one another. If two lanes in the "index" vector register 1301 have equal values, then they are involved in the same tree reduction. If two lanes in the "index" vector register 1301 have different values, then they are involved in separate reductions. The output of the SIMD tree reduction logic 1305 is an accumulation vector register 1303 containing the result of each reduction in the left-most lane (i.e. closest to the most significant byte) containing an instance of the index value associated with that reduction.
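The role VPOPCNT plays in pairing off tree siblings can be seen with a small scalar model (illustrative only; it uses the GCC/Clang __builtin_popcount builtin): the popcount of each lane's conflict mask gives that lane a sequence number among the lanes sharing its index, and the odd-numbered lanes are exactly the ones that absorb a sibling in one tree-reduction step.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Per-lane popcount of the conflict masks (a VPOPCNT-style model):
    // lane k's count is its ordinal among earlier lanes with the same index.
    std::vector<uint32_t> popcnt_lanes(const std::vector<uint32_t> &vc) {
        std::vector<uint32_t> pc(vc.size());
        for (size_t k = 0; k < vc.size(); ++k)
            pc[k] = (uint32_t)__builtin_popcount(vc[k]);
        return pc;
    }

    int main() {
        // Conflict masks for indices {A, B, A, A} (LSB lane first): lanes 2
        // and 3 duplicate lane 0, so their masks are 0b0001 and 0b0101.
        std::vector<uint32_t> vc = {0b0000, 0b0000, 0b0001, 0b0101};
        std::vector<uint32_t> pc = popcnt_lanes(vc);  // {0, 0, 1, 2}
        for (size_t k = 0; k < pc.size(); ++k)
            printf("lane %zu: seq #%u%s\n", k, pc[k],
                   (pc[k] & 1) ? " (odd: combines with a sibling this step)" : "");
        return 0;
    }

Within each group of equal indices, the sequence numbers 0, 1, 2, ... order the duplicates, which is the property [00146] relies on.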
For example, in an alternate embodiment, the least significant bits/bytes are to the "left" and the most significant bits/bytes are to the "right." For this embodiment, any reference to "left" or "leftmost" in the present disclosure may be replaced with "right" or "rightmost" and vice versa.

[00151] In the example in Figure 13, the values A, B, C, and D within the index vector register 1301 represent arbitrary (unique) integer values. Figure 13 also illustrates how, with each iteration (iterations 0-2 are shown), different sets of the values from the value vector 1302 are summed by the SIMD tree reduction logic to perform the reduction operation. For example, each instance of A in the index vector register 1301 identifies the set of values in the value vector register to be reduced: d15, d14, d8, d3, and d0. After the final iteration, these values have been summed to form a single value, α, which is stored in the left-most data element position of the accumulation vector 1303 (consistent with the position of the left-most A in the index vector). The value for β is formed in the same manner using values associated with each instance of B from the index vector (d13, d11, d10, d9, d6, d5, d4, and d1), and the final value for β is stored at the third data element position from the left in the accumulation vector register 1303 (consistent with the position of the left-most B in the index vector).

[00152] The following pseudocode represents the in-register tree reductions which may be performed by the SIMD tree reduction logic 1305 based on index values:

accum_vec = value_vec;            // only required if the loop should not clobber value_vec
vc_vec = vpconflict(index_vec);   // detect duplicates that are earlier in index_vec
while (!all_zeros(vc_vec)) {      // enter loop if any duplicate indices haven't been reduced
    pc_vec = vpopcnt(vc_vec);     // assign elements w/ same index value unique sequence #s
    eo_mask = vptestm(pc_vec, vpbroadcastm(0x1));
                                  // =1 for odd elements w/ same index (circled values in Figure 15)
    i_vec{eo_mask} = vpsub(vpbroadcast(31), vplzcnt(vc_vec));
                                  // index of sibling in tree
    tmp_vec = vpermd(accum_vec, i_vec);
                                  // move right sibling into same lane as left
    accum_vec{eo_mask} = VEC_OP(tmp_vec, accum_vec);
                                  // apply reduction on siblings, storing result in left sibling
    vc_vec = vpand(vc_vec, vpbroadcastm(eo_mask));
                                  // clear conflict bits for reduced elements
}

[00153] In operation, the vector register "value_vec" (the value vector register 1302) contains the values to be reduced, and the vector register "index_vec" (the index vector register 1301) contains the indexes or associations of those values. For example, in one embodiment, equal values within "index_vec" means that the corresponding values in "value_vec" belong to the same reduction. The VEC_OP function represents any operation that would normally be used in a reduction, which is typically a commutative and associative mathematical operation such as integer addition. Left-side values with brackets (e.g., "i_vec{eo_mask}") represent vector operations performed under mask. For the "i_vec{eo_mask}" operation, any inactive lanes should be zeroed. For the "accum_vec{eo_mask}" operation, any inactive lanes should retain the previous value of "accum_vec."
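The pseudocode above maps directly onto a scalar model. The following C sketch is for illustration only, not the instruction-level implementation; it assumes 32-bit index lanes, addition as VEC_OP, and the GCC/Clang __builtin_popcount and __builtin_clz builtins standing in for the per-lane VPOPCNT and VPLZCNT operations:

#include <stdint.h>

#define SIMD_WIDTH 16

/* Scalar model of the in-register tree reduction pseudocode above.
 * Lane 0 is the least significant lane; VEC_OP is integer addition. */
void tree_reduce(const uint32_t index_vec[SIMD_WIDTH],
                 const int32_t value_vec[SIMD_WIDTH],
                 int32_t accum_vec[SIMD_WIDTH])
{
    uint32_t vc_vec[SIMD_WIDTH];

    for (int i = 0; i < SIMD_WIDTH; i++)
        accum_vec[i] = value_vec[i];

    /* vpconflict: bit j of vc_vec[i] is set if index_vec[j] == index_vec[i], j < i */
    for (int i = 0; i < SIMD_WIDTH; i++) {
        vc_vec[i] = 0;
        for (int j = 0; j < i; j++)
            if (index_vec[j] == index_vec[i])
                vc_vec[i] |= 1u << j;
    }

    for (;;) {
        uint32_t eo_mask = 0, any = 0;
        int32_t tmp_vec[SIMD_WIDTH];
        int src[SIMD_WIDTH];

        for (int i = 0; i < SIMD_WIDTH; i++)
            any |= vc_vec[i];
        if (any == 0)
            break;                         /* all_zeros(vc_vec): reduction complete */

        /* vpopcnt + test against 0x1: an odd conflict count marks a left child */
        for (int i = 0; i < SIMD_WIDTH; i++)
            if (__builtin_popcount(vc_vec[i]) & 1)
                eo_mask |= 1u << i;

        /* 31 - lzcnt: lane of the nearest right sibling (vc_vec[i] != 0 here) */
        for (int i = 0; i < SIMD_WIDTH; i++)
            src[i] = ((eo_mask >> i) & 1) ? 31 - __builtin_clz(vc_vec[i]) : i;

        /* vpermd: read the right sibling's partial result, then reduce under mask */
        for (int i = 0; i < SIMD_WIDTH; i++)
            tmp_vec[i] = accum_vec[src[i]];
        for (int i = 0; i < SIMD_WIDTH; i++)
            if ((eo_mask >> i) & 1)
                accum_vec[i] += tmp_vec[i];

        /* clear conflict bits that point at consumed right siblings */
        for (int i = 0; i < SIMD_WIDTH; i++)
            vc_vec[i] &= eo_mask;
    }
}

Run on the Figure 13 inputs, this model performs three passes of the outer loop and leaves the completed sums in the left-most lane holding each index value, matching the iteration-by-iteration trace that follows.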
[00154] Once completed, the "accum_vec" vector contains the results of all the reductions that occurred in parallel, one for each unique value contained in "index_vec." The result of each reduction will be in the left-most lane (closest to the MSB) of the "accum_vec" register 1303 that had an index value associated with that reduction in "index_vec" (as illustrated in Figure 13).

[00155] In circumstances where all values in the "index" vector are unique (i.e., a "conflict-free" case) the cost of these techniques is fairly minimal (the cost of VPCONFLICT, the initial "while" loop condition test which will be false, and the loopback branch). In the case where all values in the "index" vector are the same (i.e., the "most conflicted" case), these techniques will iterate log2(N) times, where N is the vector width. This is in contrast to prior implementations mentioned above which would instead execute N iterations because each reduction is effectively serialized (e.g., accumulating one value/lane at a time in each reduction). In general, the embodiments of the invention execute O(log2(N)) iterations to perform an arbitrary number of reductions in parallel across the "value" vector 1302, where N is the number of instances of the value in the "index" vector 1301 that has the most instances. For example, in Figure 13, the value "B" has the most instances in the "index" vector, with a total of N=8 instances (there are 5 instances of A, 1 instance of C, and 2 instances of D). For this example, the techniques described herein would iterate 3 times (log2(N)), while the previous algorithm would iterate 8 times (N).

[00156] A specific example will now be described with respect to Figures 14 and 15. For clarity, this detailed example execution follows the example shown in Figure 13. As used herein, the least significant bit (LSB) and least significant lane (LSL) are the rightmost values shown (e.g., vector register = { lane 15, lane 14, ..., lane 0 }). For mask values, underscores are used to group bits visually, for clarity.

[00157] Input values along with the result of the first conflict detection operation (e.g., VPCONFLICT) are as follows, where A, B, C, and D represent unique and arbitrary integer values, and d0 through d15 represent the values involved in the reductions:

index_vec = { A, A, B, C, B, B, B, A, D, B, B, B, A, D, B, A }
value_vec = { d15, d14, d13, d12, d11, d10, d9, d8, d7, d6, d5, d4, d3, d2, d1, d0 }
accum_vec = value_vec;   # only required if the loop should not clobber the value_vec register
vc_vec = { 0x4109, 0x0109, 0x0E72, 0x0000, 0x0672, 0x0272, 0x0072, 0x0009,
           0x0004, 0x0032, 0x0012, 0x0002, 0x0001, 0x0000, 0x0000, 0x0000 }

[00158] Figure 14 illustrates the conflict detection operations (e.g., implemented with VPCONFLICT) that create the initial "vc_vec" value within output vector register 1402. In the illustrated embodiment, the output vector register 1402 stores 16 data elements, each associated with one of the index data elements stored within the index data register, with the value of the element representing the earlier conflicts associated with the corresponding lane. As mentioned above, each element in the index vector register 1301 is compared with all of the other elements closer to the least significant lane/bit. Thus, an index data element in position #4 (a B in the example) is compared with data elements in position #3 (A), position #2 (D), position #1 (B), and position #0 (A). If the data element is equal to any of the data elements closer to the least significant lane, then a corresponding bit is set within the output vector register 1402.
So, for example, the second B from the left in the index vector register 1301 generates the output 11001110010, with 1s indicating the positions of the other Bs in the index vector register 1301. This value is then stored in the output vector register 1402 (represented in the example by hexadecimal value 0x0672) at a location corresponding to the location of the B for which the comparisons are performed, as illustrated. Similar operations are performed for each index value stored in the index vector register 1301.

[00159] Next, the "while" loop set forth above is iterated as long as there is at least one bit set in the "vc_vec" value in the output vector register 1402. For the sake of the illustrated example, the reduction operation is addition (e.g., VEC_OP = vpadd). Consequently, the results of iteration 0 are as follows:

pc_vec = { 4, 3, 7, 0, 6, 5, 4, 2, 1, 3, 2, 1, 1, 0, 0, 0 }
eo_mask = 0110_0100_1101_1000   # individual bits, underscores used for clarity
i_vec = { 0, 8, 11, 0, 0, 9, 0, 0, 2, 5, 0, 1, 0, 0, 0, 0 }
tmp_vec = { d0, d8, d11, d0, d0, d9, d0, d0, d2, d5, d0, d1, d0, d0, d0, d0 }
accum_vec = { d15, d14+d8, d13+d11, d12, d11, d10+d9, d9, d8,
              d7+d2, d6+d5, d5, d4+d1, d3+d0, d2, d1, d0 }
vc_vec = { 0x4008, 0x0008, 0x0450, 0x0000, 0x0450, 0x0050, 0x0050, 0x0008,
           0x0000, 0x0010, 0x0010, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000 }

[00160] Figure 15 illustrates how the pc_vec values are determined for iteration 0 and stored as data elements within a vector register 1501. In particular, each data element in the pc_vec vector register 1501 corresponds to an index in the index vector register 1301 and has a value equal to the number of instances of the index value which are stored in the index vector register 1301 closer to the least significant lane/bit. For example, the left-most value of 4 in the pc_vec vector register 1501 is associated with the left-most instance of index A in the index vector register 1301 and indicates that there are 4 other instances of index A in the index vector register 1301 (i.e., to the right of the left-most instance of A). Similarly, the value of 7 in the pc_vec vector register 1501 is associated with the instance of index B located in a corresponding position in the index vector register (i.e., 2 positions from the left in the illustrated example). The value of 7 indicates that there are 7 instances of index B stored to the right in the index vector register 1301.

[00161] In addition, Figure 15 illustrates how the bits within the eo_mask register 1502 are updated. In particular, a bit associated with each index value is set to 1 to indicate an odd number of other instances of that index value to the right within the index vector register 1301. Thus, for a given index value, the bits associated with that index value will alternate between 1 and 0 within the eo_mask register 1502.
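The pc_vec and eo_mask values shown for iteration 0 can be reproduced from the initial vc_vec contents with a few lines of C. This is a sketch assuming GCC/Clang builtins; the array is written most significant lane first, matching the notation above:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* vc_vec after the initial conflict detection, lane 15 down to lane 0 */
    const uint32_t vc_vec[16] = {
        0x4109, 0x0109, 0x0E72, 0x0000, 0x0672, 0x0272, 0x0072, 0x0009,
        0x0004, 0x0032, 0x0012, 0x0002, 0x0001, 0x0000, 0x0000, 0x0000
    };
    uint32_t eo_mask = 0;

    printf("pc_vec = {");
    for (int i = 0; i < 16; i++) {
        int pc = __builtin_popcount(vc_vec[i]);   /* vpopcnt, one lane at a time */
        printf(" %d%s", pc, i < 15 ? "," : " }\n");
        if (pc & 1)                               /* odd count marks a left child */
            eo_mask |= 1u << (15 - i);            /* array index 0 is lane 15 */
    }
    printf("eo_mask = 0x%04X\n", eo_mask);        /* prints 0x64D8 = 0110_0100_1101_1000 */
    return 0;
}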
[00162] Following iteration 0, since there are still bits set in the "vc_vec" value in the output vector register 1402, another iteration is performed ("iteration 1"):

pc_vec = { 2, 1, 3, 0, 3, 2, 2, 1, 0, 1, 1, 0, 0, 0, 0, 0 }
eo_mask = 0110_1001_0110_0000
i_vec = { 0, 3, 10, 0, 12, 0, 0, 3, 0, 4, 4, 0, 0, 0, 0, 0 }
tmp_vec = { d0, d3+d0, d10+d9, d0, d12, d0, d0, d3+d0,
            d0, d4+d1, d4+d1, d0, d0, d0, d0, d0 }
accum_vec = { d15, d14+d8+d3+d0, d13+d11+d10+d9, d12,
              d12+d11, d10+d9, d9, d8+d3+d0,
              d7+d2, d6+d5+d4+d1, d5+d4+d1, d4+d1, d3+d0, d2, d1, d0 }
vc_vec = { 0x4000, 0x0000, 0x0040, 0x0000, 0x0040, 0x0040, 0x0040, 0x0000,
           0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000 }

[00163] Following iteration 1, since there are still bits set in the "vc_vec" value in the output vector register 1402, another iteration is performed:

pc_vec = { 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
eo_mask = 1010_1110_0000_0000
i_vec = { 14, 0, 6, 0, 6, 6, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
tmp_vec = { d14+d8+d3+d0, d0, d6+d5+d4+d1, d0,
            d6+d5+d4+d1, d6+d5+d4+d1, d6+d5+d4+d1, d0,
            d0, d0, d0, d0, d0, d0, d0, d0 }
accum_vec = { d15+d14+d8+d3+d0, d14+d8+d3+d0, d13+d11+d10+d9+d6+d5+d4+d1, d12,
              d12+d11+d6+d5+d4+d1, d10+d9+d6+d5+d4+d1, d9+d6+d5+d4+d1, d8+d3+d0,
              d7+d2, d6+d5+d4+d1, d5+d4+d1, d4+d1, d3+d0, d2, d1, d0 }
vc_vec = { 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000,
           0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000, 0x0000 }

[00164] Since the "vc_vec" in the output vector register 1402 now contains all zeroes, the loop exits. The result of the loop is as follows, with input repeated for reference:

index_vec = { A, A, B, C, B, B, B, A, D, B, B, B, A, D, B, A }
value_vec = { d15, d14, d13, d12, d11, d10, d9, d8, d7, d6, d5, d4, d3, d2, d1, d0 }
accum_vec = { d15+d14+d8+d3+d0, d14+d8+d3+d0, d13+d11+d10+d9+d6+d5+d4+d1, d12,
              d12+d11+d6+d5+d4+d1, d10+d9+d6+d5+d4+d1, d9+d6+d5+d4+d1, d8+d3+d0,
              d7+d2, d6+d5+d4+d1, d5+d4+d1, d4+d1, d3+d0, d2, d1, d0 }

[00165] Values are bolded in index_vec to highlight which lanes represent the final reduction results and values are bolded in accum_vec above to match the bolding of index_vec. Notice that the result of each reduction is in the left-most lane with an index value associated with that reduction. In this example, the left-most index value "A" is associated with the result "d15+d14+d8+d3+d0" (lane 15), the left-most index value "B" is associated with the result "d13+d11+d10+d9+d6+d5+d4+d1" (lane 13), the left-most index value "C" is associated with the result "d12" (lane 12), and the left-most index value "D" is associated with the result "d7+d2" (lane 7). This matches the final state presented in Figure 13, marked as "after iteration 2."

[00166] Having the result in the left-most lane (or most significant lane (MSL)) is advantageous on some architectures (e.g., IA) because of the scatter instruction definition. In cases where multiple elements in the scatter have the same index (i.e., write to the same memory location), the left-most lane's (MSL) value overwrites any others. While the left-most is preferred for this specific embodiment, the underlying principles of the invention are not limited to using the left-most lane for the result.
The result for a given index value may be stored in either the left-most or right-most lane associated with that index value since scatter instructions are often defined to give deterministic results by preferring either the left-most or right-most value associated with that index value when duplicates occur. In the example code presented above, the left-most lane associated with a given index value is preferred.

[00167] A method in accordance with one embodiment of the invention is illustrated in Figure 16. The method may be implemented within the context of the architectures described above, but is not limited to any specific system or processor architecture.

[00168] At 1601, conflicts are detected across index lanes (e.g., equal index values further to the least significant bit/lane) and the results are stored in a VC_VEC register. For example, in one embodiment, the conflicts are detected using a conflict detection instruction such as VPCONFLICT (see, e.g., Figure 12 and associated text).

[00169] A determination is made at 1602 as to whether any conflicts exist. This may be determined, for example, by checking whether VC_VEC has any bits currently set. If not, then the process terminates. If so, then at 1603, lanes with the same index value are marked as left and right children in their respective reduction trees. In one embodiment, this is accomplished with VPOPCNT(VC_VEC) & 0x1 (as described above). In one embodiment, this bit sequence is used as a mask (LSB per lane) which marks the left children as active (e.g., left children have an odd number of conflicts to the right while right children have an even number).

[00170] At 1604, for each lane, the bit index is calculated for the most significant 1, indicating the left-most lane (MSL) with an equal index value to the right of this lane (toward the LSL). At 1605, the right children are moved into alignment with the left children, placing the result in a temporary location. In one embodiment, this is accomplished using a vector permute/shuffle instruction.

[00171] At 1606, the reduction operation is applied to the temporary result from 1605 with the original data to combine left and right children, placing the result in the lane of the left child. At 1607, the mask created in 1603 is broadcast and bitwise ANDed with the current values in the VC_VEC register, updating the VC_VEC register and thereby clearing the bits in the VC_VEC register associated with the right children (i.e., removing those children from consideration in future iterations). The process then returns to 1602, which determines whether any conflicts remain (e.g., checking whether VC_VEC has any bits set to 1). If not, the process terminates; if so, another iteration through 1603-1607 is performed.

[00172] One application of the above techniques is in a "histogram" style operation, one example of which is shown below. Histogram operations are common in various applications, including image processing.

// Simple histogram loop
for (int i = 0; i < N; i++) {
    a[b[i]] += 1;
}

[00173] In a loop such as the "histogram" loop above, a complicating factor preventing naive vectorization of this loop is that the values of "b[j]" and "b[k]" may be equal, causing a race condition on the same element of "a" within a single simple vectorized loop iteration. This is referred to as a "conflict." Using the above techniques removes any conflicts by first combining (reducing) any conflicting values into a single value per unique index value, as sketched below.
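As a sketch of how the reduction removes these conflicts, the following C fragment models one vector-width step of the histogram loop. It reuses the illustrative tree_reduce routine from the earlier sketch and models gather/scatter as ordinary array accesses; the function name and structure are assumptions, not the patent's implementation:

#include <stdint.h>

#define SIMD_WIDTH 16

/* From the earlier sketch: parallel in-register tree reduction. */
void tree_reduce(const uint32_t index_vec[SIMD_WIDTH],
                 const int32_t value_vec[SIMD_WIDTH],
                 int32_t accum_vec[SIMD_WIDTH]);

/* One vectorized histogram step: index_vec holds b[i..i+15] and
 * value_vec is all 1s; after the reduction, only the left-most lane
 * of each unique index carries the complete per-index sum. */
void histogram_step(int32_t *a, const uint32_t index_vec[SIMD_WIDTH])
{
    int32_t value_vec[SIMD_WIDTH], accum_vec[SIMD_WIDTH];

    for (int k = 0; k < SIMD_WIDTH; k++)
        value_vec[k] = 1;

    tree_reduce(index_vec, value_vec, accum_vec);

    /* Model of gather + add + scatter: write back only from the
     * left-most (highest-numbered) lane holding each index value. */
    for (int k = 0; k < SIMD_WIDTH; k++) {
        int leftmost = 1;
        for (int j = k + 1; j < SIMD_WIDTH; j++)
            if (index_vec[j] == index_vec[k])
                leftmost = 0;
        if (leftmost)
            a[index_vec[k]] += accum_vec[k];
    }
}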
[00174] In the case of the simple histogram above, the "index" vector would be vector-width "b[i]" values and the "value" vector would have a value of "1" in every lane. If the right-hand side of the "+=" operation were the result of a calculation, rather than just a constant of "1," then the "value" vector would hold the result of that vectorized calculation. Our reduction loop could then be used in conjunction with gather and scatter instructions to vectorize the above histogram loop.

[00175] In the foregoing specification, the embodiments of the invention have been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

[00176] Embodiments of the invention may include various steps, which have been described above. The steps may be embodied in machine-executable instructions which may be used to cause a general-purpose or special-purpose processor to perform the steps. Alternatively, these steps may be performed by specific hardware components that contain hardwired logic for performing the steps, or by any combination of programmed computer components and custom hardware components.

[00177] As described herein, instructions may refer to specific configurations of hardware such as application specific integrated circuits (ASICs) configured to perform certain operations or having a predetermined functionality or software instructions stored in memory embodied in a non-transitory computer readable medium. Thus, the techniques shown in the Figures can be implemented using code and data stored and executed on one or more electronic devices (e.g., an end station, a network element, etc.). Such electronic devices store and communicate (internally and/or with other electronic devices over a network) code and data using computer machine-readable media, such as non-transitory computer machine-readable storage media (e.g., magnetic disks; optical disks; random access memory; read only memory; flash memory devices; phase-change memory) and transitory computer machine-readable communication media (e.g., electrical, optical, acoustical or other form of propagated signals - such as carrier waves, infrared signals, digital signals, etc.). In addition, such electronic devices typically include a set of one or more processors coupled to one or more other components, such as one or more storage devices (non-transitory machine-readable storage media), user input/output devices (e.g., a keyboard, a touchscreen, and/or a display), and network connections. The coupling of the set of processors and other components is typically through one or more busses and bridges (also termed as bus controllers). The storage device and signals carrying the network traffic respectively represent one or more machine-readable storage media and machine-readable communication media. Thus, the storage device of a given electronic device typically stores code and/or data for execution on the set of one or more processors of that electronic device. Of course, one or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware.
Throughout this detailed description, for the purposes of explanation, numerous specific details were set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the invention may be practiced without some of these specific details. In certain instances, well known structures and functions were not described in elaborate detail in order to avoid obscuring the subject matter of the present invention. Accordingly, the scope and spirit of the invention should be judged in terms of the claims which follow. |
A system includes a multiplexer (710, 720), an input/output (I/O) pin (719), a logic circuit (712, 722), and a control register (330). The multiplexer (710, 720) has multiple inputs, an output, and a selection input. The logic circuit (712, 722) is coupled between the multiplexer (710, 720) and the I/O pin (719). The logic circuit (712, 722) has a first input. The control register (330) includes first and second bit fields corresponding to the I/O pin. The first bit field is coupled to the selection input of the multiplexer (710, 720), and the second bit field is coupled to the first input of the logic circuit (712, 722). |
CLAIMS
What is claimed is:
1. A system, comprising: a multiplexer having a set of inputs, an output, and a selection input; an input/output (I/O) pin; a logic circuit coupled between the multiplexer and the I/O pin, the logic circuit having a first input; and a control register including: a first bit field corresponding to the I/O pin, the first bit field coupled to the selection input of the multiplexer; and a second bit field corresponding to the I/O pin, the second bit field coupled to the first input of the logic circuit.
2. The system of claim 1, wherein the logic circuit includes a second input coupled to the output of the multiplexer.
3. The system of claim 1, further comprising a device coupled to one of the inputs of the multiplexer, the device having an identifier, and the first bit field is configured to store the identifier of the device.
4. The system of claim 3, wherein the identifier is an m-bit identifier, and the system further includes a re-encoder configured to convert an n-bit address of the device to the m-bit identifier, wherein m is less than n.
5. The system of claim 1, wherein the second bit field stores a state of the I/O pin.
6. The system of claim 1, wherein the first bit field is configured to store an identifier of a device and the second bit field is configured to store a state of the I/O pin, and the system further includes an authenticator coupled to the control register, the authenticator configured to, responsive to occurrence of a write transaction having a first write address and first write data, the first write address being an address of the I/O pin and the first write data being an address of a device: update the first bit field to store an identifier of the device; and update the second bit field to indicate the I/O pin as being in a HANDOVER state with respect to the device whose identifier is in the first bit field.
7. The system of claim 6, wherein, responsive to occurrence of a second write transaction having a second write address and second write data, the second write address being the address of
the device and the second write data containing a new state of the I/O pin, the authenticator is configured to update the second bit field to the new state of the I/O pin as contained in the second write data.
8. The system of claim 6, further including an update register coupled to the authenticator, the update register configured to store the address of the device.
9. The system of claim 8, further including a re-encoder configured to convert the device address from the update register to a shorter identifier of the device.
10. The system of claim 1, wherein the system is a system-on-chip (SoC).
11. A system, comprising: an input/output (I/O) cell circuit; an I/O cell access control circuit coupled to the I/O cell circuit; an authenticator coupled to a system bus; and a control register coupled between the authenticator and the I/O cell access control circuit; wherein the authenticator is configured to authenticate a first request to map the I/O cell circuit to a device specified in the first request if the I/O cell circuit is not presently mapped to another device, and to update the control register to associate the device specified in the first request with the I/O cell circuit.
12. The system of claim 11, wherein the first request is a write transaction having a write address and write data, the write address being an address corresponding to the I/O cell circuit and the write data containing an address of the device.
13. The system of claim 12, wherein the authenticator includes a re-encoder configured to convert the address of the device to a shorter identifier for the device.
14. The system of claim 11, wherein the authenticator is configured to authenticate a second request to connect the I/O cell circuit to the device specified in the second request if the I/O cell circuit presently is mapped to the device, and to update the control register to change a state of the I/O cell circuit to indicate that the I/O cell circuit is connected to the device.
15. The system of claim 11, further including: a multiplexer having a set of inputs, an output, and a selection input; a logic circuit coupled between the multiplexer and the I/O cell circuit, the logic circuit having a first input; and a control register including:
a first bit field corresponding to the I/O cell circuit, the first bit field coupled to the selection input of the multiplexer; and a second bit field corresponding to the I/O cell circuit, the second bit field coupled to the first input of the logic circuit.
16. The system of claim 15, wherein: the first bit field is configured to store an identifier of the device; and the second bit field is configured to store a state of the I/O cell circuit.
17. A method, comprising: receiving a first request to access an input/output (I/O) pin, the I/O pin having a state, and the first request specifying a device to communicate with the I/O pin; responsive to the I/O pin's state indicating that the I/O pin is not assigned to any device, updating the state of the I/O pin in a control register to indicate that the I/O pin is associated with the device and storing an identifier of the device in the control register; receiving a second request for connection of the I/O pin to the device; and responsive to the control register storing an identifier of the device, updating the state of the I/O pin in the control register to specify a state contained in the second request.
18. The method of claim 17, further comprising updating the control register to configure an I/O cell circuit coupled to the I/O pin.
19. The method of claim 17, wherein the first request is a write transaction having a first write address and first write data, the first write address is an address of the I/O pin and the first write data is an address of the device.
20. The method of claim 19, wherein the second request is a write transaction having a second write address and second write data, the second write address is the address of the device, and the second write data indicates a state for the I/O pin.
HARDWARE-BASED SECURITY AUTHENTICATION

BACKGROUND

[0001] Many computing systems employ security to protect access to various resources such as memory and other types of peripheral devices within the system. For example, firewalls may be implemented to provide security. However, some types of resources, such as input/output (I/O) pins, typically are not protected by way of firewalls.

SUMMARY

[0002] In at least one example, a system includes a multiplexer, an input/output (I/O) pin, a logic circuit, and a control register. The multiplexer has multiple inputs, an output, and a selection input. The logic circuit is coupled between the multiplexer and the I/O pin. The logic circuit has a first input. The control register includes first and second bit fields corresponding to the I/O pin. The first bit field is coupled to the selection input of the multiplexer, and the second bit field is coupled to the first input of the logic circuit.

[0003] In another example, a system includes an input/output (I/O) cell circuit and an I/O cell access control circuit coupled to the I/O cell circuit. The system further includes an authenticator coupled to a system bus, and a control register coupled between the authenticator and the I/O cell access control circuit. The authenticator is configured to authenticate a first request to map the I/O cell circuit to a device specified in the first request if the I/O cell circuit is not presently mapped to another device, and to update the control register to associate the device specified in the first request with the I/O cell circuit.

[0004] In yet another example, a method includes receiving a first request to access an input/output (I/O) pin. The I/O pin has a state. The first request specifies a device. The method further includes, responsive to the I/O pin's state indicating that the I/O pin is not assigned to any device, updating the state of the I/O pin in a control register to indicate that the I/O pin is associated with the device and storing an identifier of the device in the control register. Further, the method includes receiving a second request for connection of the I/O pin to the device, and, responsive to the control register storing an identifier of the device, updating the state of the I/O pin in the control register to specify a state contained in the second request.
BRIEF DESCRIPTION OF THE DRAWINGS

[0005] For a detailed description of various examples, reference will now be made to the accompanying drawings in which:

[0006] FIG. 1 illustrates a computing system employing destination-side firewalls and including an input/output (I/O) multiplexer to provide secure access to one or more I/O pins.

[0007] FIG. 2 illustrates another computing system employing source-side firewalls and including an I/O multiplexer to provide secure access to one or more I/O pins.

[0008] FIG. 3 is an example block diagram of the I/O multiplexer of FIGS. 1 and 2.

[0009] FIG. 4 illustrates the content of the control register associated with each I/O pin.

[0010] FIG. 5 illustrates an example of an authenticator included within the I/O multiplexer of FIG. 3.

[0011] FIG. 6 illustrates an example of an address authenticator included with the authenticator of FIG. 5.

[0012] FIG. 7 illustrates an example of an I/O cell access control circuit included within the I/O multiplexer of FIG. 3.

[0013] FIG. 8 shows an example state diagram implemented by a state machine within the authenticator of FIG. 5.

[0014] FIG. 9 shows an example method implemented within the I/O multiplexer of FIG. 3.

[0015] FIGS. 10A-10C show an example by which a specific peripheral device function connects to a specific I/O pin.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

[0016] For a secure system, authorized software may be allowed to access granular resources such as I/O pins (data or control information), even interrupt signals, etc. However, building a traditional firewall system to manage which software can access a specific I/O pin may be area-intensive and add substantial latency because the memory maps of the firewall are packed in continuous address ranges to control structures not associated with which software process could own them. The described embodiments implement a security system for resources such as I/O pins that are not otherwise directly protected by firewalls. The described embodiments include hardware circuits that provide secure access to I/O pins.

[0017] FIG. 1 shows an example of an electronic system 100 which includes a system direct memory access (DMA) controller 102, central processing units (CPUs) 104, system memory 110,
peripheral devices 112, and I/O cells 118. The hardware components internal to each CPU 104 are partitioned into a secure portion and a non-secure portion, and each such portion has its own stack pointers and error handling hardware. Runtime code on a given CPU 104 generally does not have access to that CPU's secure portion. The system DMA controller 102 and CPUs 104 are coupled to a system bus 106. Each I/O cell 118 is a circuit that provides control of, and connection to, a corresponding I/O pin 119. The I/O pins 119 are accessible to devices external to computing system 100 as well as, as will be described below, peripheral devices 112 within the electronic system 100. Each I/O cell 118 is a configurable circuit. For example, each I/O cell circuit 118 may have a glitch filter that can be enabled or disabled, a programmable slew rate, a programmable drive strength, a selectable pull-up or pull-down resistor, etc. In one implementation, computing system 100 is a system-on-chip (SoC) and I/O pins 119 are externally-accessible pins connected to pads on the semiconductor die of the SoC.

[0018] Two CPUs 104 are shown in the example of FIG. 1, but any number (one or more) of CPUs may be included. Similarly, two peripheral devices 112 are shown in FIG. 1, but any number of peripheral devices 112 can be included. At least one I/O pin is secured as described herein. Each peripheral device 112 provides one or more addressable peripheral device functions. A peripheral device function that is addressable permits software executed by a CPU 104 to issue bus transactions (reads and writes) on the system bus 106 targeting that particular function. For example, the system 100 may include two serial peripheral interface (SPI) controllers, each having one or more addressable peripheral device functions. Each SPI controller may provide, for example, an output clock, and each such SPI clock is addressable. Further examples of a peripheral device include a universal asynchronous receiver/transmitter (UART) and an inter-integrated circuit (I2C) bus transceiver. The transmit data output (TXD) or receive data input (RXD) of a UART are addressable in one implementation, as is the I2C clock of an I2C bus transceiver.

[0019] The term "peripheral device" refers to a device that can receive and respond to requests from, for example, a CPU 104 or the system DMA 102, but also can send data to or receive data from an I/O pin 119 via the I/O multiplexer 120 (described below). Accordingly, a peripheral device 112 may be an endpoint for a transaction from a CPU 104 or function as an intermediary between the CPU 104 and an endpoint (not shown) external to the electronic system 100 via an I/O pin 119.

[0020] For security reasons, the system memory 110 and each peripheral device 112 has an associated firewall. Firewall 109 protects the system memory 110 and firewalls 111 and 113 protect
the corresponding peripheral device 112. The firewalls 109, 111, and 113 are coupled between the system bus 106 and the corresponding system memory 110 and peripheral devices 112. Each firewall 109, 111, and 113 is configured with any of a variety of rules to control which transactions can be provided through the firewall to the destination device (e.g., system memory 110, peripheral device 112). FIG. 1 is an example of destination-side firewalls in that the firewalls directly protect the destination devices (e.g., the system memory 110 and peripheral devices 112).

[0021] FIG. 2 shows an electronic system 200, similar to computing device 100 of FIG. 1, but with its firewalls implemented in a source-side firewall configuration. Electronic system 200 includes a firewall 209 coupled between the system DMA controller 102 and the system bus 106, and firewalls 211 and 213 coupled between the corresponding CPUs 104 and the system bus. Each firewall 209, 211, and 213 is configured to block any transactions from reaching the system bus 106 from its respective source device (system DMA 102, CPU 104) if such transaction fails to comply with the firewall rules implemented within the firewall.

[0022] Firewalls, however, are typically not implemented to protect access to I/O pins. In accordance with the described examples and as illustrated in FIGS. 1 and 2, the electronic system 100 includes an I/O multiplexer 120 coupled between the I/O cells 118 and the peripheral devices 112 and system bus 106. As will be described below, the I/O multiplexer 120 is a hardware circuit that can be used to implement security with respect to I/O cell access. For example, software executing on a CPU 104 can issue one or more transactions on the system bus 106 which collectively cause a particular peripheral device function to be connected to a particular I/O cell 118 and its I/O pin 119. After the peripheral device function is connected to the I/O pin, software may command the peripheral device function to control the logic state of the I/O pin. The I/O multiplexer 120 controls whether a given peripheral device function is able to access a given I/O cell 118 on behalf of a software request. For example, the I/O multiplexer 120 includes multiple control registers, one control register for each I/O cell 118. A control register can be programmed to include an identifier of a given peripheral function. Only the peripheral function identified in the control register can access the I/O cell 118 corresponding to the control register. Further, the I/O multiplexer 120 implements a security protocol as to how a given control register can be programmed with the identifier for a particular peripheral device in the first place.

[0023] The I/O multiplexer 120 sets the state of a given I/O cell 118 and controls access to the I/O cell by a peripheral device function based on the I/O cell's state. In one example, the states for an
I/O cell 118 include UNASSIGNED, HANDOVER, CONNECTED (LOCKED), and CONNECTED (UNLOCKED). The UNASSIGNED state means that the I/O cell has not been assigned to any peripheral device function. The HANDOVER state means that the I/O cell has been assigned to a particular peripheral device function, but that the peripheral device function has not been connected to the I/O cell. The I/O multiplexer 120 implements two types of CONNECTED states. The CONNECTED (LOCKED) state means that a particular peripheral device function can now directly access the I/O pin but no other peripheral device can be connected to the I/O pin. The CONNECTED (UNLOCKED) state means that a particular peripheral device function can now directly access the I/O pin with the possibility that another peripheral device can be connected to the I/O pin. In another example, the I/O multiplexer implements a single CONNECTED state, and thus operates without regard to any LOCK/UNLOCK control status.

[0024] For a given peripheral device function to control (write or read) a particular I/O pin 119, in one embodiment a multi-step security protocol is implemented by the I/O multiplexer 120. First, a CPU 104 executes one or more machine instructions (software) to issue a transaction on system bus 106 which will cause the peripheral device function to access a particular I/O pin 119. As described above, each I/O pin 119 has a corresponding address. Each I/O pin also has an associated, programmable control register. The I/O multiplexer 120 stores a list of the addresses of the I/O pins that are currently in the UNASSIGNED state (e.g., I/O pins that are available to be connected to a peripheral device function). In one example, the system bus transaction for a peripheral device function to use a pin in the UNASSIGNED state is a first write transaction in which the write address is the address of the pin and the write data includes the address of the peripheral device function attempting to gain access to the pin. The I/O multiplexer 120 stores the state of the I/O pin, for example, in the control register associated with the I/O pin. Upon receipt of the bus transaction (e.g., the first write transaction mentioned above), the I/O multiplexer verifies that the state of the target pin is currently UNASSIGNED and, if that is the case, then the I/O multiplexer changes the state of the I/O pin in the associated control register from UNASSIGNED to HANDOVER. The I/O multiplexer stores an identifier of the state in the register and thus changes the identifier from an identifier indicative of UNASSIGNED to an identifier indicative of HANDOVER. The I/O multiplexer also stores an identifier of the particular peripheral device function identified by the above-described bus transaction. At this point, the control register for the target I/O pin indicates that a particular peripheral device function is associated with the I/O pin and the pin is in the
HANDOVER state. Upon receipt of the above-described first write transaction, if the state of the I/O pin is not the UNASSIGNED state (which may be the case if the I/O pin is in the CONNECTED state with respect to a different peripheral device function), then in one example the I/O multiplexer will not update the pin's control register to store the identifier of the peripheral device function identified in the first write transaction, thus preventing that peripheral device function from being able to use the I/O pin.

[0025] The second step of the security protocol is for a CPU 104 to execute one or more machine instructions to issue a second write transaction to the peripheral device function with the write data including certain bits which specify that the I/O pin is to transition to the CONNECTED state (CONNECTED LOCKED or CONNECTED UNLOCKED) for the peripheral device function. The I/O multiplexer receives this latter, second write transaction and compares the peripheral function identifier from the I/O pin's control register to the identifier of the peripheral device function targeted by the second write transaction. If the I/O multiplexer determines that the identifiers match, then the I/O multiplexer updates the state stored in the I/O pin's control register from the HANDOVER to the CONNECTED state (LOCKED or UNLOCKED). If the I/O multiplexer determines that the identifiers do not match, then the update to the control register does not occur and the peripheral device function targeted by the second write transaction is not permitted to be connected to the I/O pin.

[0026] The third step of the security protocol is for software running on the CPU 104 to issue a third write targeting the peripheral device function with the write data being the data that is to be transmitted through the connected I/O pin. The bits of the I/O pin's control register that store the identifier of the peripheral device CONNECTED to the I/O pin are used to select the particular input of a multiplexer corresponding to the peripheral device to which the I/O pin is CONNECTED. If the I/O pin is not in the CONNECTED state (e.g., the pin is in the UNASSIGNED or HANDOVER states), then none of the multiplexer's inputs is selected. The output of the multiplexer passes through a logic circuit which permits the multiplexer's selected input to pass through if the state of the I/O pin is in the CONNECTED state. If the pin is not in the CONNECTED state, then even if an input of the multiplexer is selected to be its output, the signal on the multiplexer's output is precluded from reaching the I/O pin.
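The first two steps above amount to a small per-pin state machine. The following C sketch models the state transitions driven by the first and second write transactions; the names and the enum encoding are illustrative assumptions, and hardware-level details (bus decode, P/G channels) are omitted:

#include <stdint.h>
#include <stdbool.h>

typedef enum { UNASSIGNED, HANDOVER, CONNECTED_UNLOCKED, CONNECTED_LOCKED } pin_state_t;

typedef struct {
    pin_state_t state;
    uint8_t     func_id;   /* 6-bit peripheral device function identifier */
} pin_ctrl_t;

/* First write: write address = I/O pin, write data = peripheral function address
 * (already converted to its short identifier). */
bool request_handover(pin_ctrl_t *pin, uint8_t requested_func_id)
{
    if (pin->state != UNASSIGNED)
        return false;                 /* pin already mapped; request rejected */
    pin->func_id = requested_func_id;
    pin->state   = HANDOVER;
    return true;
}

/* Second write: write address = peripheral function, write data = new state. */
bool request_connect(pin_ctrl_t *pin, uint8_t requesting_func_id, bool lock)
{
    if (pin->state != HANDOVER || pin->func_id != requesting_func_id)
        return false;                 /* identifier mismatch; connection refused */
    pin->state = lock ? CONNECTED_LOCKED : CONNECTED_UNLOCKED;
    return true;
}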
[0027] FIG. 3 is a block diagram of the I/O multiplexer 120 and its connections to the system bus 106, peripheral devices 310 (which may be, for example, the system memory 110 and peripheral devices 112 of FIGS. 1 and 2), and I/O cells 118. The I/O multiplexer 120 includes an authenticator 320, control register 330, and an I/O cell access control circuit 340. Additional or different components may be included as well as part of the I/O multiplexer 120. Each peripheral device 310 implements at least one addressable peripheral device function. FIGS. 5 and 6 provide a detailed example implementation of the authenticator 320 and will be described below. Similarly, FIG. 7 (described below) provides a detailed example implementation of the I/O cell access control circuit 340.

[0028] Referring still to FIG. 3, each I/O pin 119 is connected to a corresponding I/O cell 118. Control registers 330 include a control register for each I/O cell 118 (which thus also means each I/O pin 119 has a corresponding control register). In general, the authenticator 320 receives and authenticates transactions on system bus 106 from software executed by a CPU 104. The particular transactions to be authenticated by the authenticator 320 include one or more of the write transactions noted above in the multi-step security protocol connecting an I/O pin to a particular peripheral device function. The authenticator 320 includes storage for addresses of I/O pins that are in the UNASSIGNED state (I/O pins which are thus available to be connected to a peripheral device function). For a write transaction whose write address is the address of an I/O pin and whose write data is the address of a peripheral device function, the authenticator 320 determines whether the write address corresponds to an I/O pin in the UNASSIGNED state. If the write address corresponds to an I/O pin in the UNASSIGNED state, the authenticator 320 updates the I/O pin's register with an identifier of the peripheral device function; otherwise (if the current state of the I/O pin is other than UNASSIGNED), the authenticator 320 does not update the register with the identifier of the peripheral device function.

[0029] In one example implementation, the authenticator 320 converts the peripheral device function's address to a shorter (i.e., fewer bits) identifier. For example, the addresses implemented by software are 27 bits long. To avoid implementing I/O pin control registers large enough to store 27-bit addresses, the authenticator 320 converts the 27-bit addresses to, for example, 6-bit identifiers for storage in the control registers. With 6-bit identifiers, 31 different peripheral device functions (identifier 000000 not being an identifier of a valid peripheral device function) can access a given I/O pin. Fewer or more than 6 bits can be used to implement peripheral device function identifiers. In other implementations, however, the control registers 330 are large enough to store the full addresses of the peripheral device functions.
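A minimal sketch of the address-to-identifier conversion follows, modeling the re-encoder as a lookup over the known 27-bit peripheral function addresses. The table contents, addresses, and function name are hypothetical; in hardware this would be combinational decode logic rather than a loop:

#include <stdint.h>

#define NUM_FUNC_IDS 64   /* 6-bit identifiers; 0 (000000b) is reserved/invalid */

/* Hypothetical address table: entry i holds the 27-bit address of the
 * peripheral device function with short identifier i. */
static const uint32_t func_addr_table[NUM_FUNC_IDS] = {
    0,           /* ID 0 reserved: not a valid peripheral function */
    0x4000100,   /* ID 1: e.g., SPI0 clock (illustrative address)  */
    0x4000200,   /* ID 2: e.g., UART0 TXD (illustrative address)   */
    /* ... remaining entries ... */
};

/* Convert a 27-bit peripheral function address to its 6-bit identifier. */
uint8_t reencode(uint32_t addr27)
{
    for (uint8_t id = 1; id < NUM_FUNC_IDS; id++)
        if (func_addr_table[id] == addr27)
            return id;
    return 0;    /* no match: invalid identifier */
}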
[0030] The control register for an I/O pin also has one or more bits for storing the state of the I/O pin. In addition to the authenticator 320 storing the identifier of the peripheral device function, the authenticator 320 also changes the state bits in the control register to specify the HANDOVER state.

[0031] Once in the HANDOVER state, CPU 104 can issue another write transaction to cause the state of the I/O pin to transition from the HANDOVER state to a CONNECTED state (either CONNECTED LOCKED or CONNECTED UNLOCKED). For this write transaction, the write address is the address of the peripheral device whose identifier is stored in the control register for the I/O pin. The authenticator 320 receives the write transaction, converts the write address to the corresponding 6-bit identifier, and compares the converted identifier to the identifier already stored in the I/O pin's control register. If the two identifiers do not match, then the authenticator does not change the state of the I/O pin from the HANDOVER state to a CONNECTED state, and the peripheral device function corresponding to the identifier converted from the write address in the write transaction is not permitted to connect to the I/O pin. However, if the authenticator 320 determines that the two identifiers match, then the authenticator updates the status stored in the I/O pin's control register from HANDOVER to one of the CONNECTED states. The particular type of CONNECTED state is determined from the write data in the write transaction.

[0032] With the control register for the I/O pin specifying that the I/O pin is in the CONNECTED state, one or more control signals 331 are asserted to the I/O cell access control circuit 340 to configure the I/O cell access control circuit 340 to select a signal from the addressable peripheral device function that corresponds to the peripheral device identifier stored in the I/O pin's control register and to permit that signal to be routed through to the I/O pin.

[0033] In some implementations, for each I/O pin 119, the I/O multiplexer 120 implements multiple channels through which peripheral devices can be connected to any given I/O pin. For example, the I/O multiplexer 120 may implement two channels referred to herein as the P channel and the G channel. Multiple peripheral device functions can be connected to inputs of the P channel and multiple peripheral device functions can be connected to inputs of the G channel. A given peripheral device function can be connected to one channel, but not the other channel. Alternatively, the same peripheral device function can be connected to inputs of both channels.

[0034] FIG. 4 shows an example bit assignment for a control register 330 of a given I/O pin. The bit assignment of the control registers of all of the I/O pins may be as shown in FIG. 4. In the example of FIG. 4, the control register 330 is a 32-bit register. Bits [5:0] store the identifier of the peripheral
device function selected through the P channel. Bits [7:6] store the state of the I/O pin with respect to the P channel, and this is referred to as the P state. In one implementation, the P-channel state bit assignments are:

TABLE I. BIT ASSIGNMENT FOR P CHANNEL STATE

Similarly, for the G channel, bits [13:8] store the identifier of the peripheral device function selected through the G channel and bits [15:14] store the state of the I/O pin with respect to the G channel (the G state), as is shown in Table II.

TABLE II. BIT ASSIGNMENT FOR G CHANNEL STATE

When the P channel is in either of the CONNECTED states, bit 7 is 1. Similarly, when the G channel is in either of the CONNECTED states, bit 15 is 1. Thus, bits 7 and 15 of an I/O pin's control register 330 can be used as part of control signals 331 to the I/O cell access control circuit 340 to cause the I/O cell access control circuit to provide a communication pathway from a peripheral device function to the I/O pin's I/O cell 118. The use of bits 7 and 15 is further illustrated in FIG. 7 and described below.

[0035] Bits [31:16] of the control register 330 provide the common control bits for the I/O pin. The common controls control the configuration of the I/O pin regardless of which channel's (P or G) peripheral device function is connected to and using the I/O pin. Table III below provides an example bit assignment for the common control bits of the control register.

TABLE III. COMMON CONTROL BIT ASSIGNMENTS
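Captured as C masks, the FIG. 4 field boundaries look as follows. Only the field positions and the CONNECTED indication on bits 7 and 15 come from the text; the macro names are assumptions, and the per-state encodings within each two-bit state field are not reproduced here:

#include <stdint.h>

/* Field boundaries of the 32-bit per-pin control register (FIG. 4). */
#define PF_ID_MASK      0x0000003Fu  /* bits [5:0]:   P-channel function ID     */
#define P_STATE_MASK    0x000000C0u  /* bits [7:6]:   P-channel state           */
#define P_CONNECTED_BIT (1u << 7)    /* 1 when the P channel is CONNECTED      */
#define GF_ID_MASK      0x00003F00u  /* bits [13:8]:  G-channel function ID     */
#define G_STATE_MASK    0x0000C000u  /* bits [15:14]: G-channel state           */
#define G_CONNECTED_BIT (1u << 15)   /* 1 when the G channel is CONNECTED      */
#define COMMON_MASK     0xFFFF0000u  /* bits [31:16]: common I/O cell controls  */

static inline uint8_t p_func_id(uint32_t reg) { return reg & PF_ID_MASK; }
static inline uint8_t g_func_id(uint32_t reg) { return (reg & GF_ID_MASK) >> 8; }
static inline int p_connected(uint32_t reg)   { return (reg & P_CONNECTED_BIT) != 0; }
static inline int g_connected(uint32_t reg)   { return (reg & G_CONNECTED_BIT) != 0; }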
[0036] FIG. 5 is an example implementation of the authenticator 320 of the I/O multiplexer 120 (FIG. 3). As shown, the authenticator 320 includes an address authenticator 510, state machines 530 and 550, register 520, read register 548, multiplexer 534, a read multiplexer selection circuit 536, and OR gate 540. Some or all of the address portion (address 506) of the system bus 106 is coupled to the address authenticator 510. Some or all of the data portion 507 of the system bus 106 is coupled to state machine 530. The state machines 530 and 550 may be implemented as digital logic circuits that perform the functionality described herein for the multi-step security protocol. Based on the write transactions described herein and based on the current state of the target I/O pin, the state machine 530 updates the corresponding I/O pin control register 330.

[0037] As described above, for an I/O pin in the UNASSIGNED state, a write transaction with a write address that matches the address of the I/O pin causes the state machine 530 to update the corresponding control register 330 to store the identifier of the peripheral device function whose address is in the write data portion of the write transaction. This process occurs in two steps. In the first step, the write data is loaded into register 520 (also referred to as the update register) under control of the F_Update_Cycle1 control signal. The F_Update_Cycle1 control signal is generated by state machine 550. State machine 550 also generates an F_Update_Cycle2 control signal whose usage is shown in FIG. 6. In the example implementation, in order to use the same authentication address decode logic, the data portion of the transaction is captured in cycle 1 (through assertion of F_Update_Cycle1) and then processed through the address decoder in cycle 2 (through assertion of F_Update_Cycle2). By contrast, a transaction directed at reading/writing the control register would have the address to be decoded as the address portion of the transaction, and the decode operation would occur in cycle 1. As such, the contents of the update register 520 include the identifier corresponding to the address of the peripheral device function.

[0038] As will be described below with regard to FIG. 6, the address authenticator 510 determines
whether the transaction is allowed to modify control information for the P and G channels. The address authenticator 510 asserts an authentication signal for the P channel (AUTHP) responsive to the identifier from the update register 520 matching the identifier currently stored in bits [5:0] of the control register for the I/O pin, and an authentication signal for the G channel (AUTHG) responsive to the identifier from the update register 520 matching the identifier currently stored in bits [13:8] of the control register.

[0039] To update a control register 330, a write transaction must be authenticated as described herein. To read a control register 330, if the corresponding I/O pin is UNASSIGNED, then no particular authentication is required to permit the read to occur. If the I/O pin is in the HANDOVER or a CONNECTED state, then only two entities can read the I/O pin. The peripheral device function that is mapped to the I/O pin in the HANDOVER or CONNECTED state can read that I/O pin's control register 330, or a high-level secure process can read the I/O pin. Read multiplexer selection circuit 536 generates a selection signal 537 to multiplexer 534 to select which control register's output read data is permitted to be stored in the read register 548. The read register 548 is coupled to the system bus 106, and thus the control register content in the read register 548 can be provided therefrom to the entity that initiated the read transaction.

[0040] The SECURE READ signal 541 is asserted (e.g., logic 1) by, for example, a CPU 104 upon a secure process issuing a read transaction. Otherwise, SECURE READ is in the opposite logic state (e.g., 0). Similarly, the UNASSIGNED READ signal 543 is asserted (e.g., logic 1) by, for example, a CPU 104 upon any process or peripheral device function attempting to read a control register whose I/O pin is in the UNASSIGNED state. Otherwise, UNASSIGNED READ is in the opposite logic state (e.g., 0). Read multiplexer selection circuit 536 has a 0-input and a 1-input. The 0-input is coupled to the address authenticator 510 and is selected if the output signal 539 of OR gate 540 is a logic 0 (which is the case if neither a read from a secure process nor a read of an UNASSIGNED I/O pin has occurred). In that case, a peripheral device function that has been authenticated by address authenticator 510 is permitted to read the appropriate control register 330 in that the selection signal 537 from the read multiplexer selection circuit 536 selects the corresponding control register 330 to transfer its contents to the read register 548.

[0041] The 1-input of the read multiplexer selection circuit 536 is coupled to at least a portion of the address portion of the system bus 106. Upon either the SECURE READ or UNASSIGNED READ signals 541, 543 being asserted to a logic 1 state, the OR gate 540 generates a logic 1 on its
output signal 539 to thereby cause the read multiplexer selection circuit 536 to select its 1-input and thus cause the address portion of the system bus 106 to be used to select the particular control register 330 to have its contents transferred to the read register 548. The address portion of the system bus 106 may be mapped to a smaller (i.e., fewer bits) representation to be used as a selection signal 537 to multiplexer 534.

[0042] FIG. 6 shows an example implementation of the address authenticator 510. The address authenticator 510 includes an input multiplexer 602, a decoder 606, and a verification circuit 610. The address authenticator 510 includes a separate verification circuit 610 for each I/O pin to be protected as described herein. The "0" input to multiplexer 602 receives the address portion 506 from the system bus 106 and the "1" input receives the write data portion 507 from the system bus 106. The state machine 550 asserts F_Update_Cycle1 (e.g., to logic 1) to load the write data 507 into the update register 520. After the write data is loaded into the update register 520, the state machine 550 then asserts F_Update_Cycle2 to cause the "1" input of multiplexer 602 to be selected through as its output to decoder 606. Otherwise (when F_Update_Cycle2 is asserted to its opposite polarity state (e.g., logic 0)), the address portion 506 of the system bus is selected through multiplexer 602 as its output to decoder 606.

[0043] The decoder 606 is shared by all of the I/O multiplexers 120. The decoder 606 converts the address provided to it from multiplexer 602 (be it the address directly from the system bus 106 or the address retrieved from the write data of a write transaction via the update register 520) to a shorter peripheral device function identifier (e.g., 6 bits in length).

[0044] The verification circuit 610 for each I/O pin includes, for its P-channel, a P-channel re-encoder 612 coupled to a P-channel compare logic circuit 614. Similarly, the verification circuit 610 includes, for its G-channel, a G-channel re-encoder 622 coupled to a G-channel compare logic circuit 624. Each re-encoder 612 and 622 converts the longer address from the system bus 106 to a shorter representation for the peripheral device function identifiers. The output 613 of the P-channel re-encoder 612 is the peripheral device function identifier (labeled Pin X Next PF [5:0] in FIG. 6) decoded from the system bus 106 or from the update register 520. Similarly, output 623 of the G-channel re-encoder 622 is the peripheral device function identifier (labeled Pin X Next GF [5:0]) decoded from the system bus 106 or from the update register 520.

[0045] FIG. 6 shows an input to the P-channel compare logic 614 being a P-channel unassigned (PU) bit and an input to the G-channel compare logic 624 being a G-channel unassigned (GU) bit.
PU is asserted to a logic state (e.g., logic 1) responsive to the I/O pin being in the UNASSIGNED state with respect to the P-channel. Similarly, GU is asserted to a logic state (e.g., logic 1) responsive to the I/O pin being in the UNASSIGNED state with respect to the G-channel. If PU is a logic 0, the P-channel compare logic circuit 614 compares Pin X Next PF [5:0] with the P-channel function identifier currently stored in that I/O pin's control register. Accordingly, the comparison is performed by the P-channel compare logic when the I/O pin is not in the UNASSIGNED state for the P-channel; if the P-channel is in the UNASSIGNED state, the control register for that channel will not have a valid peripheral device function with which to be compared. Similarly, if GU is a logic 0, the G-channel compare logic circuit 624 compares Pin X Next GF [5:0] with the G-channel function identifier currently stored in that I/O pin's control register. Accordingly, the comparison is performed by the G-channel compare logic when the I/O pin for the G-channel is not in the UNASSIGNED state. The output of the P-channel compare logic circuit 614 is a bit having one logic state (e.g., 1) if its peripheral device function identifiers match; otherwise the output bit is the other logic state (e.g., 0). The output of the G-channel compare logic circuit 624 is a bit having one logic state (e.g., 1) if its peripheral device function identifiers match; otherwise the output bit is the other logic state (e.g., 0).[0046] The address authenticator 510 also includes multiplexers 640 and 644, AND gates 642 and 646 (or other types of logic gates), and AUTHP HOLD and AUTHG HOLD registers 648 and 649. The AUTHP HOLD and AUTHG HOLD registers 648 and 649 are used to store the corresponding output bits of the P-channel compare logic circuit 614 and the G-channel compare logic circuit 624. Assertion of F_Update_Cycle1 causes registers 648 and 649 to store the corresponding outputs of the P-channel and G-channel compare logic circuits 614, 624. Responsive to F_Update_Cycle2 being a logic 0, multiplexers 640 and 644 are configured to select their 0-inputs (the outputs of the corresponding P-channel and G-channel compare logic circuits 614, 624) as their outputs. Otherwise, responsive to F_Update_Cycle2 being a logic 1, the 1-inputs of multiplexers 640, 644 are selected as their outputs. This functionality causes AUTHP[X] and AUTHG[X] for I/O pin X to be asserted at the correct time, for example, at the time either (a) coincident with the authentication of the address stored in the update register 520 (in the case of a write to the I/O pin address with the write data being the address of the peripheral device function) or (b) coincident with the authentication of the address directly from the address portion 506 of the system bus (in the case in which the write transaction is to the address of the peripheral device function). An asserted AUTHP[X]
(e.g., logic '1') means that the transaction on the P channel has been authenticated and can proceed (e.g., to update a control register 330). AUTHP[X] being a 0 means that the transaction is not authenticated. Similarly, an asserted AUTHG[X] (e.g., logic '1') means that the transaction on the G channel has been authenticated and can proceed.[0047] FIG. 7 shows an example implementation of the I/O cell access control circuit 340. A separate I/O access control circuit 340 is provided for each I/O cell and corresponding I/O pin. FIG. 7 shows the I/O access control circuit 340 for an I/O pin 719 (which may be one of the I/O pins 119 in FIGS. 1-3). I/O pin 719 is coupled to an I/O cell circuit 718 which in turn is connected to the I/O cell access control circuit 340.[0048] The I/O access control circuit 340 includes a P channel multiplexer 710, a P-channel logic circuit 712, a G channel multiplexer 720, a G channel logic circuit 722, multiplexer 726, and outbound manipulation circuit 728. The P channel multiplexer 710 has multiple inputs (any of which can be coupled to a peripheral device function), an output, and a selection input. In one implementation, the P channel multiplexer has 32 inputs and thus can be coupled to as many as 31 different peripheral device functions. The P channel peripheral function identifier (bits [5:0]) is the selection signal for the P-channel multiplexer. However, identifier value 000000 is not a valid peripheral device function, so a maximum of only 31 peripheral device functions can be selected by the peripheral device function identifier in the control register. Each peripheral device function input to multiplexer 710 is a single bit signal (i.e., a 0 or a 1 from the corresponding peripheral device function). Responsive to the state machine 530 programming the I/O pin control register with a particular peripheral device identifier for the P channel, the programmed peripheral device identifier (which is coupled to the selection input 709 of the P channel multiplexer 710) causes the P channel multiplexer to select the input corresponding to the peripheral device identifier stored in the control register.[0049] The P channel logic circuit 712 prevents the selected input of the P channel multiplexer 710 from being in communication with the I/O cell 718 unless the P channel is in the CONNECTED state. The P channel logic circuit 712 has inputs 715 and 717. The output 711 of the P-channel multiplexer 710 is coupled to input 715 of the P channel logic circuit 712. In the example of FIG. 7, the P channel logic circuit 712 is, or includes, an AND gate 713 and inputs 715 and 717 are the inputs of the AND gate 713. Bit 7 of the P state field of the control register 330 is coupled to input 717 of AND gate 713. In one implementation, each control register 330 is a combination of flip-
flops and bit 7 is the output of a flip-flop. As described previously, bit 7 is a 1 when the P channel is in either of the CONNECTED states. With bit 7 being a 0, any signal from a peripheral device function through multiplexer 710 will be gated off by AND gate 713. Responsive to the P channel being in a CONNECTED state, bit 7 is a 1 and thus the logic state of a signal on the selected peripheral device function through multiplexer 710 flows through AND gate 713 to input 729 of multiplexer 726.[0050] The G channel has a configuration similar to that of the P channel. The G channel multiplexer 720 has multiple inputs (e.g., 32), any of which can be coupled to a peripheral device function. The G channel peripheral function identifier (bits [13:8]) is the selection signal for the G channel multiplexer 720. Thus, responsive to the state machine 530 programming the I/O pin control register with a particular peripheral device identifier for the G channel, the programmed peripheral device identifier causes the G channel multiplexer 720 to select the input corresponding to the peripheral device identifier stored in the control register. Each peripheral device function input to multiplexer 720 is a single bit signal (i.e., a 0 or a 1 from the corresponding peripheral device function). The G channel logic circuit 722 has inputs 719 and 721. The output 727 of the G channel multiplexer 720 is coupled to input 721 of the G channel logic circuit 722.[0051] As is the case for the P channel's logic circuit 712, the G channel's logic circuit 722 prevents the selected peripheral device function from communicating with the I/O cell unless the G channel is in the CONNECTED state. In the example of FIG. 7, the G channel logic circuit 722 is, or includes, an AND gate 723 and inputs 719 and 721 are the inputs of the AND gate 723. Bit 15 (which may be the output of a flip-flop) of the G state field of the control register 330 is coupled to input 719 of AND gate 723. As described previously, bit 15 is a 1 when the G channel is in either of the CONNECTED states. With bit 15 being a 0, any signal from a peripheral device function through multiplexer 720 will be gated off by AND gate 723. Responsive to the G channel being in a CONNECTED state, bit 15 is a 1 and thus the logic state of a signal on the selected peripheral device function through multiplexer 720 flows through AND gate 723 to input 731 of multiplexer 726.[0052] As such, for a given peripheral device function to assert a signal through to a given I/O cell 718, the control register for that I/O cell must be programmed for the identifier of the given peripheral device function and the channel to which that peripheral device function is coupled must be in one of the CONNECTED states. The bits of the peripheral device function identifier in the control register are used to control that channel's multiplexer 710, 720 and at least one of the state bits for
that channel (e.g., bits 7 and 15) is used to gate on/off the communication pathway between the peripheral device function and the I/O cell based on the state of the channel.[0053] Multiplexer 726 implements a priority selection between the P and G channels in the event both channels have an active connection between the I/O cell circuit 718 and peripheral device functions. For example, the G channel could be used to drive a wake-up protocol sequence of bits to the I/O pin 719 to signal a receiving device that a transmission is about to occur, while the P channel could be used to drive data to the receiving device. The PRIORITY signal 725 is a selection signal for multiplexer 726 to select one of the P or G channels to be coupled to the I/O cell circuit 718. The PRIORITY signal 725 may be asserted by, for example, state machine 530.[0054] The I/O cell circuit 718 receives one or more bits of the common control field within the control register 330. The I/O cell circuit 718 uses the bits to configure the I/O cell circuit 718 (e.g., open drain, pull-up or pull-down resistor, drive strength, etc.).[0055] FIG. 8 is an example state diagram illustrating the states implemented by state machine 530 for a given I/O pin X. This state diagram is applicable to either the G channel or the P channel for a given I/O pin. The states shown in this example include UNASSIGNED 810, HANDOVER 820, CONNECTED (UNLOCKED) 830, CONNECTED (LOCKED) 840, and LOCKED 850. GU is a hidden register bit for the G channel indicating the unassigned state of the G channel. GU being a 1 means that the G channel is unassigned for I/O pin X while GU being a 0 means that a peripheral device function has been assigned through the G channel for I/O pin X. PU also is a hidden register bit that means the same as the GU bit but for the P channel. GL is a hidden register bit for the G channel indicating the lock status of the G channel. GL being a 1 means that I/O pin X is in the locked state for a particular peripheral device function while GL being a 0 means that the I/O pin is not in the locked state. PL also is a hidden register bit that means the same as the GL bit but for the P channel. GSTATE indicates the state of the G channel ('00' means UNASSIGNED, '01' means HANDOVER, '10' means CONNECTED (UNLOCKED), and '11' means CONNECTED (LOCKED)). The state diagram of FIG. 8 is applicable to the G channel, but a similar state transition is implemented by the state machine 530 for the P channel.[0056] While in the UNASSIGNED state 810, GU equals 1 (unassigned), GL equals 0 (unlocked), and GSTATE and PSTATE equal '00' (unassigned). From the UNASSIGNED state 810, the state machine 530 can transition to the HANDOVER state 820. In the HANDOVER state, the I/O pin X has been handed over to a peripheral device function and thus the I/O pin is no longer unassigned.
The transition between UNASSIGNED state 810 and HANDOVER state 820 can be caused in one of two ways. First, if PU is set equal to 1 (which means the P channel is in the UNASSIGNED state for the I/O pin X) and software issues a write transaction in which the write address is the address of I/O pin X, the G channel state changes from UNASSIGNED state 810 to HANDOVER state 820 if nextGSTATE is set equal to '01' (the state bits within the control register 330) and nextGF is not equal to 0 (i.e., the write data is an address of a peripheral device function and thus not 0). The state machine 530 updates the control register for the I/O pin X to store the identifier for the peripheral device function (following mapping of its address to the identifier) and updates the state bits in the register to '01' to indicate that the state of the G channel is now HANDOVER. At this point, the G channel is in the HANDOVER state for a particular peripheral device function and the P channel is still in the UNASSIGNED state (meaning that no peripheral device function coupled to the P channel multiplexer 710 has been assigned to the I/O pin X). While the G channel is in the HANDOVER state 820, GU=0, GL is 0 or 1, and GSTATE is '01'.[0057] Once in the HANDOVER state 820, a transition can occur to either the CONNECTED (UNLOCKED) state 830, the CONNECTED (LOCKED) state 840, or the LOCKED state 850. A transition to the CONNECTED (UNLOCKED) state 830 occurs upon AuthG[X] 511 being asserted by the address authenticator 510 with nextGSTATE = '10' and the GL lock bit set to 0. While in the CONNECTED (UNLOCKED) state 830, GU=0, GL=0, and GSTATE='10'. The state machine 530 updates the control register 330 for I/O pin X to specify the G channel state as '10'. The CONNECTED (UNLOCKED) state 830 permits the peripheral device function which has been connected to the I/O pin X to use the I/O pin for transmitting or receiving data.[0058] From the HANDOVER state 820, a transition to the CONNECTED (LOCKED) state 840 occurs upon either AuthG[X] 511 being asserted by the address authenticator 510 or upon nextGSTATE being '11' and the GL lock bit set to 1. While in the CONNECTED (LOCKED) state 840, GU=0, GL=1, and GSTATE='11'. The state machine 530 updates the control register 330 for I/O pin X to specify the G channel state as '11'. The CONNECTED (LOCKED) state 840 permits the peripheral device function which has been connected to the I/O pin X to use the I/O pin for transmitting or receiving data.[0059] The LOCKED state 850 is a state in which the I/O pin is locked but not connected to any peripheral device function. The state machine 530 transitions to the LOCKED state 850 upon AUTHG[X] being asserted by the address authenticator 510 while PU=0 and with
nextGSTATE='00', nextGF=0, and nextGU=1. While in this state, PU=0 (assigned), GU=1 (unassigned), GL=1 (locked) and GSTATE='00' (unassigned). From the LOCKED state 850, the state machine 530 can transition back to the UNASSIGNED state 810 upon AUTHP[X] being asserted by the address authenticator and nextPU being set to 1 and nextGL being set to 0.[0060] FIG. 9 is a flow chart illustrating an example method 900. At 902, a request is made to access a target I/O pin. In one example (and as described above), this includes a CPU 104 executing one or more machine instructions to perform a write transaction in which the write address is the address of the target I/O pin and the write data includes the address of the peripheral device function.[0061] At 904, a state machine (e.g., state machine 530 in the authenticator 320 of the I/O multiplexer 120) determines whether the target I/O pin is currently in the UNASSIGNED state. This determination is performed by examination of the state bits. If the state bits correspond to the UNASSIGNED state (e.g., 00), then the target I/O pin is determined to be in the UNASSIGNED state. Otherwise, the target I/O pin is determined not to be in the UNASSIGNED state. If the target I/O pin is not in the UNASSIGNED state (which means it is assigned to a different peripheral device function), the request is denied at 906. Denial of the request may mean ignoring the request and taking no further action.[0062] If the target I/O pin is currently in the UNASSIGNED state, then control passes to operation 908 and the state of the target I/O pin is changed to the HANDOVER state. This operation may be performed by the state machine 530 updating the state field of the control register 330 for the target I/O pin to specify the HANDOVER state. At 910, a request is made to connect the I/O pin to the peripheral device function specified in the control register 330. This request may be performed by CPU 104 issuing a write transaction in which the write address is the address of the peripheral device function and the write data contains an indication of a CONNECTED state for the I/O pin (e.g., CONNECTED (LOCKED) or CONNECTED (UNLOCKED)). The verification circuit 610 within the address authenticator 510 determines whether the identifier corresponding to the write address matches the identifier currently stored in the I/O pin's control register. If the identifiers do not match, the request is denied at 914. However, if the identifiers match, then control moves to operation 916 in which the state machine 530 changes the state of the I/O pin to one of the CONNECTED states (as specified in the request at operation 910). Subsequently, at operation 918, the common control bits within the control register 330 are configured if the I/O pin is in the
CONNECTED state for the given peripheral device function (as ensured by state machine 530).[0063] FIGS. 10A, 10B, and 10C show an example in which a specific peripheral device function connects to a specific I/O pin. Three peripheral devices 1001, 1002, and 1003 are shown as UART 0, UART 1, and I2C 0, respectively. UART 0 has a base address of 0x40004800 and two peripheral device functions: a transmit function (TXD) and a receive function (RXD). The TXD function has an address offset relative to the base address of 0x8 and the RXD's offset is 0x4. Similarly, UART 1 has a base address of 0x40012800 with a TXD offset of 0x8 and RXD offset of 0x4. The I2C's base address is 0x40073800 and its two functions and their offsets are DATA (offset 0x8) and CLK (offset 0x4). I/O pins in the UNASSIGNED state are identified at 1005. The I/O pin address space has a base address of 0x4001A000 and I/O pins 1, 2, and 22 have offsets of 0x4, 0x8, and 0x58, respectively.[0064] In this example, UART 0's TXD function initiates a process to connect to I/O pin 22. At step 1011, a write transaction is performed (e.g., by CPU 104) in which the write address is the address of the I/O pin 22 (0x4001A058) and the write data includes the address of UART 0's TXD function (0x40004808). The authenticator 320 responds as described above and the state machine 530 updates the control register 330 for I/O pin 22 to specify that the state of the I/O pin is HANDOVER for the P channel (the G channel is still in the UNASSIGNED state) and that the stored peripheral device function identifier is the identifier corresponding to UART 0 TXD address 0x40004808.[0065] At step 1012, a write transaction is performed in which the write address is the address of UART 0 TXD (address 0x40004808) and the write data includes bits that encode the next state for I/O pin 22 as the CONNECTED (UNLOCKED) state. The firewalls ensure the security of the transaction targeting UART 0 TXD and thus authenticate the transaction at step 1012.[0066] At step 1013, the common control bits for I/O pin 22 are configured through a write transaction in which the write address again is the address of UART 0 TXD (0x40004808) and the write data includes the common control configuration bits. As in step 1012, the firewalls ensure the security of the transaction targeting UART 0 TXD and thus authenticate the transaction at step 1013.[0067] The term "couple" is used throughout the specification. The term may cover connections, communications, or signal paths that enable a functional relationship consistent with this description. For example, if device A generates a signal to control device B to perform an action, in a first
example device A is coupled to device B, or in a second example device A is coupled to device B through intervening component C if intervening component C does not substantially alter the functional relationship between device A and device B such that device B is controlled by device A via the control signal generated by device A.[0068] Modifications are possible in the described embodiments, and other embodiments are possible, within the scope of the claims. |
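The two-step handover/connect flow of FIGS. 8 and 9 can be summarized in software. The following Python sketch is offered only as a behavioral reading aid under stated assumptions, not as the hardware implementation; the class and method names are illustrative, and only the UNASSIGNED, HANDOVER, and CONNECTED transitions of the state diagram are modeled.

    UNASSIGNED, HANDOVER, CONNECTED_UNLOCKED, CONNECTED_LOCKED = 0, 1, 2, 3

    class ChannelState:
        # Models one channel (P or G) of one I/O pin's control register.
        def __init__(self):
            self.state = UNASSIGNED
            self.func_id = 0  # stored peripheral device function identifier; 0 is invalid

        def request_handover(self, func_id):
            # Write to the I/O pin address with the peripheral device function
            # address as write data (steps 902/908); denied unless UNASSIGNED.
            if self.state != UNASSIGNED or func_id == 0:
                return False  # request denied (step 906)
            self.state, self.func_id = HANDOVER, func_id
            return True

        def request_connect(self, func_id, lock=False):
            # Write to the peripheral device function address (step 910); the
            # decoded identifier must match the stored one (steps 912/916).
            if self.state != HANDOVER or func_id != self.func_id:
                return False  # request denied (step 914)
            self.state = CONNECTED_LOCKED if lock else CONNECTED_UNLOCKED
            return True

    pin22_p = ChannelState()
    assert pin22_p.request_handover(0x08)     # illustrative identifier for UART 0 TXD
    assert not pin22_p.request_connect(0x04)  # non-matching function: denied
    assert pin22_p.request_connect(0x08)      # authenticated: CONNECTED (UNLOCKED)

In the hardware, of course, the comparison is performed by the verification circuit 610 and the transitions by state machine 530; the sketch only mirrors the accept/deny decisions.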
According to one exemplary embodiment, a FET, which is situated over a substrate (104), comprises a channel (112) situated in the substrate (104). The FET further comprises a first gate dielectric (116) situated over the channel (112), where the first gate dielectric (116) has a first coefficient of thermal expansion. The FET further comprises a first gate electrode (114) situated over the first gate dielectric (116), where the first gate electrode (114) has a second coefficient of thermal expansion, and where the second coefficient of thermal expansion is different than the first coefficient of thermal expansion so as to cause an increase in carrier mobility in the FET. The second coefficient of thermal expansion may be greater than the first coefficient of thermal expansion, for example. The increase in carrier mobility may be caused by, for example, a tensile strain created in the channel (112). |
1. A FET situated over a substrate (104), said FET comprising: a channel (112) situated in said substrate (104); a first gate dielectric (116) situated over said channel (112), said first gate dielectric (116) having a first coefficient of thermal expansion; a first gate electrode (114) situated over said first gate dielectric (116), said first gate electrode (114) having a second coefficient of thermal expansion; wherein said second coefficient of thermal expansion is different than said first coefficient of thermal expansion so as to cause an increase in carrier mobility in said FET. 2. The FET of claim 1 wherein said second coefficient of thermal expansion is greater than said first coefficient of thermal expansion. 3. The FET of claim 2 wherein said increase in said carrier mobility is caused by a tensile strain created in said channel (112). 4. A FET situated over a substrate (104), said FET comprising a channel (112) situated in said substrate (104), a first gate dielectric (116) situated over said channel (112), said first gate dielectric (116) having a first coefficient of thermal expansion, a first gate electrode (114) situated over said first gate dielectric (116), said first gate electrode (114) having a second coefficient of thermal expansion, said FET being characterized in that: said second coefficient of thermal expansion being different than said first coefficient of thermal expansion so as to cause an increase in carrier mobility in said FET. 5. The FET of claim 4 wherein said second coefficient of thermal expansion is greater than said first coefficient of thermal expansion so as to cause a tensile strain in said channel (112), said tensile strain causing said increase in said carrier mobility. 6. The FET of claim 4 further comprising a second gate electrode (220) situated between said first gate electrode (222) and said first gate dielectric (216), said second gate electrode (220) having a third coefficient of thermal expansion, said third coefficient of thermal expansion being greater than said first coefficient of thermal expansion and said third coefficient of thermal expansion being less than said second coefficient of thermal expansion so as to cause a tensile strain in said channel (212), said tensile strain causing said increase in said carrier mobility. 7. The FET of claim 4 further comprising a second gate dielectric (324) situated between said first gate dielectric (316) and said substrate (304), said second gate dielectric (324) having a third coefficient of thermal expansion, said third coefficient of thermal expansion being less than said first coefficient of thermal expansion and said second coefficient of thermal expansion being greater than said first coefficient of thermal expansion so as to cause a tensile strain in said channel (312), said tensile strain causing said increase in said carrier mobility. 8. The FET of claim 4 wherein said FET is a PFET, said first coefficient of thermal expansion being greater than said second coefficient of thermal expansion so as to cause a compressive strain in said channel (112), said compressive strain causing said increase in said carrier mobility. 9. 
The FET of claim 4 further comprising a gate liner (426,428) situated adjacent to said first gate dielectric (116) and a gate spacer (430,432) situated adjacent to said gate liner (426,428), said gate liner (426,428) having a third coefficient of thermal expansion and said gate spacer (430,432) having a fourth coefficient of thermal expansion, said fourth coefficient of thermal expansion being greater than said third coefficient of thermal expansion so as to cause a tensile strain in said channel (412). 10. A FET situated on a substrate (104), said FET comprising: a channel (112) situated in said substrate (104); a gate stack (106) situated over said channel (112); a first gate dielectric (116) situated in said gate stack (106), said first gate dielectric (116) having a first coefficient of thermal expansion; a first gate electrode (114) situated over said first gate dielectric (116), said first gate electrode (114) having a second coefficient of thermal expansion; wherein said second coefficient of thermal expansion is different than said first coefficient of thermal expansion so as to cause a strain in said channel (112), said strain causing an increase in carrier mobility in said FET. |
FIELD EFFECT TRANSISTOR HAVING INCREASED CARRIER MOBILITY TECHNICAL FIELD The present invention is generally in the field of semiconductor devices. More particularly, the present invention is in the field of fabrication of semiconductor field effect transistors ("FETs"). BACKGROUND ART A continuing demand exists for higher performance integrated circuits ("IC"), such as very large scale integrated circuits ("VLSI"). As a result, semiconductor manufacturers are challenged to increase the performance of transistors, such as n-channel field effect transistors ("NFETs") or p-channel field effect transistors ("PFETs"), which are utilized in ICs. One important measure of field effect transistor ("FET") performance is speed, which is related to current in the FET. A typical FET includes a gate stack, which includes a gate electrode situated over a gate dielectric, a source and a drain, and a channel, which is situated between the source and the drain in a silicon substrate. The channel is also situated underneath the gate dielectric, which is situated over a substrate, such as a silicon substrate. When a voltage is applied to the gate electrode that is greater than a threshold voltage, a layer of mobile charge carriers, e.g., electrons in an NFET and holes in a PFET, is created in the channel. By applying a voltage to the drain of the FET, a current can be caused to flow between drain and source. In the FET discussed above, the mobility of the carriers is directly related to the current that flows between the drain and the source, also referred to as FET current in the present application, which is directly related to the speed of the FET. Carrier mobility is a function of, among other things, temperature, electric field created between gate electrode and channel by the gate voltage, and dopant concentration. By increasing carrier mobility, FET current and, consequently, FET speed can be increased. Thus, as a result of increasing carrier mobility, FET performance can be desirably increased. Thus, there is a need in the art for a FET having increased carrier mobility to achieve increased FET performance. SUMMARY The present invention is directed to field effect transistors ("FETs") having increased carrier mobility. The present invention addresses and resolves the need in the art for a FET having increased carrier mobility to achieve increased FET performance. According to one exemplary embodiment, a FET, which is situated over a substrate, comprises a channel situated in the substrate. The FET further comprises a first gate dielectric situated over the channel, where the first gate dielectric has a first coefficient of thermal expansion. The FET further comprises a first gate electrode situated over the first gate dielectric, where the first gate electrode has a second coefficient of thermal expansion, and where the second coefficient of thermal expansion is different than the first coefficient of thermal expansion so as to cause an increase in carrier mobility in the FET. The second coefficient of thermal expansion may be greater than the first coefficient of thermal expansion, for example. The increase in carrier mobility may be caused by, for example, a tensile strain created in the channel. 
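Before turning to the variations of this embodiment, the strain mechanism lends itself to a back-of-the-envelope check. The short Python sketch below applies the standard first-order thermal-mismatch relation, strain being approximately the CTE difference multiplied by the temperature change; the numeric CTE and temperature values are illustrative assumptions and are not taken from this document.

    # First-order thermal-mismatch estimate; all numeric values are assumed
    # for illustration only.
    alpha_electrode = 4.7e-6    # 1/K, a polysilicon-like gate electrode CTE
    alpha_dielectric = 0.5e-6   # 1/K, an SiO2-like gate dielectric CTE
    delta_t = 900.0             # K, cool-down after high-temperature deposition

    strain = (alpha_electrode - alpha_dielectric) * delta_t
    print(f"estimated thermal-mismatch strain: {strain:.1e}")  # ~3.8e-03

A positive result here corresponds to the electrode shrinking more than the dielectric on cool-down, which is the tensile-strain case the text associates with increased carrier mobility.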
According to this exemplary embodiment, the FET may further comprise a "gate liner" situated adjacent to the first gate dielectric and a "gate spacer" situated adjacent to the gate liner, where the gate liner has a third coefficient of thermal expansion and the gate spacer has a fourth coefficient of thermal expansion, and where the fourth coefficient of thermal expansion is greater than the third coefficient of thermal expansion so as to cause a tensile strain in the channel. According to one exemplary embodiment, the FET may further comprise a second gate electrode situated between the first gate electrode and the first gate dielectric, where the second gate electrode has a third coefficient of thermal expansion, where the third coefficient of thermal expansion is greater than the first coefficient of thermal expansion and the third coefficient of thermal expansion is less than the second coefficient of thermal expansion so as to cause a tensile strain in the channel, and where the tensile strain causes the increase in the carrier mobility. Other features and advantages of the present invention will become more readily apparent to those of ordinary skill in the art after reviewing the following detailed description and accompanying drawings. BRIEF DESCRIPTION OF THE DRAWINGS Figure 1 illustrates a cross-sectional view of a structure, including an exemplary FET, in accordance with one embodiment of the present invention. Figure 2 illustrates a cross-sectional view of a structure, including an exemplary FET, in accordance with one embodiment of the present invention. Figure 3 illustrates a cross-sectional view of a structure, including an exemplary FET, in accordance with one embodiment of the present invention. Figure 4 illustrates a cross-sectional view of a structure, including an exemplary FET, in accordance with one embodiment of the present invention. DETAILED DESCRIPTION OF THE INVENTION The present invention is directed to field effect transistors ("FETs") having increased carrier mobility. The following description contains specific information pertaining to the implementation of the present invention. One skilled in the art will recognize that the present invention may be implemented in a manner different from that specifically discussed in the present application. Moreover, some of the specific details of the invention are not discussed in order not to obscure the invention. The drawings in the present application and their accompanying detailed description are directed to merely exemplary embodiments of the invention. To maintain brevity, other embodiments of the present invention are not specifically described in the present application and are not specifically illustrated by the present drawings. Figure 1 shows a cross-sectional view of an exemplary structure including an exemplary FET in accordance with one embodiment of the present invention. Structure 100 includes FET 102, which is situated on substrate 104. FET 102 includes gate stack 106, which includes gate electrode layer 114 and gate dielectric layer 116, source 108, drain 110, and channel 112. In the present embodiment, FET 102 can be an NFET or a PFET. As shown in Figure 1, source 108 and drain 110, which are formed in a manner known in the art, are situated in substrate 104 and channel 112 is situated between source 108 and drain 110. Further shown in Figure 1, gate dielectric layer 116 is situated over channel 112 on top surface 118 of substrate 104. 
By way of example, gate dielectric layer 116 can have a thickness of between 10.0 Angstroms and 15.0 Angstroms. Also shown in Figure 1, gate electrode layer 114 is situated over gate dielectric layer 116. By way of example, gate electrode layer 114 can have a thickness of between 500.0 Angstroms and 2000.0 Angstroms. Gate electrode layer 114 can be deposited over gate dielectric layer 116 at high temperature utilizing a chemical vapor deposition ("CVD") process or other appropriate processes. In the present embodiment, gate electrode layer 114 and gate dielectric layer 116 are selected such that gate electrode layer 114 has a coefficient of thermal expansion ("CTE") that is higher than a CTE of gate dielectric layer 116. Thus, as a wafer comprising structure 100 cools down after gate electrode layer 114 has been deposited at high temperature, gate electrode layer 114 decreases in size to a greater extent (i.e., shrinks more) than gate dielectric layer 116. As a result, tensile strain is created in channel 112, which increases carrier mobility in FET 102. In one embodiment, FET 102 is a PFET while gate dielectric layer 116 and gate electrode layer 114 are selected such that gate dielectric layer 116 has a CTE that is higher than a CTE of gate electrode layer 114. In such an embodiment, compressive strain is created in channel 112, which increases carrier mobility in the PFET. Figure 2 shows a cross-sectional view of an exemplary structure including an exemplary FET in accordance with one embodiment of the present invention. Structure 200 includes FET 202, which is situated on substrate 204. FET 202 includes gate stack 206, which includes gate electrode layers 220 and 222 and gate dielectric layer 216, source 208, drain 210, and channel 212. Similar to FET 102, FET 202 can be an NFET or a PFET. In structure 200 in Figure 2, substrate 204, source 208, drain 210, and channel 212 correspond, respectively, to substrate 104, source 108, drain 110, and channel 112 in structure 100. As shown in Figure 2, gate dielectric layer 216 is situated over channel 212 on top surface 218 of substrate 204. By way of example, gate dielectric layer 216 can have a thickness of between 10.0 Angstroms and 15.0 Angstroms. Also shown in Figure 2, gate electrode layer 220 is situated over gate dielectric layer 216 and may comprise, for example, polycrystalline silicon or other appropriate material. By way of example, gate electrode layer 220 can have a thickness of between 100.0 Angstroms and 500.0 Angstroms. Further shown in Figure 2, gate electrode layer 222 is situated over gate electrode layer 220 and may comprise, for example, silicide or other appropriate material. By way of example, gate electrode layer 222 can have a thickness of between 400.0 Angstroms and 1500.0 Angstroms. Gate electrode layer 222 can be deposited over gate electrode layer 220 at high temperature utilizing a CVD process or other appropriate processes. In the embodiment of the present invention in Figure 2, gate electrode layers 220 and 222 and gate dielectric layer 216 are selected such that gate electrode layer 222 has a CTE that is higher than a CTE of gate electrode layer 220 and the CTE of gate electrode layer 220 is higher than a CTE of gate dielectric layer 216. 
Thus, as a wafer comprising structure 200 cools down after gate electrode layer 222 has been deposited at high temperature, gate electrode layer 222 decreases in size to a greater extent than gate electrode layer 220 and gate electrode layer 220 decreases in size to a greater extent than gate dielectric layer 216. As a result, tensile strain is created in channel 212, which increases carrier mobility in FET 202. In one embodiment, FET 202 is a PFET while gate dielectric layer 216 and gate electrode layers 220 and 222 are selected such that gate dielectric layer 216 has a CTE that is higher than a CTE of gate electrode layer 220 and the CTE of gate electrode layer 220 is higher than a CTE of gate electrode layer 222. In such an embodiment, compressive strain is created in channel 212, which increases carrier mobility in the PFET. Figure 3 shows a cross-sectional view of an exemplary structure including an exemplary FET in accordance with one embodiment of the present invention. Structure 300 includes FET 302, which is situated on substrate 304. FET 302 includes gate stack 306, which includes gate electrode layer 314 and gate dielectric layers 316 and 324, source 308, drain 310, and channel 312. Similar to FET 102, FET 302 can be an NFET or a PFET. In structure 300 in Figure 3, substrate 304, source 308, drain 310, and channel 312 correspond, respectively, to substrate 104, source 108, drain 110, and channel 112 in structure 100. As shown in Figure 3, gate dielectric layer 316 is situated over channel 312 on top surface 318 of substrate 304 and may comprise silicon dioxide or other appropriate dielectric. Also shown in Figure 3, gate dielectric layer 324 is situated over gate dielectric layer 316 and may comprise silicon nitride or other appropriate dielectric. Further shown in Figure 3, gate electrode layer 314 is situated over gate dielectric layer 324. Gate electrode layer 314 can be deposited over gate dielectric layer 324 at high temperature utilizing a CVD process or other appropriate processes. In the present embodiment, gate electrode layer 314 and gate dielectric layers 316 and 324 are selected such that gate electrode layer 314 has a higher CTE than a CTE of gate dielectric layer 324 and gate dielectric layer 324 has a higher CTE than a CTE of gate dielectric layer 316. Thus, as a wafer comprising structure 300 cools down after gate electrode layer 314 has been deposited at high temperature, gate electrode layer 314 is reduced in size to a greater extent than gate dielectric layer 324 and gate dielectric layer 324 is reduced in size to a greater extent than gate dielectric layer 316. As a result, tensile strain is created in channel 312, which increases carrier mobility in FET 302. In one embodiment, FET 302 is a PFET while gate dielectric layers 316 and 324 and gate electrode layer 314 are selected such that gate dielectric layer 316 has a CTE that is higher than a CTE of gate dielectric layer 324 and gate dielectric layer 324 has a higher CTE than a CTE of gate electrode layer 314. In such an embodiment, compressive strain is created in channel 312, which increases carrier mobility in the PFET. Figure 4 shows a cross-sectional view of an exemplary structure including an exemplary FET in accordance with one embodiment of the present invention. Structure 400 includes FET 402, which is situated on substrate 404. FET 402 includes gate stack 406, source 408, drain 410, channel 412, "gate liners" 426 and 428, and "gate spacers" 430 and 432. 
Similar to FET 102, FET 402 can be an NFET or a PFET. In structure 400 in Figure 4, substrate 404, source 408, drain 410, and channel 412 correspond, respectively, to substrate 104, source 108, drain 110, and channel 112 in structure 100. As shown in Figure 4, gate stack 406 is situated over substrate 404. Gate stack 406 can be gate stack 106 in Figure 1, gate stack 206 in Figure 2, or gate stack 306 in Figure 3. Further shown in Figure 4, gate liners 426 and 428 are situated over substrate 404 and are also situated adjacent to respective sides of gate stack 406. By way of example, gate liners 426 and 428 can have a thickness of between 50.0 Angstroms and 200.0 Angstroms. Also shown in Figure 4, gate spacers 430 and 432 are situated adjacent to gate liners 426 and 428, respectively. Thus, gate liners 426 and 428 are situated between gate spacers 430 and 432 and sides of gate stack 406, respectively, and are also situated between respective gate spacers 430 and 432 and substrate 404. In the present embodiment, gate liners 426 and 428 and gate spacers 430 and 432 are selected such that gate spacers 430 and 432 have respective CTEs that are higher than respective CTEs of gate liners 426 and 428. As a result, for similar reasons as discussed above, tensile strain is created in channel 412, which increases carrier mobility in FET 402. In one embodiment, FET 402 is a PFET while gate liners 426 and 428 and gate spacers 430 and 432 are selected such that gate liners 426 and 428 have respective CTEs that are higher than respective CTEs of gate spacers 430 and 432. As a result, compressive strain is created in channel 412, which increases carrier mobility in the PFET. Thus, as discussed above, by selecting gate electrode and dielectric layers of a gate stack to have appropriate respective coefficients of thermal expansion, the present invention achieves increased tensile strain in the channel of a FET, i.e., FETs 102, 202, 302, or 402. As a result, the present invention advantageously achieves increased carrier mobility in the FET, which results in increased FET performance. Additionally, by selecting gate electrode and dielectric layers of a gate stack to have appropriate respective coefficients of thermal expansion, the present invention achieves increased compressive strain in the channel of a PFET, which results in increased carrier mobility and, consequently, increased performance in the PFET. From the above description of exemplary embodiments of the invention it is manifest that various techniques can be used for implementing the concepts of the present invention without departing from its scope. Moreover, while the invention has been described with specific reference to certain embodiments, a person of ordinary skill in the art would recognize that changes could be made in form and detail without departing from the spirit and the scope of the invention. The described exemplary embodiments are to be considered in all respects as illustrative and not restrictive. It should also be understood that the invention is not limited to the particular exemplary embodiments described herein, but is capable of many rearrangements, modifications, and substitutions without departing from the scope of the invention. Thus, field effect transistors ("FETs") having increased carrier mobility have been described. |
A buried-channel PMOS device is fabricated simultaneously with a surface-channel device if its gate is doped N-type at the same time the NMOS gates are doped and the P+ source/drain doping is blocked from the "high" P-channel device. In the normal process the "high" PMOS is not fully self-aligned. However, when the PMOS process includes a lightly-doped drain (PLDD), the LDD doping is self-aligned. |
What is claimed is: 1. A process of forming a naturally high Vt P-MOS device simultaneously with N-MOS and P-MOS surface channel devices, comprising the steps of:forming N-tank regions; forming P-tank regions; forming transistor regions within the N-tank and P-tank regions; implanting P-type doping ions near the surface of the P-tank regions; implanting N-type doping ions near the surface of at least one of the transistor regions within the N-tank regions; forming gate oxide on each of the transistor regions; forming N-type polysilicon gate electrodes on the transistor regions in P-tank and N-tank regions while simultaneously forming at least one of undoped and P-type polysilicon gate electrode over other transistor regions within N-tank regions; implanting n-type doping ions into P-tank regions to form N-MOS transistor source/drain regions; and implanting p-type doping ions into N-tank regions to form P-MOS transistor source/drain regions while intentionally blocking the majority of the implant ions from penetration into the N-type polysilicon gate electrodes within N-tank regions. 2. The process according to claim 1, including the step of:forming a coating of oxide over each gate electrode. 3. The process according to claim 1, wherein the oxide/nitride region around each of the gate electrodes is formed by depositing and anisotropically etching the nitride.4. The process according to claim 1, including the step of forming isolation nitride pads over the tank regions prior to forming the field oxide regions.5. The process according to claim 1, including the step of, after forming N-tank and P-tank regions, forming field oxide regions between adjacent tank regions.6. A process of forming a P-MOS buried channel device simultaneously with N-MOS and P-MOS surface channel devices, comprising the steps of:implanting phosphorus to form two N-tank regions; implanting boron to form a P-tank region; forming isolation nitride pads over the tank regions; forming field oxide regions between adjacent tank regions; forming gate oxide on each of the tank regions; implanting boron into the P-tank region; implanting phosphorus into one of the two N-tank regions; forming N-type polysilicon gate electrodes on the P-tank and one of the N-tank regions and at least one of an undoped and P-type polysilicon gate electrode on the other N-tank region; implanting the P-tank with one of phosphorus and arsenic to form lightly doped source/drain regions; implanting the N-tank with boron to form lightly doped source/drain regions; forming an oxide/nitride region around each of the gate electrodes; implanting the P-tank with at least one of phosphorus and arsenic to form N-type source/drain regions; and implanting boron to form P-type source/drains in the two N-tanks while blocking the implant from the N-type polysilicon gate electrode contained therein. 7. The process according to claim 6, including the step of:forming a coating of oxide over each gate electrode. 8. The process according to claim 6, wherein the oxide/nitride region around each of the gate electrodes is formed by depositing and anisotropically etching the nitride.9. 
A process of forming a naturally high Vt P-MOS device simultaneously with N-MOS and P-MOS surface channel devices, comprising the steps of:implanting phosphorus to form N-tank regions; implanting boron to form P-tank regions; forming field oxide regions between adjacent tank regions; forming gate oxide on each of the tank regions; implanting boron into the P-tank region; implanting phosphorus into one of the two N-tank regions; forming N-type polysilicon gate electrodes on the P-tank and one of the N-tank regions and at least one of an undoped and P-type polysilicon gate electrode on the other N-tank region; implanting the P-tank with one of phosphorus and arsenic to form lightly doped source/drain regions; implanting the N-tank with boron to form lightly doped source/drain regions; forming an oxide/nitride region around each of the gate electrodes; implanting the P-tank with one of phosphorus and arsenic to form N-type source/drain regions; and implanting boron to form P-type source/drains in the two N-tanks while blocking the implant from the N-type polysilicon gate electrode contained therein. 10. A process for forming a naturally high Vt PMOS device in an integrated circuit comprising mainly NMOS and PMOS surface channel devices without using additional process steps, comprising the steps of:a) forming a semiconductor substrate with a plurality of N-type regions each for containing a PMOS device and a plurality of P-type regions each for containing an NMOS device, with suitable isolation regions therebetween; b) forming an insulated polysilicon gate electrode in each of the plurality of N-type regions for the PMOS devices and for the high Vt PMOS devices and in each of the plurality of P-type regions for the NMOS devices; c) implanting an n-type dopant in the polysilicon gate electrodes for the NMOS devices and for the high Vt PMOS devices using a same implant step to form N+ polysilicon gate electrodes; d) implanting a p-type dopant in the polysilicon gate electrodes for the PMOS devices to form P+ polysilicon gate electrodes; e) implanting an n-type dopant to form lightly-doped-drain extension regions and source-drain regions for the NMOS devices; and f) implanting a p-type dopant to form lightly-doped-drain extension regions and source-drain regions for the PMOS devices and for the high Vt PMOS devices. 11. The method of claim 10, wherein step b comprises the steps of forming a layer of polysilicon over the semiconductor substrate and patterning the polysilicon layer to form a plurality of polysilicon gate electrodes; and wherein step c is performed prior to the step of patterning the polysilicon layer by masking regions of the polysilicon layer that correspond to the PMOS devices but not to the high Vt PMOS devices. 12. The method of claim 10, wherein step d and step f are performed using the same implant steps.13. The method of claim 10, wherein during step f, the N+ polysilicon gate electrodes of the high Vt PMOS devices are masked to prevent p-type dopant implantation therein. |
FIELD OF THE INVENTION The invention relates to integrated circuits, and more particularly to a high threshold PMOS transistor utilizing a surface-channel process. BACKGROUND OF THE INVENTION In certain integrated circuit designs, a PMOS transistor with high Vt is required to guarantee a zero through-current in normal circuit operation. This has been accomplished by the use of a "natural" Vt buried channel PMOS device. However, most present-day CMOS technology uses surface-channel PMOS transistors; therefore, an alternative method is required. For traditional "buried-channel" PMOS devices, the high Vt is easy to make. When the PMOS gate material is N+-doped polysilicon, a boron Vt adjust implant is usually required to reduce the Vt to the desired voltage; the "natural" Vt (without a Vt-adjustment implant) is too high for optimum circuit performance. For a buried-channel PMOS process, the Vt-adjust implant may be blocked from those transistors that need the high Vt, and both high and low Vt devices are produced simultaneously. If the natural PMOS Vt is too high, an extra mask and implant will produce a device with the correct Vt. For surface-channel PMOS devices, obtaining high Vt is more difficult. For these devices, the PMOS gate is P+ doped, so the "natural" device has a very low Vt. SUMMARY OF THE INVENTION The invention provides a method for building high Vt PMOS devices in an otherwise surface-channel process without adding any process steps. A buried-channel PMOS device is fabricated simultaneously with a surface-channel device if its gate is doped N-type at the same time the NMOS gates are doped and the P+ source/drain doping is blocked from the "high" P-channel device. In the normal process the "high" PMOS is not fully self-aligned. However, when the PMOS process includes a lightly-doped drain (PLDD), the LDD doping is self-aligned. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 shows a P substrate of silicon with N-wells, and six devices; and FIGS. 2 through 21 show the process steps for forming an NMOS and PMOS surface channel device, and a PMOS buried channel device according to the present invention. DESCRIPTION OF A PREFERRED EMBODIMENT FIG. 1 shows a P-type substrate 10 of silicon having six devices 12-17 formed thereon. Devices 12, 14, 15 and 17 are PMOS devices, and devices 13 and 16 are NMOS devices. Each of PMOS devices 12, 14, 15 and 17 is formed in N-wells (also called N-Tanks) 12a, 14a, 15a and 17a, respectively. NMOS devices are formed directly in substrate 10. Each device has a moat area and a gate. For example, the moat areas of devices 12, 14, 15, and 17 are 12b, 14b, 15b, and 17b, respectively, and the gates are 12c, 14c, 15c and 17c, respectively. Devices 13 and 16 utilize the substrate 10 as the tank area. Devices 13 and 16 have moats 13b and 16b, respectively, and gates 13c and 16c, respectively. FIGS. 2-21 show, for example, the formation of three devices as it would appear in a cross-sectional view taken along section line 2-2. FIG. 2 shows substrate 10 which is, for example, a p-type silicon substrate, boron doped. Substrate portion 20 may be, for example, a p-epi silicon, boron doped layer. A first oxide pad 21 is formed over p-layer 20. Oxide pad 21 may be in the range of 370 to 490 Å in thickness. On top of oxide pad 21 is formed a nitride layer 22. Nitride layer 22 may be a thickness in the range of 900 to 1500 Å. FIG. 3 shows an initial process step for forming three devices. A layer of photoresist material 24 is placed over the central portion of nitride layer 22. 
Portions of nitride layer 22 have been removed; the portion under photoresist 24 is not removed. An N-tank phosphorus implant is made through the oxide pad 21 to form two N-tanks 25,26 in the surface of layer 20. FIG. 4 shows the two N-tank regions 25,26 in layer 20. The photoresist layer 24 (FIG. 3) has been removed by, for example, a plasma O2 removal process, exposing the nitride layer 22, and a tank oxide 27, 28 is formed over the N-tank regions 25, 26, respectively. Tank oxide layers 27,28 may have, for example, a thickness in the range of 3800 to 5200 Å. FIG. 5 shows the next step in the process where nitride layer 22 has been removed, for example, by a phosphoric acid etch process. A boron implant is made to form P-tank 30. The device now has two N-Tanks 25 and 26, which may correspond to N-Tanks 12a and 14a in FIG. 1, and P-Tank 30 is the area on device 10 between 12a and 14a in FIG. 1. After P-Tank 30 has been formed, the substrate is annealed at a high temperature so that the N-tank and P-tank regions diffuse to the desired depth, as shown in FIG. 6. N-Tanks 25 and 26, on each side of P-Tank 30, are covered with tank oxide layers 27 and 28, respectively. P-tank 30 has a layer 29 of pad oxide over its surface. In FIG. 7, the tank oxide layers 27 and 28 are removed over a central portion of each of N-Tanks 25 and 26 and P-Tank 30. An isolation pad oxide 31 is formed over the N-tanks 25 and 26, and P-tank 30. The isolation oxide pad may be, for example, in the range of 125 to 175 Å. Isolation nitride layer 32, for example in the range of 1800 to 2200 Å thick, is then deposited over pad oxide layer 31. Photoresist is deposited and portions are removed to form photoresist layers 34a, 34b, 34c. The nitride layer 32 is then etched to form nitride pads 32a, 32b, and 32c over each tank region 25, 26 and 30. After forming the nitride pads 32a, 32b and 32c, the device is subjected to an oxidation process which includes heating the device in an N2 atmosphere in a temperature range, for example, of 750° to 950° followed by a steam oxidation process. This forms the field oxide regions shown in FIG. 8 and indicated at 33a, 33b, 33c and 33d. FIG. 9 shows the next step in the process where nitride pads 32a, 32b and 32c and pad oxide regions 31a, 31b and 31c have been removed using phosphoric acid followed by a HF wet etch. In the etch and cleaning process, when nitride pads 32a, 32b and 32c are removed, the pad oxide layers 31a-31c (FIG. 8) are removed. FIG. 10 shows layers of dummy gate oxide, layers 34a, 34b and 34c, that are formed, for example, by heating between 750° and 950° followed by a steam oxidation process. Next, in FIG. 11, a layer of photoresist 36a, 36b is deposited over each of the N-tank regions 25 and 26, but P-tank 30 is exposed. A Vt implant for the NMOS device (VTN) is made to the P-tank region 30 to set the threshold voltage, an NMOS device implant for punch through (NPTHRU) is made to prevent bulk punch through in the N channel, and a channel stop implant is made to set the field voltage. These three implants are made with boron. The photoresist layers 36a and 36b are removed and a photoresist layer 37 is formed over the P-tank 30 and the high Vt PMOS N tank region 25 as shown in FIG. 12. Two implants are made with phosphorus: a Vt implant for the PMOS device (VTP) to set P-channel threshold voltages, and a PMOS device implant for punch through (PPTHRU) to prevent bulk punch through in the P-channel. Next, the photoresist 37 is removed. 
The dummy gate oxide 34a-34c is etched away and gate oxide 34d-34f is grown over the tank surfaces (tanks 25, 26 and 30). Then a polysilicon layer 35 is deposited over each tank area (FIG. 13). The polysilicon layer 35 may be, for example, a thickness between 2800 and 3400 Å. FIG. 13a shows a layer of photoresist 37b over tank region 26. An NPOLY phosphorus implant is made into polysilicon 35 except into the part over N-tank 26. This provides N-type doping in the polysilicon over P-tank region 30 and N-tank region 25. After the phosphorus implant (FIG. 13a), photoresist 37b over tank region 26 is removed and photoresist material is deposited on the polysilicon and patterned to form the regions 38, 39 and 40, over the central regions 26a, 30a and 25a of the N- and P-tanks 26, 30 and 25, respectively. This is shown in FIG. 14. The polysilicon 35 is etched to remove all the polysilicon over the P-tank and N-tank regions except those regions that were under the photoresist areas 38, 39 and 40. After the etch process, polysilicon gate electrodes 41, 42, and 43 remain on N-tank 25, P-tank 30, and N-tank 26, respectively. Each polysilicon gate electrode may be, for example, in the range of 2800 to 3400 Å in thickness. A thin coating of oxide 41a, 42a and 43a is formed over each respective polysilicon gate electrode 41-43. This oxide may be in the range, for example, of 75 to 85 Å thick (FIG. 15). FIG. 16 shows the device of FIG. 15 with the two N-tanks 25 and 26 covered with photoresist 50 and 51, respectively. In this step of the process, the source/drain region of the P-tank is implanted with phosphorus and arsenic. This is the N-LDD (Lightly Doped Drain) implant. The implanted region 55 is shown in FIG. 16. In FIG. 17, the P-tank region is covered with photoresist 53 and the N-tank regions 25 and 26 are implanted with boron to form the source/drain regions 58 and 59. This is the P-LDD (Lightly Doped Drain) implant. FIG. 18 shows the device of FIG. 17 after nitride has been deposited on the device surface and anisotropically etched back to form oxide/nitride spacers around each of the polysilicon electrodes 43, 42, and 41. The oxide/nitride spacers 62-67 are used for self-aligned silicide formation as well as allowing self-aligned source/drain implants. In FIG. 19, the N-tank regions are covered with photoresist at 70 and 71. An arsenic and/or phosphorus N+ source/drain implant is made to allow grading of the N+/P junctions. The step of grading is shown at 55/55a, with the 55a region being the N+ implant region. In FIG. 20, a layer of photoresist is placed over the P-tank region. A layer of photoresist is placed over the gate electrode 41 on the N-tank 25. However, there is no photoresist over gate electrode 43 on N-tank 26. A P+ source/drain boron implant is made over the N-tank regions. Photoresist layer 73 blocks the source/drain implant from gate electrode 41 and may provide an extended P region 58 when compared with the P region 59. The P+ regions 58a and 59a are a result of the P+ implant. FIG. 21 shows the photoresist layers 72 and 73 removed, and the resulting PLDD (P Lightly Doped Drain) device in N-Tank 25. The P+ source/drain implant has been blocked to provide the lightly doped region 58. The LDD is self-aligned and provides a high Vt PMOS device in an otherwise surface channel process without adding additional process steps. |
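Which gate receives which implant is the heart of the process, so a compact summary helps. The Python table below distills FIGS. 13a through 21 into a reading aid; it is a restatement of the text, not a process recipe, and the dictionary layout and field wording are illustrative.

    # Gate doping per device, distilled from FIGS. 13a-21 (reading aid only).
    devices = {
        "NMOS (P-tank 30, gate 42)":         ("N+ via NPOLY implant", "gate also receives the N+ S/D implant"),
        "surface PMOS (N-tank 26, gate 43)": ("P+ via P+ S/D implant", "gate left exposed to the boron implant"),
        "high-Vt PMOS (N-tank 25, gate 41)": ("N+ via NPOLY implant", "gate masked by photoresist 73"),
    }
    for name, (gate_doping, note) in devices.items():
        print(f"{name}: {gate_doping}; {note}")

The N-type gate over N-tank 25, combined with blocking the P+ source/drain implant from that gate, is what yields the buried-channel, naturally high-Vt PMOS device without any process steps beyond those already used for the surface-channel devices.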
Multiple data transfer requests can be merged and transmitted as a single packet on a packetized bus such as a PCI Express (PCI-E) bus. In one embodiment, requests are combined if they are directed to contiguous address ranges in the same target device. An opportunistic merging procedure is advantageously used that merges a first request with a later request if the first request and the later request are mergeable and are received within a certain period of time; otherwise, requests can be transmitted without merging. |
1. A bus interface device for transmitting data transfer requests from a plurality of clients as packets on a bus, the device comprising: a queue configured to store a plurality of data transfer requests from the plurality of clients, each data transfer request specifying a target address range; combiner logic configured to form a packet from at least one of the data transfer requests in the queue, the combiner logic being further configured to combine two or more of the data transfer requests in the queue into one packet in the event that the two or more data transfer requests being combined specify respective target address ranges that are mergeable; and output logic configured to drive the packets onto the bus.
2. The bus interface device of claim 1 wherein the respective target address ranges specified by two or more data transfer requests are mergeable in the event that the respective target address ranges are contiguous.
3. The bus interface device of claim 1 wherein the respective target address ranges specified by two or more data transfer requests are mergeable in the event that either the respective target address ranges are contiguous or the respective target address ranges are at least partially overlapping.
4. The bus interface device of claim 2 wherein each data transfer request further specifies a target device and wherein the respective target address ranges specified by two or more data transfer requests are mergeable in the event that each of the two or more data transfer requests specifies the same target device and the respective target address ranges are contiguous.
5. The bus interface device of claim 1 wherein the combiner logic includes: merging logic configured to detect whether any of the data transfer requests in the queue are mergeable with an oldest one of the data transfer requests in the queue and to generate status information based at least in part on the detection; and send control logic configured to determine whether to transmit a packet during a current cycle or wait for a subsequent cycle, wherein the determination whether to transmit a packet is based at least in part on the status information generated by the merging logic.
6. The bus interface device of claim 5 wherein the send control logic is further configured such that the determination whether to transmit a packet is based at least in part on a number of data transfer requests in the queue.
7. The bus interface device of claim 5 wherein the send control logic is further configured such that the determination whether to transmit a packet is based at least in part on an elapsed time since sending a previous packet.
8. The bus interface device of claim 5 wherein the send control logic is further configured such that the determination whether to transmit a packet is based at least in part on an elapsed time since receiving the oldest data transfer request in the queue.
9. The bus interface device of claim 5 wherein the send control logic is further configured such that the determination whether to transmit a packet is based at least in part on a number of data transfer requests in the queue that are mergeable with the oldest data transfer request in the queue.
10. The bus interface device of claim 1 wherein the clients are processing cores of a processor.
11. The bus interface device of claim 1 wherein each of the clients is a discrete component and wherein the bus interface device is configured with a dedicated local bus line for connecting to each of the clients.
12. The bus interface device of claim 1 wherein the data transfer requests are received in a packet format suitable for transmission on the bus.
13. The bus interface device of claim 1 wherein the data transfer requests are received in an internal format that is different from a packet format suitable for transmission on the bus.
14. The bus interface device of claim 1 wherein each of the data transfer requests is a read request.
15. The bus interface device of claim 1 wherein each of the data transfer requests is a write request.
16. The bus interface device of claim 1 wherein the bus is a PCI Express (PCI-E) bus.
17. A method for transmitting data transfer requests from a plurality of clients as packets on a bus, the method comprising: receiving a first data transfer request specifying a first target address range; receiving at least one subsequent data transfer request, each subsequent data transfer request specifying a respective target address range; determining whether the first target address range is mergeable with the target address range of one or more of the subsequent data transfer requests; forming a packet for transmission on the bus, wherein in the event that the first target address range is mergeable with the target address range specified by one or more of the subsequent data transfer requests, the packet is formed from the first request and the one or more of the subsequent data transfer requests, and wherein in the event that the first target address range is not mergeable with the target address range of any of the subsequent requests, the packet is formed from the first request; and driving the packet onto the bus.
18. The method of claim 17 wherein the first target address range is mergeable with a second target address range of a subsequent data transfer request in the event that the first target address range and the second target address range are contiguous.
19. The method of claim 18 wherein each of the first and subsequent data transfer requests further specifies a target device and wherein the first target address range is mergeable with the second target address range in the event that the first data transfer request and the subsequent data transfer request specify the same target device and the first target address range and the second target address range are contiguous.
20. The method of claim 17 wherein the act of forming the packet is performed in response to a send condition based at least in part on a number of data transfer requests that have been received.
21. The method of claim 17 wherein the act of forming the packet is performed in response to a send condition based at least in part on an elapsed time since sending a previous packet.
22. The method of claim 17 wherein the act of forming the packet is performed in response to a send condition based at least in part on an elapsed time since receiving the first data transfer request.
23. The method of claim 17 wherein the act of forming the packet is performed in response to a send condition based at least in part on a number of subsequent data transfer requests that are mergeable with the first data transfer request.
24. A processor comprising: a plurality of processing cores, each processing core configured to generate data transfer requests; and a bus interface unit configured to receive data transfer requests from the processing cores and to transmit the data transfer requests as packets on a bus, the bus interface unit including: a queue configured to store a plurality of data transfer requests from the plurality of processing cores, each data transfer request specifying a target address range; combiner logic configured to form a packet from at least one of the data transfer requests in the queue, the combiner logic being further configured to combine two or more of the data transfer requests in the queue into one packet in the event that the two or more data transfer requests being combined specify respective target address ranges that are mergeable; and output logic configured to drive the packets onto the bus. |
BACKGROUND OF THE INVENTION

The present invention relates in general to communication on a bus, and in particular to combining packets for transmission onto a bus that uses a packetized protocol.

Modern personal computer systems generally include a number of different devices, including processors, memory, data storage devices using magnetic or optical media, user input devices such as keyboards and mice, output devices such as monitors and printers, graphics accelerators, and so on. All of these devices communicate with each other via various buses implemented on a motherboard of the system. Numerous bus protocols are known, including PCI (Peripheral Component Interconnect), PCI-E (PCI Express), AGP (Advanced Graphics Processing), Hypertransport, and so on. Each bus protocol specifies the physical and electrical characteristics of the connections, as well as the format for transferring information via the bus. In many instances, the buses of a personal computer system are segmented, with different segments sometimes using different bus protocols, and the system includes bridge chips that interconnect different segments.

Typically, buses are used to exchange data between system components. For instance, when a graphics processor needs to read texture or vertex data stored in system memory, the graphics processor requests the data via a bus and receives a response via the same bus. Where many devices are making requests for data (e.g., from system memory) or where one device is making large or frequent requests, a bus or bus segment can become saturated, leading to decreased performance. In fact, many modern graphics processors are bandwidth-limited; that is, their performance is limited by the ability to deliver data via the bus that connects them to the rest of the system. Consequently, reducing traffic on the bus, which increases the available bandwidth, is expected to improve system performance. Techniques that reduce traffic on the bus would therefore be highly desirable.

BRIEF SUMMARY OF THE INVENTION

Embodiments of the present invention provide devices and methods for merging multiple data transfer requests and transmitting the merged requests as a single packet on a packetized bus such as a PCI Express (PCI-E) bus. Requests can be combined, for instance, if they are directed to contiguous address ranges in the same target device. An opportunistic merging procedure is advantageously used that merges a first request with a later request if the first request and the later request are mergeable and are received within a certain period of time; otherwise, requests can be transmitted without merging. The wait time and other parameters of the procedure can be tuned to optimize tradeoffs between reduced overhead on the bus due to merging and added latency introduced by waiting for mergeable requests.

According to one aspect of the present invention, a bus interface device is provided for transmitting data transfer requests from a plurality of clients as packets on a bus. The bus interface device includes a queue, combiner logic, and output logic. The queue is configured to store data transfer requests from the clients, each data transfer request specifying a target address range.
The combiner logic is configured to form a packet from at least one of the data transfer requests in the queue and is further configured to combine two or more of the data transfer requests in the queue into one packet in the event that the two or more data transfer requests being combined specify respective target address ranges that are mergeable. The output logic is configured to drive the packets onto the bus.

In some embodiments, the respective target address ranges specified by two or more data transfer requests are mergeable in the event that the respective target address ranges are contiguous. In other embodiments, each data transfer request further specifies a target device, and the respective target address ranges specified by two or more data transfer requests are mergeable in the event that each of the two or more data transfer requests specifies the same target device and the respective target address ranges are contiguous.

In some embodiments, the combiner logic includes merging logic and send control logic. The merging logic is configured to detect whether any of the data transfer requests in the queue are mergeable with an oldest one of the data transfer requests in the queue and to generate status information based at least in part on the detection. The send control logic is configured to determine whether to transmit a packet during a current cycle or wait for a subsequent cycle, wherein the determination whether to transmit a packet is based at least in part on the status information generated by the merging logic.

Various conditions can be tested in determining whether to send a packet. For example, the send control logic can be configured such that the determination whether to transmit a packet is based at least in part on a number of data transfer requests in the queue, or on an elapsed time since sending a previous packet, or on an elapsed time since receiving the oldest data transfer request in the queue, or on a number of data transfer requests in the queue that are mergeable with the oldest data transfer request in the queue. Any combination of these or other conditions may be tested.

The bus interface device can be arranged in various ways. In one embodiment, the device is a component of a processor and the clients are processing cores of the processor. In another embodiment, each of the clients is a discrete component, and the bus interface device is configured with a dedicated local bus line for connecting to each of the clients.

In some embodiments, the data transfer requests are received by the bus interface device in a packet format suitable for transmission on the bus. In other embodiments, the data transfer requests are received in an internal format that is different from a packet format suitable for transmission on the bus. The data transfer requests may include, e.g., read requests and/or write requests. The bus can be a PCI Express (PCI-E) bus or any other packetized bus.

According to another aspect of the present invention, a method for transmitting data transfer requests from a plurality of clients as packets on a bus includes receiving a first data transfer request specifying a first target address range and receiving at least one subsequent data transfer request, each subsequent data transfer request specifying a respective target address range. A determination is made as to whether the first target address range is mergeable with the target address range of one or more of the subsequent data transfer requests.
A packet is formed for transmission on the bus. In the event that the first target address range is mergeable with the target address range specified by one or more of the subsequent data transfer requests, the packet is formed from the first request and the one or more of the subsequent data transfer requests; in the event that the first target address range is not mergeable with the target address range of any of the subsequent requests, the packet is formed from the first request. The packet is driven onto the bus.

In some embodiments, the act of forming the packet is performed in response to a send condition, which can be based on various considerations. For example, the send condition can be based at least in part on a number of data transfer requests that have been received, on an elapsed time since sending a previous packet, on an elapsed time since receiving the first data transfer request, or on a number of subsequent data transfer requests that are mergeable with the first data transfer request. Any combination of these or other conditions may be used to control when a packet is formed and/or sent.

According to still another aspect of the present invention, a processor includes multiple processing cores and a bus interface unit. Each of the processing cores is configured to generate data transfer requests. The bus interface unit, which is configured to receive data transfer requests from the processing cores and to transmit the data transfer requests as packets on a bus, includes a queue, combiner logic, and output logic. The queue is configured to store data transfer requests from the processing cores, each data transfer request specifying a target address range. The combiner logic is configured to form a packet from at least one of the data transfer requests in the queue and is further configured to combine two or more of the data transfer requests in the queue into one packet in the event that the two or more data transfer requests being combined specify respective target address ranges that are mergeable. The output logic is configured to drive the packets onto the bus.

The following detailed description together with the accompanying drawings will provide a better understanding of the nature and advantages of the present invention.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of a computer system according to an embodiment of the present invention;
FIG. 2 is a block diagram of a graphics processing unit according to an embodiment of the present invention;
FIG. 3 illustrates an operating principle for combining packets according to an embodiment of the present invention;
FIG. 4 is a block diagram of a transmitter module that combines and transmits packets according to an embodiment of the present invention;
FIG. 5 is a flow diagram showing a control logic process that may be implemented in a merging logic block according to an embodiment of the present invention;
FIG. 6 is a flow diagram of process steps for merging additional requests according to an embodiment of the present invention;
FIG. 7 is a flow diagram showing a control logic process that may be implemented in a send control logic block according to an embodiment of the present invention; and
FIG. 8 is an example of processing for a sequence of requests according to an embodiment of the present invention.

DETAILED DESCRIPTION OF THE INVENTION

Embodiments of the present invention provide devices and methods for combining two or more data transfer requests into a single packet for transmission onto a packetized bus such as a PCI Express (PCI-E) bus. As used herein, a "packetized" bus includes any bus via which data transfer requests are transmitted using packets with a header (generally although not necessarily of fixed size) and a payload of variable size. The header advantageously identifies the requesting device and the target address range. Target address ranges can be identified by starting and ending addresses, by starting address and size, or the like. In some instances, the target device may be expressly identified; in other instances, the target address range adequately identifies the target device in accordance with address mapping rules, and an explicit identification of a target device is not required. Requests can be combined by the bus interface unit of an integrated device that includes multiple request generators, by a discrete bus interface element such as a switch, or by other devices as will become apparent in view of the present disclosure. Combining multiple requests into a single packet reduces the overhead on the bus arising from multiple packet headers. In some embodiments, the reduced overhead can provide increased bandwidth for data and/or other performance advantages, examples of which are described below.

FIG. 1 is a block diagram of a computer system 100 according to an embodiment of the present invention. Computer system 100 includes a central processing unit (CPU) 102 and a system memory 104 communicating via a memory bridge 105. Memory bridge 105 is connected via a bus 106 to an I/O (input/output) bridge 107. I/O bridge 107 receives user input from one or more user input devices 108 (e.g., keyboard, mouse) and forwards the input to CPU 102 via bus 106 and memory bridge 105. Visual output is provided on a pixel-based display device 110 (e.g., a conventional CRT or LCD based monitor) operating under control of a graphics subsystem 112 coupled to memory bridge 105 via a bus 113. A system disk 114 is also connected to I/O bridge 107. A switch 116 provides connections between I/O bridge 107 and other components such as a network adapter 118 and various add-in cards 120, 121. In preferred embodiments, some or all of the connections among various components of system 100 (e.g., between memory bridge 105 and graphics subsystem 112, between memory bridge 105 and I/O bridge 107, and between I/O bridge 107 and switch 116) are implemented using a packetized bus protocol such as PCI-Express (PCI-E).

Graphics processing subsystem 112 includes a graphics processing unit (GPU) 122 and a graphics memory 124, which may be implemented, e.g., using one or more integrated circuit devices such as programmable processors, application specific integrated circuits (ASICs), and memory devices. GPU 122 may be configured to perform various tasks related to generating pixel data from graphics data supplied by CPU 102 and/or system memory 104 via memory bridge 105 and bus 113, interacting with graphics memory 124 to store and update pixel data, and the like. For example, GPU 122 may generate pixel data from 2-D or 3-D scene data provided by various programs executing on CPU 102.
GPU 122 may also store pixel data received via memory bridge 105 to graphics memory 124 with or without further processing. GPU 122 advantageously also includes a scanout pipeline for delivering pixel data from graphics memory 124 to display device 110. Any combination of rendering and scanout operations can be implemented in GPU 122, and a detailed description is omitted as not being critical to understanding the present invention.

It will be appreciated that the system shown herein is illustrative and that variations and modifications are possible. The number and arrangement of bus devices and bridges may be modified as desired; for instance, a graphics subsystem could be connected to I/O bridge 107 rather than memory bridge 105, or I/O bridge 107 and memory bridge 105 might be integrated into a single chip. Alternatively, a graphics subsystem could be integrated on a single chip with a bus bridge. The bus topology may also be varied; for instance, in one alternative embodiment, the system memory is connected to the CPU directly rather than through a bridge. Any number of graphics processors may be included (e.g., by connecting multiple add-in cards with a graphics processor on each to bus 113), and such processors may be operated in parallel to generate images for the same display device or for different display devices. Each graphics processor may have any amount of local graphics memory, including no local memory, and a graphics processor may use local memory and system memory in any combination.

FIG. 2 is a block diagram of GPU 122 according to an embodiment of the present invention. GPU 122 includes multiple execution cores 202(1) to 202(N) that operate in parallel to perform rendering and/or scanout operations. In one embodiment, one of cores 202(1) to 202(N) performs scanout operations while the other cores perform rendering operations; it will be appreciated that other configurations may be used. Any number N of cores may be provided, and each core may be of generally conventional design.

From time to time in the course of their operations, cores 202(1) to 202(N) may require data to be retrieved from system memory 104 (FIG. 1) or other devices that are accessible to GPU 122 via memory bridge 105. In accordance with an embodiment of the present invention, cores 202(1) to 202(N) forward such requests to a bus interface unit 204. Bus interface unit 204 includes a transmitter module 206 and a receiver module 208. Transmitter module 206, an embodiment of which is described below, forwards requests received from the cores to bus 113. Receiver module 208 receives responses to the requests via bus 113 and forwards each response to the requesting one of cores 202(1) to 202(N).

In preferred embodiments, the bus is a "packetized" bus, meaning that information is transferred using packets that may vary in size. In the case of data transfer requests, each packet includes a header of fixed size (e.g., 20 bytes) that identifies the requesting device, the target device, and a target address or address range (within the target device) to or from which data is to be transferred. In some embodiments, addresses are defined in a "global" address space that is shared among multiple devices, and the target address range alone suffices to identify a target device; in such instances, a separate identifier of the target device may be omitted.
Similarly, in embodiments where multiple address spaces are defined (e.g., I/O space, memory space, configuration space and so on) and portions of each space are assigned to specific target devices, the target device may be identified by specifying a target address space and a target address range within that space. The target address range can be specified using starting and ending addresses, or a starting address and size, or the like.

The header advantageously also includes other information, such as the type of operation (e.g., read or write) that is to be performed, packet priority, a packet serial number or other unique identifier (referred to herein as a "tag") provided by the requesting device, and so on. In the case of a write operation, a request packet advantageously also has a "payload" portion that includes the data to be written. In the case of a read operation, the payload portion of the request packet is advantageously omitted. Numerous packet formats are known in the art, and a detailed description is omitted as not being critical to understanding the present invention. In one embodiment, the packets conform to the PCI-E protocol.

Where the packet requested a read operation, the target device advantageously returns a response packet whose payload portion includes the requested data. The header of the response packet identifies the requesting device and includes the tag from the corresponding request packet to facilitate identification of the data. In the case of a write operation, a response packet might not be sent.

In some embodiments, the target device of each read request or write request packet returns an acknowledgement ("Ack") to the device that sent the request. An Ack, which is separate from any data transfer, may be a small packet that simply indicates successful receipt of the request packet, e.g., by returning the tag associated with the request packet. In the case of a read request, the target device would return an Ack upon receipt of the request packet and (after an applicable read latency period) the requested data in a separate packet. The requesting device would then send an Ack back to the target device to indicate receipt of the data packet. In one embodiment, Acks also conform to the PCI-E protocol.

Transmitter module 206 advantageously provides to receiver 208 the unique tag for each request packet sent and also provides receiver 208 information about which core 202(1)-202(N) originated the request. Receiver 208 matches request tags in the headers of incoming response packets to the request tags provided by transmitter module 206 and uses that information to direct the response to the originating one of cores 202(1)-202(N).

In some instances, other devices in system 100 (FIG. 1) may request a data transfer to or from graphics processing subsystem 112, e.g., to or from graphics memory 124. In this situation, receiver 208 receives an incoming request packet via bus 113 and forwards the request to an appropriate handler within graphics processing subsystem 112. The handler may be, e.g., one of cores 202(1) to 202(N) or a separate graphics memory interface module (not shown in FIG. 2). The response (data or Ack) is returned from the handler to transmitter module 206, which formats and sends a response packet to the requesting device via bus 113.

Packets are not limited to data transfer requests and responses.
In some embodiments, packets may also be used to deliver various messages (e.g., interrupts, resets, and the like) between system components, in addition to data transfer requests and responses.

In one embodiment, bus 113 is a PCI-E bus, with separate physical paths 113a, 113b for sending packets and receiving packets, respectively, as shown in FIG. 2. It will be appreciated that other packetized buses could be substituted, with or without separate sending and receiving paths, and the present invention is not limited to PCI-E.

In accordance with an embodiment of the present invention, when cores 202(1) to 202(N) generate data transfer requests, transmitter unit 206 can combine multiple requests into a single packet to be transmitted on bus 113. Packets are advantageously combined if they reference contiguous address ranges in the same device and specify the same type of operation (e.g., read or write).

FIG. 3 illustrates an operating principle for combining packets. Packet 302 has a header portion 304 and a payload portion 306 (which may be empty). Header portion 304 specifies a device (Dev1), an address range [A0, A1), and an operation (Read). Herein, address ranges are specified in a linear address space using half-open intervals [Aa, Ab) where a < b, denoting that the first address in the range is Aa and the last address is the largest valid address value that is less than Ab. Address indices are ordered such that if a < b, address Aa is lower than address Ab in the linear address space; thus, it is to be understood that A0 < A1 < A2 and so on. The spacing between addresses is arbitrary and may be varied; for instance, A0 and A1 might be 64 addresses apart while A1 and A2 are 32 addresses apart. Those of ordinary skill in the art with access to the present teachings will be able to adapt the embodiments described herein to other address spaces.

Similarly, packet 308 has a header portion 310 and a payload portion 312 (which may also be empty). Header portion 310 specifies the same device (Dev1) and an address range [A1, A2). Packets 302 and 308 reference contiguous address ranges in the same device and specify the same type of operation. In accordance with an embodiment of the present invention, these packets can be combined into a new packet 314. Header portion 316 of packet 314 specifies the device (Dev1), the combined address range [A0, A2), and the operation (Read). Payload portion 318 of packet 314 contains the concatenated payloads of packets 302 and 308.

In some embodiments, packet headers have the same size regardless of payload size. For instance, header 316 is the same size as either one of headers 304 and 310. Sending combined packet 314 rather than individual packets 302 and 308 reduces the bandwidth used by the size of one header. This can result in a substantial savings. For instance, in one embodiment using PCI-E, each packet header is 20 bytes, and the payload might be 16, 32, 64 or 128 bytes. If two 64-byte payloads are merged into one 128-byte payload, the effective bandwidth is increased by about 13.5%. This is a significant efficiency gain in situations where a bus device is bandwidth limited, as is often the case for GPUs.
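To make the combining rule of FIG. 3 and the arithmetic behind the 13.5% figure concrete, here is a minimal Python sketch. The Request shape and the function names are hypothetical illustrations for this description, not part of the claimed hardware.

# A minimal sketch of the FIG. 3 combining rule. Requests merge when they
# name the same target device, the same operation, and contiguous half-open
# address ranges. The Request shape is a hypothetical stand-in.
from dataclasses import dataclass

@dataclass
class Request:
    device: str
    start: int          # first address of the half-open range [start, end)
    end: int
    op: str             # "read" or "write"
    payload: bytes = b""

def mergeable(a: Request, b: Request) -> bool:
    return a.device == b.device and a.op == b.op and a.end == b.start

def combine(a: Request, b: Request) -> Request:
    # [A0, A1) + [A1, A2) -> [A0, A2); payloads concatenate in address
    # order, as payload portion 318 of packet 314 does.
    assert mergeable(a, b)
    return Request(a.device, a.start, b.end, a.op, a.payload + b.payload)

# Header-overhead arithmetic for the example above: with a 20-byte header,
# two 64-byte writes cost 2 * (20 + 64) = 168 bytes on the wire; merged,
# they cost 20 + 128 = 148 bytes. 168 / 148 - 1 is about 0.135, i.e. the
# roughly 13.5% effective-bandwidth gain cited in the text.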
More generally, as long as the combined packet 314 is smaller than the two packets 302, 308, combining the packets results in some reduction in bus bandwidth usage and, particularly in instances where the bus device is bandwidth limited, some improvement in performance. Further, it should be noted that where the data transfer request is a read request, combining packets can reduce header overhead in both directions, as the response to each request packet would be sent as a separate packet. For instance, in FIG. 3, if the target device received packets 302 and 308 via a PCI-E bus, it would generate two response packets to deliver the requested data, but if the target device received packet 314, it would generate only one response packet. In addition, in embodiments using PCI-E or other protocols in which the target device sends an Ack for each packet received, combining packets can reduce the number of Acks that need to be sent, further reducing overhead on the bus.

In some embodiments, in instances where transmitter module 206 is transmitting response packets, combining packets might not be appropriate. For instance, PCI-E requires that a target device return at least one packet for every received request packet; accordingly, where a PCI-E device is acting as a target device, transmitter 206 would not combine packets generated in response to a request. Those of ordinary skill in the art will appreciate that transmitter module 206 can be configured to distinguish request packets from other types of packets and to perform combining operations only for request packets.

In instances where transmitter module 206 is transmitting request packets, any requests and any number of requests can be combined into a single packet as long as the target device is able to respond to the request. In some embodiments, requests from different cores 202 can be combined by transmitter 206. Where the request is a read request, transmitter 206 advantageously provides to receiver module 208 information indicating which core 202 requested which portion of the data in the combined request; given such information, receiver module 208 can direct the correct portion of the returned data to the correct client.

FIG. 4 is a block diagram of transmitter module 206 according to an embodiment of the present invention. Transmitter module 206 includes an input arbiter 402, a write queue 404 for temporarily storing write requests, a read queue 406 for temporarily storing read requests, a write combiner 408, a read combiner 412, an output arbiter 416, and a driver circuit 418 coupled to outgoing data path 113a of bus 113.

Input arbiter 402 receives data transfer requests, including read and write requests, from cores 202(1) to 202(N). In some embodiments, the cores send requests to transmitter module 206 in the packet format specified by the bus protocol; in other embodiments, the requests are sent using a different data format and are converted to the appropriate packet format within transmitter module 206. Input arbiter 402 directs write requests to write queue 404 and read requests to read queue 406. Since multiple cores may make requests at the same time, input arbiter 402 advantageously includes control logic for arbitrating among simultaneous requests. Conventional arbitration or scheduling rules such as round-robin, priority-based arbitration, or the like may be used, and a particular arbitration scheme is not critical to the present invention.
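As a structural sketch of this data path (only a sketch: the names are hypothetical, and the real module also handles tags, inter-queue ordering, output arbitration, and the bus driver), the routing role of input arbiter 402 might be outlined as follows.

# A minimal structural sketch of the FIG. 4 data path: requests are steered
# into a read or write queue, and a combiner per queue decides each cycle
# whether to emit a packet. All names are hypothetical placeholders.
from collections import deque

class TransmitterSketch:
    def __init__(self, read_combiner, write_combiner):
        self.read_queue = deque()
        self.write_queue = deque()
        self.combiners = [(self.read_queue, read_combiner),
                          (self.write_queue, write_combiner)]

    def accept(self, request):
        """Input-arbiter role: route a request by its operation type."""
        if request.op == "read":
            self.read_queue.append(request)
        else:
            self.write_queue.append(request)

    def cycle(self):
        """Ask each combiner whether to form a packet this cycle; an
        output arbiter would then select among the packets returned."""
        packets = []
        for queue, combiner in self.combiners:
            packet = combiner(queue)  # returns a packet or None
            if packet is not None:
                packets.append(packet)
        return packets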
In one embodiment, input arbiter 402 forwards up to one request per clock cycle to each of read queue 406 and write queue 404. Read queue 406 and write queue 404 may be implemented using conventional techniques for queuing requests in the order received. As described below, in some embodiments requests may be removed out-of-order from queues 406 and 404.

Write combiner 408 includes merging logic 420, a timer 422 and send control logic 424. Merging logic 420 examines new requests as they are received to determine whether they can be merged with a current request, e.g., the oldest request in write queue 404 (plus any requests that have already been merged with the oldest request). A specific implementation of merging logic 420 is described below. Send control logic 424 uses status information provided by merging logic 420 to determine whether a packet is to be sent. Send control logic 424 may be configured to detect a variety of conditions under which a packet is to be sent; examples are described below. When a packet is to be sent, send control logic 424 generates the packet from the current request(s) identified by merging logic 420. In some embodiments, send control logic 424 also reformats the request(s) into the packet format specified by the bus protocol. Once a packet is formed, send control logic 424 forwards the packet to output arbiter 416 and removes all requests that were included in the packet from write queue 404. Timer 422 is advantageously used by send control logic 424 to prevent requests from waiting too long in write queue 404, as described below.

Similarly, read combiner 412 includes merging logic 426, a timer 428 and send control logic 430. Merging logic 426 examines new requests as they are received to determine whether they can be merged with a current request, e.g., the oldest request in read queue 406 (plus any requests that have already been merged with the oldest request). A specific implementation of merging logic 426 is described below. Send control logic 430 uses status information provided by merging logic 426 to determine whether a packet is to be sent. Send control logic 430 may be configured to detect a variety of conditions under which a packet is to be sent; examples are described below. When a packet is to be sent, send control logic 430 generates the packet from the current request(s) identified by merging logic 426. In some embodiments, send control logic 430 reformats the request(s) into the packet format specified by the bus protocol. Once a packet is formed, send control logic 430 forwards the packet to output arbiter 416 and removes all requests that were included in the packet from read queue 406. Timer 428 is advantageously used by send control logic 430 to prevent requests from waiting too long in read queue 406, as described below.

Write combiner 408 and read combiner 412 advantageously communicate to each other information about the target devices and target addresses of packets in their respective queues, e.g., so that order can be preserved between a read request and a write request with the same target address. For instance, suppose that a core sends (in order) a first request to read data from address range [A0, A1), a second request to write data to address range [A1, A2), and a third request to read data from address range [A1, A2). Before merging the two read requests, read combiner 412 detects the existence of the intervening write request based on information communicated from write combiner 408.
In one embodiment, read combiner 412 merges the two read requests and holds the merged request (i.e., does not deliver it to output arbiter 416) until after the intervening write request has been sent. Alternatively, read combiner 412 might send the first request without merging the second request; this option may be preferable, e.g., if sending of the write request is delayed.

Output arbiter 416 receives packets carrying write requests from write combiner 408 and packets carrying read requests from read combiner 412. In one embodiment, each of write combiner 408 and read combiner 412 delivers at most one packet to arbiter 416 on each clock cycle. Where only one of the combiners provides a packet, output arbiter 416 forwards the packet to driver 418. Where both combiners provide packets, output arbiter 416 may employ conventional arbitration logic to select between them (e.g., least recently serviced or priority-based arbitration algorithms). Output arbiter 416 may include FIFOs or other buffer circuits to temporarily store packets until they are selected for transmission.

Driver circuit 418 receives the selected packet from output arbiter 416 and drives the packet onto bus lines 113a in accordance with the bus protocol. Driver circuit 418 may be of generally conventional design. In some embodiments, the bus includes multiple signal lines 113a, and driver circuit 418 drives at least some of the data bits comprising the packet in parallel onto these lines.

It will be appreciated that the transmitter module described herein is illustrative and that variations and modifications are possible. The read queue and write queue may be of any size and may be implemented in physically or logically separate circuits. The read combiner and write combiner may have identical or different configurations, and in some embodiments, combining requests might be performed only for read requests or only for write requests. In embodiments where the bus protocol does not include a dedicated path for transmitting data, the output arbiter or another component of the transmitter module may be configured to obtain control of the bus in a manner consistent with the applicable bus protocol prior to transmitting a packet, as is known in the art. Further, while the transmitter module is described herein with reference to particular functional blocks, it is to be understood that the blocks are defined for convenience of description and need not correspond to physically distinct components.

Merging logic blocks 420 and 426 will now be described. FIG. 5 is a flow diagram showing a control logic process 500 that may be implemented in merging logic block 426 of read combiner 412 and/or merging logic block 420 of write combiner 408 according to an embodiment of the present invention. Process 500 may be repeated, starting from "cycle" step 502. In some embodiments, execution of process 500 is synchronized to a clock, which may be further synchronized to operation of bus 106.

In this embodiment, an "active window" is defined in read queue 406 (or write queue 404) to identify requests that are currently candidates for merging. The active window has a predetermined maximum size MW that is advantageously measured in number of requests. For example, MW might be 2, 3, 4 or any size up to the total size of read queue 406. At any given time, the MW oldest requests in read queue 406 are in the active window.

Initially, read queue 406 is empty.
As described above, during operation, input arbiter 402 adds zero or more requests to read queue 406 on each cycle. At step 504, merging logic 426 checks read queue 406 to determine whether a new request has been received. If so, then at step 506 merging logic 426 determines whether timer 428 is already running and, if not, starts timer 428 at step 508. In one embodiment, timer 428 can be implemented using a counter that can be incremented or reset on each clock cycle, and starting the timer includes resetting the counter so that it can be incremented as indicated below. If, at step 506, timer 428 is already running, then timer 428 is incremented at step 510.

At step 512, merging logic 426 determines whether the new request can be merged with a current request. The "current request" is advantageously defined by reference to an address range. When the first request is received, the current address range is initialized to the address range of that request. Thereafter, each time a packet is sent, the current address range is reinitialized to the address range of the oldest request in the queue. Until such time as a packet is sent, merging logic 426 can expand the current address range by merging address ranges of subsequent requests into the current address range if those ranges happen to be contiguous with the current address range.

Accordingly, step 512 includes comparing the address range of the new request to the current address range to determine whether merging is possible. In some embodiments, "mergeable" address ranges advantageously include address ranges that represent contiguous blocks in the same address space, so that merging enlarges the address range. In addition, mergeable address ranges may also include address ranges that overlap partially or completely. Thus, a request for address range [A0, A2) is mergeable with a request for address range [A1, A2) or with a second request for address range [A0, A2). Where address ranges of different requests overlap, transmitter module 206 (FIG. 2) advantageously identifies to receiver module 208 the specific range requested by each one of cores 202, and receiver module 208 delivers the appropriate data to each requesting core 202.

In some embodiments, merging may change either the starting or ending addresses. In other embodiments, the starting address is not changed by merging, and only requests whose target addresses correspond to larger address values are considered mergeable at step 512. In still other embodiments, the ending address is not changed by merging, and only requests whose target addresses correspond to smaller address values are considered mergeable at step 512. In addition, merging may also be limited to requests that target the same device. In some embodiments, requests from different cores might or might not be considered mergeable.

If the new request can be merged, then at step 514, the current address range is updated to reflect the merged request. The current address range may be represented, e.g., by a starting address (e.g., the lowest address value to be accessed) and a range size or by a starting address and ending address (e.g., the lowest and highest address values to be accessed); updating the current address range may include changing the starting address, ending address and/or range size to reflect the concatenated address range of the original current request and the new request.

At step 516, the new request is marked as merged. In some embodiments, requests in the queue may be modified to reflect merges.
In other embodiments, the requests are not modified, and flags or similar data structures may be used to identify which requests are being merged. More specifically, in one embodiment, a one-bit register corresponding to each location in the window is provided. The register value may be initialized to a logic low value to indicate that the request has not been merged and set to a logic high value at step 516 to indicate that the request has been merged.

At step 518, merging logic 426 checks for additional merges that may be possible after merging the new request. For example, if a first request has address range [A0, A1) and a second request has address range [A2, A3), no merge is possible. But if a third request with address range [A1, A2) is received and merged with the first request to create a current address range [A0, A2), it becomes possible to merge the second request as well to create a current address range [A0, A3). Step 518 advantageously includes detecting such situations and performing additional merging.

FIG. 6 is a flow diagram of processing that may be performed at step 518 to merge additional requests. Beginning at step 606, process 600 traverses the active window to identify any unmerged requests therein whose address ranges are contiguous with the current merged address range. At step 608 a candidate request is selected. The candidate request is advantageously the oldest request in the active window that has not already been marked as merged. At step 610, the target address of the candidate request is determined.

At step 612, it is determined whether the target address range of the candidate request is mergeable with the current merged address range. The same criteria for merging used at step 512 are advantageously used at step 612. If the address ranges are mergeable, then the merged address range is updated at step 614, and the candidate request is marked as merged at step 616. These steps may be generally similar to steps 514 and 516 described above.

At step 618, regardless of whether the candidate request was merged, it is determined whether any unmerged requests in the window have not been tested. If so, then the next such request is selected as a new candidate request at step 608 and tested as described above.

At step 620, it is determined whether testing should continue. For instance, if any requests were merged during traversal of the window, it is possible that additional unmerged requests might now be mergeable with the expanded address range. If testing should continue, process 600 returns to step 606 to begin a new traversal of the window. It will be appreciated that the specific steps shown in FIG. 6 are not required.

Checking for additional merges at step 518 can reduce the dependence of merging on the order in which requests arrive. For example, suppose that the oldest request in the active window has a target address range [A0, A1), the next oldest request has an address range [A2, A3), and the third oldest request has an address range [A1, A2). At steps 512-516 of process 500 the first request would be merged with the third request to create a merged address range [A0, A2). At step 518, the second request would be merged with the merged request to create a merged address range [A0, A3). It should be noted that step 518 is optional, and in some embodiments, simplifying the merging logic may be a higher priority than the increased likelihood of merging requests that step 518 provides.
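The fixed-point behavior of steps 606-620 can be summarized in a few lines of Python. The sketch below assumes the simple contiguity criterion (the same-device and same-operation tests are elided for brevity) and half-open integer ranges; the function name and data shapes are hypothetical.

# A sketch of the step-518 re-traversal (process 600): after a merge, sweep
# the active window again for unmerged requests that have become contiguous
# with the expanded range, repeating until a sweep merges nothing. Ranges
# are half-open (start, end) tuples; names are hypothetical.
def merge_window(current, window):
    merged = set()
    changed = True
    while changed:                     # traverse until a fixed point (step 620)
        changed = False
        for i, (start, end) in enumerate(window):
            if i in merged:
                continue
            if start == current[1]:    # extends the current range upward
                current = (current[0], end)
            elif end == current[0]:    # extends the current range downward
                current = (start, current[1])
            else:
                continue
            merged.add(i)
            changed = True
    return current, merged

# With current = [A0, A1) and a window holding [A2, A3) then [A1, A2), the
# first pass merges [A1, A2) and the second pass picks up [A2, A3), giving
# [A0, A3) -- the order-independence noted above.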
Referring again to FIG. 5, at step 520, merging logic 426 provides updated status information to send control logic 430. The status information may include any information that is usable by send control logic 430 in determining whether to send a packet during the current cycle. Examples of status information include the total number of requests in the window, the number of requests merged into the current request, the size of the current address range, and so on.

In the case where no new request is received at step 504, merging logic 426 increments timer 428 at step 524. At step 526, merging logic 426 determines whether a packet was sent on the last cycle; step 526 may include detecting a signal generated by send control logic 430 as described below. If a packet was sent, the packet would have included the oldest request in the queue, so the current address range is no longer accurate. At step 528, the current address range is updated to reflect the address range of the request that is now the oldest in read queue 406. Merging logic 426 advantageously checks for other requests in queue 406 that can be merged with the now-oldest request (step 518).

Send control logic blocks 424 and 430 will now be described. FIG. 7 is a flow diagram showing a control logic process 700 that may be implemented in send control logic block 430 of read combiner 412 or send control logic block 424 of write combiner 408 according to an embodiment of the present invention. Process 700 may be repeated, starting from "Cycle" step 702.

At step 704 send control logic 430 (or send control logic 424) determines whether a send condition has occurred. As used herein, a "send condition" refers generally to any detectable condition whose occurrence indicates that a packet should be sent from read queue 406 (or write queue 404). Send conditions are advantageously defined and tuned to a particular system configuration and may take into account such considerations as the size of read queue 406, the maximum latency to be introduced by transmitter module 206, actual levels of bus activity, and so on. A variety of send conditions may be used, and an embodiment of the invention may test any number and combination of send conditions at step 704.

For example, in some embodiments, a send condition occurs when timer 428 (or timer 422) expires, e.g., when the counter implementing timer 428 has reached a predetermined maximum value. The maximum value is advantageously defined such that expiration of timer 428 occurs when waiting longer would introduce latency that is not expected to be offset by the efficiency gain from merging requests. In some embodiments, the maximum value may be a fixed value, which can be a configurable parameter of the device. In other embodiments, the maximum value can be dynamically tuned based on operating conditions, e.g., by monitoring bus traffic and selecting a lower maximum value to reduce latency when the bus is relatively lightly loaded and a higher maximum value to increase the likelihood of combining packets when the bus is heavily loaded.

In some embodiments, send conditions are based on properties of the request queue and/or the bus protocol. For example, as described above, in some embodiments, an active window is defined in read queue 406 to identify requests that are currently candidates for merging. The active window has a predetermined maximum size MW that is advantageously measured in number of requests. For example, MW might be 2, 3, 4 or any size up to the total size of read queue 406.
At any given time, the MW oldest requests in read queue 406 are in the active window. (If fewer than MW requests are present in read queue 406, then all requests would be in the active window.)

Once the active window is full, i.e., once read queue 406 includes MW requests, then continuing to wait for requests is no longer worthwhile (even if the timer has not expired) since no new requests can enter the active window until a packet has been sent. Thus, a full active window is advantageously detected as a send condition. For instance, if MW = 2, once two requests have been received, the first request should be sent. If the target address range of the second request is contiguous with the target address range of the first request, the requests would be merged as described above. Otherwise, the second request remains in the queue until a send condition occurs in a subsequent cycle.

In another embodiment, a send condition is based on a maximum number MR of requests that can be merged; MR can be any size up to the window size MW. If the first (oldest) request in queue 406 can be merged with MR - 1 other requests, it is advantageous not to wait longer before forming and sending the packet. For example, in one such embodiment, MW = 3 and MR = 2; if the oldest request can be merged with the second oldest request, the send condition on MR would occur, and a packet would be sent regardless of whether a third request had been received. It will be appreciated that where MR = MW and a send condition based on MW is used, a send condition based on MR would be redundant in the sense that the MR-based send condition would occur only in situations where the MW-based send condition would also occur.

In still another embodiment, a send condition is based on a maximum packet size supported by the bus. Once a packet of the maximum size can be created by merging requests in the active window, no further requests can be merged with the existing requests without creating an unacceptably large packet. For example, suppose that the bus protocol limits the payload of a packet to 128 bytes. If the first (oldest) request in queue 406 has a 128-byte payload, then it cannot be merged into a larger packet, and the packet-size send condition would occur. Similarly, if the first (oldest) request in queue 406 is 64 bytes and can be merged with another 64-byte request to make a packet with a 128-byte payload, then the packet-size send condition would also occur. It is to be understood that the example of a 128-byte maximum is illustrative; different bus protocols may place different limits (or no limits) on packet size.
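Taken together, step 704 amounts to a disjunction of simple tests. The following sketch is a hypothetical composite of the conditions just described; a given embodiment might test any subset, and the 128-byte cap is the illustrative figure from the text, not a fixed protocol limit.

# A hypothetical composite of the send-condition tests described above.
MAX_PAYLOAD = 128  # bytes; the illustrative payload cap from the example

def should_send(timer, timer_max, queue_len, window_size,
                merged_count, max_merge, merged_payload):
    if timer >= timer_max:             # waiting longer is unlikely to pay off
        return True
    if queue_len >= window_size:       # active window full (MW reached)
        return True
    if merged_count >= max_merge:      # merge cap reached (MR reached)
        return True
    if merged_payload >= MAX_PAYLOAD:  # packet cannot grow any further
        return True
    return False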
Referring again to FIG. 7, step 704 may include detecting any combination of the above or other send conditions. If, at step 704, no send condition has occurred, then process 700 returns to step 702 without taking any action. If occurrence of a send condition is detected, a packet is formed at step 706. In one embodiment, send control logic 430 forms the packet by reference to the current address range determined by merging logic 426 described above. Forming a packet may include generating a new header or, in embodiments where the requests are already formatted as packets, modifying information such as the address range in the header of one packet to incorporate information from any other packets being merged with it. Creating a packet may also include concatenating the payloads of the requests (or packets) being merged; payloads are advantageously concatenated based on the order of addresses, which is not necessarily the order of the requests in the window. In some embodiments, payloads may be concatenated as mergeable requests are identified (e.g., at step 514 of process 500 of FIG. 5) rather than waiting until step 706. At step 708, the packet is sent to output arbiter 416 for transmission onto the bus.

At step 710, packet information is forwarded to receiver module 208 (FIG. 2). The packet information advantageously includes the unique packet tag, identification of which one (or ones) of cores 202(1) to 202(N) requested the data, and any other information that may be used by receiver module 208 to determine how to handle any received response to the packet. Such information may be generally conventional in nature, and receiver module 208 is not required to know whether a particular packet was created by merging requests except to the extent that different subsets of the data received in response are to be routed to different requesting cores.

At step 712, send control logic 430 removes all requests that were included in the packet from read queue 406. In one embodiment, the merge register values described above are used to determine which requests were included in the packet.

At step 714, send control logic 430 signals to merging logic 426 that a packet has been sent. In response to this information, merging logic 426 can update the current address range to reflect that a different request is now the oldest request, as described above. At step 716, send control logic 430 resets timer 428. Process 700 then returns to step 702 for the next cycle.

It will be appreciated that the merging and sending processes described herein are illustrative and that variations and modifications are possible. Steps described as sequential may be executed in parallel, order of steps may be varied, and steps may be modified or combined. For instance, the active window may have any size (2 requests or larger) and may be traversed any number of times (once, twice, etc.). In some embodiments, if a maximum packet size or maximum number of merged requests is reached, traversal of the window is discontinued. In some embodiments, certain address ranges might not be allowed; for instance, if the allowed payload sizes are 32, 64 and 128 bytes, a 32-byte request and a 64-byte request would not be merged to create a 96-byte payload. The processes described herein may be modified to address any such constraints.

Additionally, the send conditions may differ from those described above. For instance, the timer described above resets each time a packet is sent, and as a result, the number of cycles that a given request waits in the queue might depend in part on whether other packets are being sent. In one alternative embodiment, a separate timer (e.g., a counter) is used for each request, and a request is sent when its timer expires, regardless of other activity. More generally, any number and combination of send conditions may be tested by the send control logic. In some embodiments, requests from certain cores might be sent without waiting. For example, if one core is designated as high priority, receipt of a request from that core might be treated as a send condition, and the packet former logic might be configured to detect such a request and send it as a packet regardless of whether other packets are waiting in the queue.
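The packet-forming path of steps 706-708 reduces, in the merged case, to building one header for the combined range and concatenating payloads in address order. Here is a hedged sketch, reusing the hypothetical request shape from earlier; the dictionary header is purely illustrative, not the PCI-E wire format.

# A sketch of the step-706/708 packet-forming path for merged requests:
# one header for the combined range, payloads concatenated in address order
# (not necessarily arrival order). Field names are hypothetical and mirror
# the Request sketch shown earlier.
def form_packet(requests):
    ordered = sorted(requests, key=lambda r: r.start)   # address order
    header = {
        "device": ordered[0].device,
        "op": ordered[0].op,
        "start": ordered[0].start,   # combined half-open range [start, end)
        "end": ordered[-1].end,
    }
    payload = b"".join(r.payload for r in ordered)
    return header, payload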
The process described herein may be performed to combine read requests and/or to combine write requests. As noted above, different logic may be used to control combining of read requests and combining of write requests.

To further illustrate the operation of one embodiment of the invention, FIG. 8 is an example of processing for a sequence of read requests that might be received by transmitter module 206 (FIG. 2) and processed in accordance with the processes described above. In FIG. 8, column 802 identifies cycles (numbered 1-8 for convenience). Column 804 shows the address ranges of the requests in the active window during each cycle, after receipt of any new requests. Column 806 indicates which, if any, send condition occurs on each cycle. For purposes of this example, the send conditions are: "timer," which expires if the timer reaches a count of 3; "window full," which occurs when three requests are in the window (MW = 3); and "max merge," which occurs when two mergeable requests are present in the window (MR = 2). Column 808 indicates the address range for a packet (if any) sent during that cycle.

During cycle 1, a first request 811 with target address range [A0, A1) is received and enters the active window. The timer is started at an initial value of zero. No send condition occurs, so no packet is sent. During cycles 2 and 3, no requests are received, and the first request 811 waits in the active window. The timer increments once per cycle.

During cycle 4, a second request 812 with target address range [A2, A3) is received and enters the active window. The timer reaches a count of 3 and expires, so a packet 821 is sent. As described above, the packet corresponds to the oldest request in the window (request 811); since [A0, A1) and [A2, A3) are not contiguous ranges, request 812 is not merged with request 811, and packet 821 has the target address range [A0, A1). Since a packet is sent, the timer is reset.

During cycle 5, a third request 813 with target address range [A5, A6) is received and enters the active window. No send condition occurs, so no packet is sent.

During cycle 6, a fourth request 814 with target address range [A3, A4) is received and enters the window. Since the window is now full, a send condition occurs, and a packet 822 is sent. In this instance, the oldest request is request 812, and request 814 is merged with request 812. Thus, packet 822 has the target address range [A2, A4). Request 813 is not merged and remains in the window.

At cycle 7, a fifth request 815 with target address range [A6, A7) is received and enters the window. Since request 815 can be merged with request 813, a "max merge" send condition occurs, and a packet 823 is sent. Packet 823 has the target address range [A5, A7) due to merging.

At cycle 8, a sixth request 816 with target address range [A7, A8) is received and enters the window. At this point, request 816 is the only request in the window, and no send condition occurs, so no packet is sent. This process can be continued in this manner indefinitely.

It will be appreciated that the sequence of events and send conditions described herein is illustrative and that variations and modifications are possible. Different send conditions may be defined, and different rules may be used to determine whether requests can be merged.

In the example shown in FIG. 8, merging reduces the number of packets sent on the bus from five (the number of requests) to three.
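The FIG. 8 outcome can be reproduced with a toy script; the symbolic addresses A0..A8 are mapped to the integers 0..8, and only the merge decisions are modeled (the timer and window bookkeeping that determine exactly when each packet leaves are omitted).

requests = [(0, 1), (2, 3), (5, 6), (3, 4), (6, 7)]  # requests 811-815

packets = []
for start, end in requests:
    for i, (ps, pe) in enumerate(packets):
        if start == pe or end == ps:  # contiguous with an existing range
            packets[i] = (min(ps, start), max(pe, end))
            break
    else:
        packets.append((start, end))

print(packets)  # [(0, 1), (2, 4), (5, 7)] -- five requests, three packets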
More generally, the bandwidth that can be gained by merging requests depends in part on the particular send conditions and merging rules implemented. It also depends in part on the extent to which the requests generated by the various cores or other clients of the bus interface unit tend to have contiguous target address ranges, which depends on the type of operations being performed by the clients. As noted above, as long as the merged packet is smaller than the combined size of the two or more packets from which the merged packet was created, at least some reduction in bandwidth usage is attained. Further, where multiple requests are merged into one packet, the response will more likely be returned in one packet rather than multiple packets, thereby reducing bandwidth usage on the return path.

In addition, merging of packets can provide other benefits. For instance, in some embodiments, power consumption is reduced by placing driver circuit 418 (FIG. 4) in a low power state when it is not transmitting packets; accordingly, reducing the number of packets transmitted by driver circuit 418 can reduce power consumption. As another example, in embodiments where bus interface unit 204 (FIG. 2) uses tags to keep track of request packets for which a response has not been received, merging requests into a smaller number of packets can reduce the number of tags that need to be tracked.

While the invention has been described with respect to specific embodiments, one skilled in the art will recognize that numerous modifications are possible. For instance, combining requests is not limited to the PCI-E bus protocol; the techniques described herein may be adapted to any bus via which devices transmit read and/or write requests using packets. The present invention can also be applied to communication between different computer systems (including server and client systems, handheld or mobile devices that send and/or receive data, and so on) where the systems communicate by exchanging packets over a network.

In some embodiments, merging can be enabled or disabled depending on the level of bus activity. For example, send control logic blocks 424 and/or 430 of FIG. 4 might be configured such that when bus activity is below a threshold level, a send condition occurs each time a request is received. This reduces transmitter latency that can be introduced by waiting to determine whether a first request can be merged with a subsequent request. Where bus activity is above the threshold level, other send conditions may be tested, e.g., as described above. The threshold bus activity level may advantageously be set such that merging is enabled when the bus is busy enough that some delay in transmitting requests onto the bus and/or receiving responses to requests via the bus would be expected.

Further, while the embodiments described above relate to a GPU, the present invention is not limited to GPUs or to integrated processors. Any device with a bus interface that manages requests from multiple clients can include suitable components to merge requests. For instance, PCI-E switch 116 of FIG. 1 might also be configured to merge requests from the devices connected thereto.

Thus, although the invention has been described with respect to specific embodiments, it will be appreciated that the invention is intended to cover all modifications and equivalents within the scope of the following claims. |
Methods, apparatus, systems and articles of manufacture are disclosed to initialize enclaves on target processors. An example apparatus includes an image file retriever to retrieve configuration parameters associated with an enclave file, and an address space manager to calculate a minimum virtual address space value for an enclave image layout based on the configuration parameters, and generate an optimized enclave image layout to allow enclave image execution on unknown target processor types by multiplying the minimum address space value with a virtual address factor to determine an optimized virtual address space value for the optimized enclave image layout. |
What Is Claimed Is:

1. An apparatus to generate a processor agnostic enclave image layout, comprising: an image file retriever to retrieve configuration parameters associated with an enclave file; and an address space manager to: calculate a minimum virtual address space value for an enclave image layout based on the configuration parameters; and generate an optimized enclave image layout to allow enclave image execution on unknown target processor types by multiplying the minimum address space value with a virtual address factor to determine an optimized virtual address space value for the optimized enclave image layout.

2. An apparatus as defined in claim 1, wherein the image file retriever is to retrieve a number of candidate threads associated with the enclave file.

3. An apparatus as defined in claim 2, wherein the configuration parameters include a static heap size associated with the number of candidate threads associated with the enclave file.

4. An apparatus as defined in claim 3, wherein the address space manager is to calculate the minimum virtual address space value based on the number of candidate threads and the static heap size.

5. An apparatus as defined in claim 1, wherein the address space manager is to generate the optimized enclave image layout with static enclave components and the optimized virtual address space value prior to an instantiation phase to facilitate a measurement of the optimized enclave image layout.

6. An apparatus as defined in claim 5, wherein the measurement of the optimized enclave image layout is the same for all of the unknown processor types.

7. An apparatus as defined in claim 1, further including a software guard extension (SGX) identifier to determine a type of a respective one of the unknown processor types.

8. An apparatus as defined in claim 7, wherein the type of the respective one of the unknown processor types includes at least one of a static SGX processor or a dynamic SGX processor.

9. An apparatus as defined in claim 7, wherein the SGX identifier is to associate the optimized enclave image layout with a target enclave instruction set based on the determined type of the respective one of the unknown processor types.

10. A method to generate a processor agnostic enclave image layout, comprising: retrieving configuration parameters associated with an enclave file; calculating a minimum virtual address space value for an enclave image layout based on the configuration parameters; and generating an optimized enclave image layout to allow enclave image execution on unknown target processor types by multiplying the minimum address space value with a virtual address factor to determine an optimized virtual address space value for the optimized enclave image layout.

11. A method as defined in claim 10, further including retrieving a number of candidate threads associated with the enclave file.

12. A method as defined in claim 11, wherein the configuration parameters include a static heap size associated with the number of candidate threads associated with the enclave file.

13. A method as defined in claim 12, further including calculating the minimum virtual address space value based on the number of candidate threads and the static heap size.

14. A method as defined in claim 10, further including generating the optimized enclave image layout with static enclave components and the optimized virtual address space value prior to an instantiation phase to facilitate a measurement of the optimized enclave image layout.
15. A method as defined in claim 14, wherein the measurement of the optimized enclave image layout is the same for all of the unknown processor types.

16. A method as defined in claim 10, further including determining a type of a respective one of the unknown processor types.

17. A method as defined in claim 16, wherein the type of the respective one of the unknown processor types includes at least one of a static SGX processor or a dynamic SGX processor.

18. A method as defined in claim 16, further including associating the optimized enclave image layout with a target enclave instruction set based on the determined type of the respective one of the unknown processor types.

19. A tangible computer-readable storage disk or storage device comprising instructions which, when executed, cause a processor to at least: retrieve configuration parameters associated with an enclave file; calculate a minimum virtual address space value for an enclave image layout based on the configuration parameters; and generate an optimized enclave image layout to allow enclave image execution on unknown target processor types by multiplying the minimum address space value with a virtual address factor to determine an optimized virtual address space value for the optimized enclave image layout.

20. A tangible computer-readable storage disk or storage device as defined in claim 19, wherein the instructions, when executed, further cause the processor to retrieve a number of candidate threads associated with the enclave file.

21. A tangible computer-readable storage disk or storage device as defined in claim 20, wherein the instructions, when executed, further cause the processor to identify a static heap size associated with the number of candidate threads associated with the enclave file.

22. A tangible computer-readable storage disk or storage device as defined in claim 21, wherein the instructions, when executed, further cause the processor to calculate the minimum virtual address space value based on the number of candidate threads and the static heap size.

23. A tangible computer-readable storage disk or storage device as defined in claim 19, wherein the instructions, when executed, further cause the processor to generate the optimized enclave image layout with static enclave components and the optimized virtual address space value prior to an instantiation phase to facilitate a measurement of the optimized enclave image layout.

24. A tangible computer-readable storage disk or storage device as defined in claim 23, wherein the instructions, when executed, further cause the processor to create the measurement of the optimized enclave image layout as the same for all of the unknown processor types.

25. A tangible computer-readable storage disk or storage device as defined in claim 19, wherein the instructions, when executed, further cause the processor to determine a type of a respective one of the unknown processor types. |
METHODS AND APPARATUS TO INITIALIZE ENCLAVES ON TARGET PROCESSORS

FIELD OF THE DISCLOSURE

[0001] This disclosure relates generally to computer platform application security, and, more particularly, to methods and apparatus to initialize enclaves on target processors.

BACKGROUND

[0002] In recent years, security enhancements to processors have emerged to allow applications to create protected regions of address space, which are referred to herein as enclaves. Particular processor instruction sets (e.g., processor microcode) allow implementation of enclaves that prohibit access to enclave memory areas by code outside of the enclave. Example processor instruction sets to facilitate application enclaves of a platform are known as SGX (Software Guard Extension) instruction sets. Some example SGX instruction sets permit enclave dynamic memory management that allows adding or removing cache pages to an enclave as needed during runtime.

BRIEF DESCRIPTION OF THE DRAWINGS

[0003] FIG. 1A is an example enclave memory layout conforming to requirements of a static software guard extension processor.

[0004] FIG. 1B is an example enclave memory layout conforming to requirements of a dynamic software guard extension processor.

[0005] FIG. 2 is an example platform environment to initialize enclaves on target processors.

[0006] FIG. 3 is an example software guard extension software development kit engine to initialize enclaves on target processors in a manner consistent with this disclosure.

[0007] FIG. 4 illustrates example enclave memory layouts generated by the example platform environment of FIG. 2 and the example software guard extension software development kit engine of FIG. 3.

[0008] FIGS. 5-7 are flowcharts representative of example machine readable instructions that may be executed to implement the example platform environment of FIG. 2 and the example software guard extension software development kit engine of FIG. 3.

[0009] FIG. 8 is a block diagram of an example processor platform structured to execute the example machine readable instructions of FIGS. 5-7 to implement the example platform environment of FIG. 2, the example software guard extension software development kit engine of FIG. 3, and/or the example enclave memory layouts of FIG. 4.

DETAILED DESCRIPTION

[0010] Early generation SGX (Software Guard Extension) instruction sets permitted the generation of secure enclaves (referred to herein as "enclaves") that establish a secure computing environment for software applications and/or data. When the enclave (e.g., an enclave file) is initialized on a computing platform (e.g., an enclave image), it operates in a portion of virtual memory (trusted memory) that is inaccessible by external applications and/or the operating system (OS). Functions performed by the enclave image may be invoked by an untrusted application (e.g., the OS), and one or more trusted functions of the enclave image may be invoked so that the trusted function can access enclave data in plaintext, while functions external to the enclave image are denied such plaintext access.

[0011] Generally speaking, SGX enclaves include a build-time phase, a load-time phase, and a runtime phase. During the example build-time phase, source code (e.g., developed by an independent software vendor (ISV)) is supplied to a compiler and linker, which may utilize one or more signing tools of an SGX software development kit (SDK) to generate an enclave executable file.
Additionally, the example source code includes configuration information that has one or more requirements for the enclave, such as component requirements (e.g., a size of heaps, a size of stacks, a number of threads, etc.). The one or more signing tools of the SGX SDK determine a virtual size of the enclave and addresses of the one or more components. Further, during the build-time compiling/linking, address information and/or size information (of page types) are written to a corresponding executable file to be used during the load-time phase. In some examples, address information, size information and/or page information generated by the one or more signing tools of the SGX SDK are referred to as metadata.

[0012] During the example load-time phase, the enclave executable file is loaded into an enclave page cache (EPC), which results in an enclave image. As described in further detail below, the example EPC is a secure memory structure that can be accessed using instructions from an SGX instruction set associated with a processor developed to handle enclaves. During this load-time phase, corresponding code and/or data of the enclave file is loaded along with one or more static components identified by the metadata created by the one or more signing tools of the SGX SDK. The loading activity may include one or more SGX instructions such as ECREATE, EADD (adds pages needed by memory structures specified by the metadata) and/or EEXTEND. Additionally, each enclave file includes an associated digital signature to be verified against a subsequent measurement. The measurement occurs in response to a particular SGX instruction named EINIT. After the EINIT instruction is invoked, the other instructions listed above can no longer be used, and a measurement of the enclave occurs that can be compared to the digital signature, thereby allowing a (remote) party to authenticate the enclave with an expected measurement value. Remote parties may include the operating system (OS) and/or applications and/or services running on remote platform(s).

[0013] In the event of a successful measurement, the runtime phase may proceed, in which the enclave initialization code (e.g., trusted code running inside the enclave) determines whether static or dynamic heap/stacks are to be used. Prior to the EINIT instruction, the enclave cannot run. However, untrusted runtime libraries and/or drivers must collaborate on page allocation for any interaction to preserve the secure nature of enclave operation(s).

[0014] During the runtime phase, the enclave image operates in trusted memory and includes enclave data and/or enclave code having any number of threads (e.g., a thread control structure (TCS)). As described above, in the event a remote party (e.g., an application and/or OS) wishes to access one or more services of the enclave image, an integrity verification (measurement) occurs prior to the runtime phase to ensure that the enclave image has not been compromised. The measurement includes parameters of the enclave image such as, but not limited to, a size of virtual memory required for operation (e.g., an enclave virtual size), a size of a static heap, a number of threads, a size of each thread (including thread stack sizes, thread local storage, thread data, etc.), or a size of a dynamic heap (e.g., for enclave images that support dynamic memory management).
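As a loose illustration only, the measurement parameters enumerated above can be pictured as a single record over which a digest is computed. The names below are hypothetical, and the real measurement is computed by the processor (e.g., extended page by page via EEXTEND and finalized at EINIT), not by application code.

import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class MeasurementInputs:
    virtual_size: int  # enclave virtual size, in bytes
    static_heap: int   # static heap size, in bytes
    thread_count: int  # number of threads (TCS)
    thread_size: int   # per-thread size (stack, TLS, TD, SSA)

    def digest(self) -> str:
        # Toy stand-in for the processor-computed measurement.
        return hashlib.sha256(repr(self).encode()).hexdigest()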
While an independent software vendor (ISV) develops and/or otherwise writes code for an enclave file, the measurement occurs after the enclave file is loaded into virtual memory as an enclave image. An enclave in memory includes at least a memory-mapped executable file developed by the ISV, one or more stacks (e.g., multiple stacks for multi-threaded applications), and a heap. The corresponding enclave image in memory includes a loaded version of these components in which those components are aligned and padded. During a loading phase of the enclave file, virtual memory is allocated (e.g., pages), and static components are allocated (e.g., a static heap, static thread(s)). In the event that the SGX instruction set of the processor includes dynamic memory management capabilities, dynamic components may be used (e.g., a dynamic heap, dynamic thread(s), etc.). As described in further detail below, dynamic components do not exist until runtime.

[0015] In the illustrated example of FIG. 1A, a first enclave memory layout 100 includes an enclave image 102, a static heap 104, a first static thread 106, and a second static thread 108. In particular, the first enclave memory layout 100 conforms to requirements of a first SGX instruction set of a corresponding processor that is capable of handling static components only (e.g., a "static SGX processor"), in which the particular arrangement of components within the first enclave memory layout 100 cannot change during runtime. In some examples, the static SGX processor refers to relatively early generations of SGX processors named "SGX 1.0". As discussed in further detail below, other (e.g., more recent generations) SGX instruction sets (e.g., microcode of more recent SGX processors) are capable of handling both static components, as well as dynamic components that may be invoked and/or otherwise change during runtime. In some examples, relatively recent generations of SGX processors (e.g., "dynamic SGX processors") are named "SGX 2.0". The example first enclave memory layout 100 is allocated to virtual memory having a first virtual size 110 represented by a dashed line. As such, when the example first enclave memory layout 100 is loaded, a corresponding measurement will be based on the example first virtual size 110 of allocated virtual memory, the example enclave image 102, the example static heap 104, the example first static thread 106, and the example second static thread 108.

[0016] In the illustrated example of FIG. 1B, a second enclave memory layout 130 includes the enclave image 102, the static heap 104, the first static thread 106, and the second static thread 108. However, because the example second enclave memory layout 130 is targeted for a second SGX processor having a second SGX instruction set that is capable of handling dynamic components (e.g., a "dynamic SGX processor," or "SGX 2.0"), the example second enclave memory layout 130 further includes, during runtime, an example dynamic heap 132, an example first dynamic thread 134, and an example second dynamic thread 136. Each of the example first dynamic thread 134 and the example second dynamic thread 136 may include corresponding thread local storage (TLS) components, thread data (TD) components, state save areas (SSAs) and/or stacks. Additionally, to accommodate the dynamic components, the example second enclave memory layout 130 is allocated to virtual memory having a second virtual size 138 represented by a dashed line.
As such, when the example second enclave memory layout 130 is loaded, a corresponding measurement will be based on the example second virtual size 138 of allocated virtual memory, the example enclave image 102, the example static heap 104, the example first static thread 106, and the example second static thread 108, but the example dynamic heap 132, the example first dynamic thread 134, and the example second dynamic thread 136 will not be part of that measurement.

[0017] While the same example enclave image 102 is used for the example first enclave memory layout 100 and the example second enclave memory layout 130, the measurements between these two layouts will be different. In particular, when comparing the example first enclave memory layout 100 and the example second enclave memory layout 130, two different measurements will result due to the difference in size between the example allocated first virtual size 110 and the example allocated second virtual size 138 of the virtual memory. As such, the ISV is burdened with a responsibility to manage two separate memory layouts depending on which target processor type will execute the enclave. In some examples, the ISV may require adjustments to its attestation infrastructure to accommodate different measurements that may occur depending on different types of SGX capabilities (e.g., SGX processors with only static component capabilities, SGX processors with static and dynamic component capabilities).

[0018] Example methods, apparatus, systems and/or articles of manufacture disclosed herein facilitate processor agnostic enclave image layout loading, such as enclave image 102 loading on either static memory management SGX processors (static SGX processors) or dynamic memory management SGX processors (dynamic SGX processors) without further coding efforts by the ISVs. As described above, ISVs are typically responsible for coding and/or otherwise designing enclave files to be executed in secure virtual memory as an enclave image, such as the example enclave image 102 of FIGS. 1A and/or 1B. Examples disclosed herein permit an ISV to deploy an enclave image file to a target platform when the target processor type (e.g., static SGX processor, dynamic SGX processor) is unknown, thereby removing ISV concern for specifically tailoring the enclave file for one or more memory management instructions that are specific to a target SGX processor. Additionally, examples disclosed herein determine and/or otherwise detect a type of the target SGX processor after the loading phase is complete to inform the enclave memory layout in virtual memory of which memory management routines are available during runtime (e.g., static component memory management or dynamic component memory management).
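Assuming the hypothetical MeasurementInputs sketch above, the mismatch can be made concrete: the same image and static components measured under the two allocated virtual sizes of FIGS. 1A and 1B yield different digests (the sizes below are illustrative only).

# Same static contents, different allocated virtual sizes.
static_layout = MeasurementInputs(virtual_size=0x200000, static_heap=0x10000,
                                  thread_count=2, thread_size=0x8000)
dynamic_layout = MeasurementInputs(virtual_size=0x400000, static_heap=0x10000,
                                   thread_count=2, thread_size=0x8000)
assert static_layout.digest() != dynamic_layout.digest()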
[0019] FIG. 2 illustrates an example platform environment 200 to initialize enclave files on target processors. In the illustrated example of FIG. 2, the platform environment 200 includes a processor 202 containing a software guard extension (SGX) instruction set 204. In some examples, the example processor 202 is referred to as an SGX processor by virtue of a type of SGX instruction set 204. The example SGX instruction set 204 may be a static component instruction set 204A (e.g., a legacy or early generation instruction set), or a dynamic component instruction set 204B (e.g., a more recent generation instruction set that is capable of adding and/or removing dynamic components from the enclave without altering its measurement). The example SGX instruction set 204 includes hardware instructions (e.g., processor microcode) used by an operating system of the platform environment 200 to implement an enclave file for an executable application. The example platform environment 200 also includes physical memory 206 (e.g., a cache memory, dynamic random access memory (DRAM), synchronous DRAM and/or other volatile memory) that may store data and/or instructions for the example platform environment 200. The example physical memory 206 includes an example enclave page cache (EPC) 208 that may include any number of pages 210.

[0020] The example EPC 208 is a secure memory structure that can be accessed using instructions from the example SGX instruction set 204. In the illustrated example of FIG. 2, the EPC 208 provides access control mechanisms to protect the integrity and confidentiality of the one or more pages 210. In some examples, the EPC 208 maintains a coherency protocol and may be implemented as a dedicated synchronous RAM of the example processor 202. In some examples, the EPC 208 is implemented via a Crypto Memory Aperture (CMA) mechanism.

[0021] The example physical memory 206 also includes an example SGX software development kit (SDK) image file 212 that is loaded into a virtual memory 214 as an SGX engine 216 by the example processor 202. Additionally, the example virtual memory 214 includes an enclave image 218 that is loaded and instantiated by the example SGX engine 216. The example enclave image 218 also includes an associated measurement value 220 that is determined after the enclave image 218 is loaded into virtual memory, but prior to runtime.

[0022] In operation, the example enclave image 218 may execute during runtime to perform one or more operations based on a request. For example, a banking client may invoke a function call to determine a measurement value of the enclave image 218 so that it may be compared against a prior known value that is deemed valid and/or otherwise trustworthy. In the event the measurement value stored by the example banking client matches the example measurement value 220 associated with the example enclave image 218, then one or more authorized function calls may be made to the enclave image 218. However, no function calls are available to cause the enclave image 218 to reveal its data and/or code in plaintext format, even if such requests originate from an OS kernel. As such, enclave images 218 implemented by SGX processors 202 are deemed trustworthy containers within which code may be executed.

[0023] However, ISVs that develop enclave files to be loaded as enclave images 218 in virtual memory 214 typically tailor their building, compiling and/or linking of the enclave files in a manner that satisfies a target processor type (e.g., a type of SGX instruction set available on the example platform environment 200). As described above in connection with FIGS. 1A and 1B, enclave memory layouts will differ based on a target processor type that allocates virtual memory resources in a manner that causes measurements for the same enclave image file to be different from one platform environment to another platform environment. Additionally, once an enclave image 218 is loaded, it must be instantiated and/or otherwise initialized for runtime with information that reveals which type of memory management instruction set is available.
For example, if the example platform environment 200 includes a legacy and/or older generation SGX instruction set that is not capable of dynamic memory management during runtime (e.g., a static SGX processor), then the enclave image 218 must use only those memory management instructions related to static memory management. On the other hand, in the event the example platform environment 200 includes a more-recent generation SGX instruction set that is capable of dynamic memory management during runtime (e.g., a dynamic SGX processor), then the enclave image should use such dynamic memory management instructions to take advantage of performance and/or memory management enhancements (e.g., EPC minimization and/or footprint reduction).

[0024] FIG. 3 includes additional detail of the example SGX engine 216 of FIG. 2. In the illustrated example of FIG. 3, the SGX engine 216 includes an image file manager 302, a configuration parameter manager 304, an address space manager 306, an enclave initializer 308, and an SGX identifier 310. In operation (during build-time), the example image file manager 302 retrieves an enclave image file (e.g., from an ISV). In some examples, enclave image files are stored on disk and transferred to a relatively faster storage device, such as the example physical memory 206 of FIG. 2. Ultimately, the target/desired enclave image file is loaded (during load-time) into virtual memory 214 in a particular layout for runtime execution (e.g., see FIGS. 1A and 1B), which includes any number of additional components needed for such execution (e.g., heaps, stacks, threads, etc.). During the example build-time phase, the example configuration parameter manager 304 extracts configuration parameters from the enclave image file. Configuration parameters include, but are not limited to, a number of static threads required for runtime, a size of a static heap during runtime, a size of a dynamic heap during runtime, a number of thread stacks and their associated sizes, etc.

[0025] Based on the configuration parameters extracted by the example configuration parameter manager 304, the example address space manager 306 calculates (e.g., during the build-time phase when compiling/linking) a virtual address space value. In some examples, the address space manager 306 calculates a maximum possible address space value that is based on a maximum amount of resources that could be requested and/or otherwise demanded by the enclave image 218 during execution. Determining the maximum possible address space value may be accomplished in a manner consistent with U.S. Patent Publication No. 2014-0006711 A1, filed on June 27, 2012, entitled "Method, System, and Device for Modifying a Secure Enclave Configuration Without Changing the Enclave Measurement," which is incorporated by reference herein in its entirety. The maximum possible address space value specifies, for example, a maximum number of threads supported by an application of the enclave image 218 under any circumstances, which may also include a maximum number of thread control structures, a maximum or upper bound of the heap and/or stack sizes, etc. However, in some examples, determining a maximum address space value may accommodate an enclave image that operates in connection with a first type of SGX instruction set, but may not accommodate an enclave image that operates in connection with a second type of SGX instruction set.
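If helpful, the maximum-value strategy of the preceding paragraph can be caricatured in one function (hypothetical names; the actual determination follows the incorporated reference above):

def maximum_virtual_size(image_size, max_heap, max_threads, thread_size):
    """Worst-case space: every resource at its stated upper bound."""
    return image_size + max_heap + max_threads * thread_size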
[0026] Returning to the illustrated example of FIG. 1A, the example first enclave memory layout 100, which is associated with an SGX instruction set capable of handling only static components, has an associated first virtual size 110. For the sake of example, the first virtual size 110 is determined to be a maximum address space value to accommodate the associated enclave image 102 and all components (e.g., two threads) that it might require during operation. On the other hand, in the event the enclave image 102 is to execute in connection with an SGX instruction set capable of handling dynamic components, as shown in the illustrated example of FIG. 1B, then any maximum address space value determination performed earlier in connection with the alternate SGX processor (e.g., the SGX processor capable of only handling static components) would not be appropriate and/or otherwise successful on that alternate target processor. Instead, the example second virtual size 138 of FIG. 1B would be required for successful operation of the enclave image 102 with the dynamic instruction set.

[0027] To determine a virtual address size for an enclave of interest that can execute in connection with either the example static component instruction set 204A or the example dynamic component instruction set 204B (while maintaining a measurement value that is consistent in either situation), the example address space manager 306 calculates a minimum address space required for a target application that is to use the example enclave image 102. In particular, because both static memory layouts (e.g., the example first enclave memory layout 100) and dynamic memory layouts (e.g., the example second enclave memory layout 130) must, at a minimum, include all static components during a loading phase, the example address space manager 306 determines a virtual address space that includes the example enclave image 102, the example static heap 104, the example first static thread 106, and the example second static thread 108. To generate an enclave memory layout that is compatible with both types of SGX processors, and to prevent any ISV efforts and/or concerns regarding generating an enclave memory layout that conforms to a particular measurement value, the example address space manager 306 applies a virtual address multiplication factor when determining a virtual address size value. The virtual address multiplication factor may be any numeric value that results in an increase of a minimum address space value to an adjusted value.

[0028] FIG. 4 illustrates example enclave memory layouts 400. In the illustrated example of FIG. 4, the example first enclave memory layout 100 from FIG. 1A is shown for comparison purposes. The example address space manager 306 determines a minimum virtual address space that is represented by the example first virtual size 110 (see dashed line) under the assumption that both the example first static thread 106 and the example second static thread 108 will be required when the enclave is invoked by an application. While the example first virtual size 110 of FIG. 4 is shown as a rectangular region, the example first virtual size 110 may also be represented in a numeric manner, such as an amount of memory in bytes.
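A minimal sketch of this calculation, assuming hypothetical names and an illustrative multiplication factor of 2 (the disclosure permits any factor that enlarges the minimum value):

def minimum_virtual_size(image_size, static_heap, thread_count, thread_size):
    """Smallest virtual address space holding the image and all static parts."""
    return image_size + static_heap + thread_count * thread_size

def optimized_virtual_size(min_size, factor=2):
    """Enlarged size; the extra span serves as the dynamic virtual container."""
    return min_size * factor

min_size = minimum_virtual_size(image_size=0x100000, static_heap=0x10000,
                                thread_count=2, thread_size=0x8000)
opt_size = optimized_virtual_size(min_size)  # identical on every SGX type
dynamic_container = opt_size - min_size      # usable only on dynamic SGX

Because the optimized size is fixed before loading, the allocated virtual size (and hence the measurement) is the same whether or not the dynamic container is ever used.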
To generate an enclave memory layout that is compatible with any type of SGX processor (e.g., static SGX processors with microcode/instruction sets that can handle static components, dynamic SGX processors with microcode/instruction sets that can handle dynamic components), the example address space manager 306 multiplies the example first virtual size 110 by a multiplication factor to determine an optimized virtual size 402 that includes a dynamic virtual container 404. As such, an optimized memory enclave layout 406 results during the loading phase of the enclave image 102. Additionally, regardless of whether the enclave image 102 is to be loaded in connection with a legacy SGX processor (e.g., only capable of static memory management) or a relatively newer generation of SGX processor (e.g., capable of dynamic memory management), the same optimized memory enclave layout 406 may be used.

[0029] For example, the illustrated example of FIG. 4 includes the optimized memory enclave layout 406 that was loaded in connection with a platform using a relatively newer generation of SGX processor (see optimized memory enclave layout 406 referenced by 408). At the time of measurement, the static contents of the optimized memory enclave layout 410 have not changed, and the optimized virtual size 402 has not changed, which results in the same measurement values during the enclave loading phase regardless of the type of SGX processor being used. Additionally, in the event the target platform is using the relatively newer generation of SGX processor (e.g., a processor using the example dynamic component instruction set 204B of FIG. 2), then one or more dynamic runtime components may use the example dynamic virtual container 404.

[0030] In other words, when the ISV prepares to build, load and instantiate an enclave file for use on a platform (e.g., for which the ISV does not know or care about the particular SGX instruction set capabilities), examples disclosed herein permit such building, loading and instantiation to occur so that the instantiated enclave can operate with either an SGX instruction set that includes static capabilities only, or dynamic capabilities. Further, any measurement taken of the example layout after the loading phase (when measurements occur) will be the same regardless of the SGX processor type used because, in part, the allocated virtual address size will be the same in all instances.

[0031] The example enclave initializer 308 instantiates the example optimized memory enclave layout 406, and the example SGX identifier 310 informs the optimized memory enclave layout 406 of the type of SGX instructions available to it. In particular, the example SGX identifier 310 detects an architecture type of the processor, such as by way of invoking and/or otherwise querying a CPU ID command/opcode. In the event the query indicates that the processor is of a legacy type, then the example SGX identifier 310 associates the optimized memory enclave layout 406 with a static memory flag so that when the enclave image invokes one or more memory management instructions, it does so in view of the capabilities of the current platform. On the other hand, in the event the query indicates that the processor is of a relatively newer type that is capable of dynamic memory management, then the example SGX identifier 310 associates the optimized memory enclave layout 406 with a dynamic memory flag.
[0032] While an example manner of implementing the platform environment 200 of FIG. 2 is illustrated in FIGS. 1-4, one or more of the elements, processes and/or devices illustrated in FIGS. 1-4 may be combined, divided, re-arranged, omitted, eliminated and/or implemented in any other way. Further, the example image file manager 302, the example configuration parameter manager 304, the example address space manager 306, the example enclave initializer 308, the example SGX identifier 310 and/or, more generally, the example SGX engine 216 of FIGS. 2 and/or 3 may be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware. Thus, for example, any of the example image file manager 302, the example configuration parameter manager 304, the example address space manager 306, the example enclave initializer 308, the example SGX identifier 310 and/or, more generally, the example SGX engine 216 of FIGS. 2 and/or 3 could be implemented by one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)). When reading any of the apparatus or system claims of this patent to cover a purely software and/or firmware implementation, at least one of the example image file manager 302, the example configuration parameter manager 304, the example address space manager 306, the example enclave initializer 308, the example SGX identifier 310 and/or, more generally, the example SGX engine 216 of FIGS. 2 and/or 3 is/are hereby expressly defined to include a tangible computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), a Blu-ray disk, etc. storing the software and/or firmware. Further still, the example platform environment 200 of FIGS. 1-4 may include one or more elements, processes and/or devices in addition to, or instead of, those illustrated in FIGS. 1-4, and/or may include more than one of any or all of the illustrated elements, processes and devices.

[0033] Flowcharts representative of example machine readable instructions for implementing the platform environment 200 of FIGS. 1-4 are shown in FIGS. 5-7. In these examples, the machine readable instructions comprise program(s) for execution by a processor such as the processor 812 shown in the example processor platform 800 discussed below in connection with FIG. 8. The program(s) may be embodied in software stored on a tangible computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a digital versatile disk (DVD), a Blu-ray disk, or a memory associated with the processor 812, but the entire program(s) and/or parts thereof could alternatively be executed by a device other than the processor 812 and/or embodied in firmware or dedicated hardware. Further, although the example program(s) is/are described with reference to the flowcharts illustrated in FIGS. 5-7, many other methods of implementing the example platform environment 200 may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined.
[0034] As mentioned above, the example processes of FIGS. 5-7 may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a tangible computer readable storage medium such as a hard disk drive, a flash memory, a read-only memory (ROM), a compact disk (CD), a digital versatile disk (DVD), a cache, a random-access memory (RAM) and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term tangible computer readable storage medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. As used herein, "tangible computer readable storage medium" and "tangible machine readable storage medium" are used interchangeably. Additionally or alternatively, the example processes of FIGS. 5-7 may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. As used herein, when the phrase "at least" is used as the transition term in a preamble of a claim, it is open-ended in the same manner as the term "comprising" is open ended.

[0035] The program 500 of FIG. 5 begins at block 502 where the example image file manager 302 retrieves an enclave image file, such as an enclave image file developed by an ISV and stored on disk and/or the example physical memory 206. The example configuration parameter manager 304 extracts configuration parameters from the example enclave image file (block 504), which may include information related to a number of static threads required for runtime, a size of a static heap during runtime, a size of a dynamic heap during runtime, a number of thread stacks and/or associated sizes. To generate an enclave memory layout that is compatible with any available processor type, the example address space manager 306 calculates virtual address space (block 506).

[0036] FIG. 6 includes additional detail associated with calculating virtual address space of block 506. In the illustrated example of FIG. 6, the example address space manager 306 calculates a minimum address space (block 602). In some examples, the address space manager 306 determines the minimum address space required for runtime based on the retrieved configuration parameters, which identify how many threads the example enclave image is to provide. As described above, the example enclave image may be capable of instantiating any number of threads, but only if such threads are first allocated virtual memory to be used at runtime.
To ensure that an enclave image will work on an unidentified target platform having either an SGX processor capable of static memory management only, or one capable of dynamic memory management, the example address space manager 306 applies a virtual address multiplication factor to the minimum address space size (block 604). In some examples, the virtual address space size is determined by enclave configuration information (e.g., the metadata), which may include the number of static threads, the number of dynamic threads, sizes of static and/or dynamic heaps, and/or stack sizes. As described above in connection with FIG. 4, the example virtual address multiplication factor results in an additional amount of allocated virtual memory during load time, shown as the example dynamic virtual container 404. During the enclave loading phase, the example address space manager 306 allocates the newly calculated virtual address space (block 606), such as the optimized virtual size 402, to generate the optimized memory enclave layout 406 shown in FIG. 4.

[0037] The example enclave initializer 308 initializes and/or otherwise instantiates the example optimized memory enclave layout (block 508). In some examples, the enclave initializer 308 instantiates the optimized memory enclave layout in a manner consistent with U.S. Patent Publication No. 2014-0006711 A1, filed on June 27, 2012, entitled "Method, System, and Device for Modifying a Secure Enclave Configuration Without Changing the Enclave Measurement," which is incorporated by reference herein in its entirety. Additionally, the example enclave initializer 308 may instantiate the optimized memory enclave layout in a manner consistent with U.S. Patent Application No. 14/849,222, filed on September 9, 2015, entitled "Application Execution Enclave Memory Page Cache Management Method and Apparatus," which is incorporated by reference herein in its entirety.

[0038] At runtime, the example SGX identifier 310 configures runtime parameters for the example optimized memory enclave layout (block 510). FIG. 7 includes additional detail associated with configuring the runtime parameters for the example optimized memory enclave layout of block 510. In the illustrated example of FIG. 7, the SGX identifier 310 detects an architecture type of the processor (block 702). As described above, the example SGX identifier 310 may invoke one or more opcodes/commands, such as the CPU ID opcode, to determine and/or otherwise detect the architecture type of the processor. If the example SGX identifier 310 determines that the processor is of a legacy type (block 704), then the example SGX identifier 310 associates the optimized memory enclave layout 406 with a static memory flag (block 706), thereby allowing static memory management instructions to be used with the optimized memory enclave layout 406. On the other hand, in the event the example SGX identifier 310 determines that the processor is capable of dynamic memory management (block 704), then the example SGX identifier 310 associates the optimized memory enclave layout 406 with a dynamic memory flag (block 708), thereby allowing dynamic memory management instructions to be used with the optimized memory enclave layout 406.
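A loose sketch of this runtime configuration (blocks 702-708); detect_sgx_generation is a hypothetical stand-in for the CPU ID query described above, not an actual API.

from enum import Enum

class MemoryFlag(Enum):
    STATIC = "static"    # legacy SGX: static memory management only
    DYNAMIC = "dynamic"  # newer SGX: dynamic memory management available

def configure_runtime(layout, detect_sgx_generation):
    """Associate the loaded layout with the memory management it may use."""
    if detect_sgx_generation() == "legacy":      # block 704
        layout.memory_flag = MemoryFlag.STATIC   # block 706
    else:
        layout.memory_flag = MemoryFlag.DYNAMIC  # block 708
    return layout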
[0039] FIG. 8 is a block diagram of an example processor platform 800 capable of executing the instructions of FIGS. 5-7 to implement the SGX engine 216 of FIGS. 1-4. The processor platform 800 can be, for example, a server, a personal computer, an Internet appliance, a gaming console, a set top box, or any other type of computing device.

[0040] The processor platform 800 of the illustrated example includes a processor 812. The processor 812 of the illustrated example is hardware. For example, the processor 812 can be implemented by one or more integrated circuits, logic circuits, microprocessors or controllers from any desired family or manufacturer. In the illustrated example of FIG. 8, the processor 812 includes one or more example processing cores 815 configured via example instructions 832, which include the example instructions of FIGS. 5-7 to implement the example SGX engine 216 of FIGS. 1-4.

[0041] The processor 812 of the illustrated example includes a local memory 813 (e.g., a cache). The processor 812 of the illustrated example is in communication with a main memory including a random access memory (RAM) 814 and a read only memory (ROM) (e.g., non-volatile memory) 816 via a bus 818. The RAM 814 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM) and/or any other type of random access memory device. The ROM 816 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 814, 816 is controlled by a memory controller.

[0042] The processor platform 800 of the illustrated example also includes an interface circuit 820. The interface circuit 820 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), and/or a PCI express interface.

[0043] In the illustrated example, one or more input devices 822 are connected to the interface circuit 820. The input device(s) 822 permit(s) a user to enter data and commands into the processor 812. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint and/or a voice recognition system.

[0044] One or more output devices 824 are also connected to the interface circuit 820 of the illustrated example. The output devices 824 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display, a cathode ray tube display (CRT), a touchscreen, a tactile output device, a printer and/or speakers). The interface circuit 820 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip or a graphics driver processor.

[0045] The interface circuit 820 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem and/or network interface card to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 826 (e.g., an Ethernet connection, a digital subscriber line (DSL), a telephone line, coaxial cable, a cellular telephone system, etc.) and/or within a similar machine platform (e.g., a communication bus).

[0046] The processor platform 800 of the illustrated example also includes one or more mass storage devices 828 for storing software and/or data. Examples of such mass storage devices 828 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, RAID systems, and digital versatile disk (DVD) drives.
[0047] The coded instructions 832 of FIGS. 5-7 may be stored in the mass storage device 828, in the volatile memory 814, in the non-volatile memory 816, and/or on a removable tangible computer readable storage medium such as a CD or DVD 836.

[0048] From the foregoing, it will be appreciated that the above disclosed methods, apparatus and articles of manufacture reduce ISV involvement when developing and/or deploying enclave services so that multiple different target processor platforms can implement such enclave services. Accordingly, ISV enclave developers do not need to create and/or otherwise design specific enclaves on a platform-by-platform basis. Additionally, because examples disclosed herein apply a virtual address multiplication factor to the enclave image, the enclave loaded on the platform will exhibit the same measurement regardless of whether the target processor is able to handle static memory management or dynamic memory management during runtime.

[0049] Example methods, apparatus, systems and articles of manufacture to initialize enclaves on target processors are disclosed herein. Further examples and combinations thereof include the following.

[0050] Example 1 is an apparatus to generate a processor agnostic enclave image layout, including an image file retriever to retrieve configuration parameters associated with an enclave file, and an address space manager to: calculate a minimum virtual address space value for an enclave image layout based on the configuration parameters, and generate an optimized enclave image layout to allow enclave image execution on unknown target processor types by multiplying the minimum address space value with a virtual address factor to determine an optimized virtual address space value for the optimized enclave image layout.

[0051] Example 2 includes the apparatus as defined in example 1, wherein the image file retriever is to retrieve a number of candidate threads associated with the enclave file.

[0052] Example 3 includes the apparatus as defined in example 2, wherein the configuration parameters include a static heap size associated with the number of candidate threads associated with the enclave file.

[0053] Example 4 includes the apparatus as defined in example 3, wherein the address space manager is to calculate the minimum virtual address space value based on the number of candidate threads and the static heap size.

[0054] Example 5 includes the apparatus as defined in example 1, wherein the address space manager is to generate the optimized enclave image layout with static enclave components and the optimized virtual address space value prior to an instantiation phase to facilitate a measurement of the optimized enclave image layout.

[0055] Example 6 includes the apparatus as defined in example 5, wherein the measurement of the optimized enclave image layout is the same for all of the unknown processor types.

[0056] Example 7 includes the apparatus as defined in example 1, further including a software guard extension (SGX) identifier to determine a type of a respective one of the unknown processor types.
[0057] Example 8 includes the apparatus as defined in example 7, wherein the type of the respective one of the unknown processor types includes at least one of a static SGX processor or a dynamic SGX processor.[0058] Example 9 includes the apparatus as defined in example 7, wherein the SGX identifier is to associate the optimized enclave image layout with a target enclave instruction set based on the determined type of the respective one of the unknown processor types.[0059] Example 10 includes a method to generate a processor agnostic enclave image layout, including retrieving configuration parameters associated with an enclave file, calculating a minimum virtual address space value for an enclave image layout based on the configuration parameters, and generating an optimized enclave image layout to allow enclave image execution on unknown target processor types by multiplying the minimum address space value with a virtual address factor to determine an optimized virtual address space value for the optimized enclave image layout.[0060] Example 11 includes the method as defined in example 10, further including retrieving a number of candidate threads associated with the enclave file.[0061] Example 12 includes the method as defined in example 11, wherein the configuration parameters include a static heap size associated with the number of candidate threads associated with the enclave file.[0062] Example 13 includes the method as defined in example 12, further including calculating the minimum virtual address space value based on the number of candidate threads and the static heap size.[0063] Example 14 includes the method as defined in example 10, further including generating the optimized enclave image layout with static enclave components and the optimized virtual address space value prior to an instantiation phase to facilitate a measurement of the optimized enclave image layout.[0064] Example 15 includes the method as defined in example 14, wherein the measurement of the optimized enclave image layout is the same for all of the unknown processor types.[0065] Example 16 includes the method as defined in example 10, further including determining a type of a respective one of the unknown processor types. 
[0066] Example 17 includes the method as defined in example 16, wherein the type of the respective one of the unknown processor types includes at least one of a static SGX processor or a dynamic SGX processor.[0067] Example 18 includes the method as defined in example 16, further including associating the optimized enclave image layout with a target enclave instruction set based on the determined type of the respective one of the unknown processor types.[0068] Example 19 is a tangible computer-readable storage disk or storage device comprising instructions which, when executed, cause a processor to at least: retrieve configuration parameters associated with an enclave file, calculate a minimum virtual address space value for an enclave image layout based on the configuration parameters, and generate an optimized enclave image layout to allow enclave image execution on unknown target processor types by multiplying the minimum address space value with a virtual address factor to determine an optimized virtual address space value for the optimized enclave image layout.[0069] Example 20 includes the tangible computer-readable storage disk or storage device as defined in example 19, wherein the instructions, when executed, further cause the processor to retrieve a number of candidate threads associated with the enclave file.[0070] Example 21 includes the tangible computer-readable storage disk or storage device as defined in example 20, wherein the instructions, when executed, further cause the processor to identify a static heap size associated with the number of candidate threads associated with the enclave file.[0071] Example 22 includes the tangible computer-readable storage disk or storage device as defined in example 21, wherein the instructions, when executed, further cause the processor to calculate the minimum virtual address space value based on the number of candidate threads and the static heap size.[0072] Example 23 includes the tangible computer-readable storage disk or storage device as defined in example 19, wherein the instructions, when executed, further cause the processor to generate the optimized enclave image layout with static enclave components and the optimized virtual address space value prior to an instantiation phase to facilitate a measurement of the optimized enclave image layout. 
[0073] Example 24 includes the tangible computer-readable storage disk or storage device as defined in example 23, wherein the instructions, when executed, further cause the processor to create the measurement of the optimized enclave image layout as the same for all of the unknown processor types.[0074] Example 25 includes the tangible computer-readable storage disk or storage device as defined in example 19, wherein the instructions, when executed, further cause the processor to determine a type of a respective one of the unknown processor types.[0075] Example 26 includes the tangible computer-readable storage disk or storage device as defined in example 25, wherein the instructions, when executed, further cause the processor to associate the optimized enclave image layout with a target enclave instruction set based on the determined type of the respective one of the unknown processor types.[0076] Example 27 is a system to generate a processor agnostic enclave image layout, comprising means for retrieving configuration parameters associated with an enclave file, means for calculating a minimum virtual address space value for an enclave image layout based on the configuration parameters, and means for generating an optimized enclave image layout to allow enclave image execution on unknown target processor types by multiplying the minimum address space value with a virtual address factor to determine an optimized virtual address space value for the optimized enclave image layout.[0077] Example 28 includes the system as defined in example 27, further including means for retrieving a number of candidate threads associated with the enclave file.[0078] Example 29 includes the system as defined in example 28, wherein the configuration parameters include a static heap size associated with the number of candidate threads associated with the enclave file.[0079] Example 30 includes the system as defined in example 29, further including means for calculating the minimum virtual address space value based on the number of candidate threads and the static heap size.[0080] Example 31 includes the system as defined in example 27, further including means for generating the optimized enclave image layout with static enclave components and the optimized virtual address space value prior to an instantiation phase to facilitate a measurement of the optimized enclave image layout.[0081] Example 32 includes the system as defined in example 31, wherein the measurement of the optimized enclave image layout is the same for all of the unknown processor types.[0082] Example 33 includes the system as defined in example 27, further including means for determining a type of a respective one of the unknown processor types.[0083] Example 34 includes the system as defined in example 33, wherein the type of the respective one of the unknown processor types includes at least one of a static SGX processor or a dynamic SGX processor.[0084] Example 35 includes the system as defined in example 33, further including means for associating the optimized enclave image layout with a target enclave instruction set based on the determined type of the respective one of the unknown processor types.[0085] Although certain example methods, apparatus and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent. |
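For illustration, the following Python sketch models the layout sizing described in the examples above: a minimum virtual address space is computed from the enclave configuration parameters (code size, static heap size, thread count), then multiplied by a virtual address factor so the same measured layout suits both static and dynamic SGX memory management. This is a minimal sketch, not the disclosed implementation; the helper names, the 4 KiB page size, the per-thread accounting, and the power-of-two rounding are assumptions introduced for the example.

```python
# Illustrative sketch of the enclave layout sizing described above.
# All names and numeric conventions are assumptions, not from the patent.

from dataclasses import dataclass

PAGE = 0x1000  # assumed 4 KiB enclave page


def round_up(value: int, align: int) -> int:
    """Round value up to the next multiple of align."""
    return (value + align - 1) // align * align


@dataclass
class EnclaveConfig:
    code_size: int          # bytes of measured code/data sections
    static_heap_size: int   # static heap associated with the thread count
    stack_size: int         # per-thread stack
    num_threads: int        # number of candidate threads


def min_virtual_address_space(cfg: EnclaveConfig) -> int:
    """Minimum virtual address space implied by the configuration."""
    per_thread = PAGE + cfg.stack_size  # assumed: one control page + stack
    total = cfg.code_size + cfg.static_heap_size + cfg.num_threads * per_thread
    return round_up(total, PAGE)


def optimized_layout_size(cfg: EnclaveConfig, va_factor: int = 2) -> int:
    """Multiply the minimum size by the virtual address factor, then round
    up to a power of two (an assumed sizing constraint for the sketch)."""
    size = min_virtual_address_space(cfg) * va_factor
    power = 1
    while power < size:
        power <<= 1
    return power


if __name__ == "__main__":
    cfg = EnclaveConfig(code_size=0x80000, static_heap_size=0x100000,
                        stack_size=0x10000, num_threads=4)
    print(hex(min_virtual_address_space(cfg)))  # 0x1c4000
    print(hex(optimized_layout_size(cfg)))      # 0x400000
```

Because the optimized size depends only on the configuration parameters and the factor, not on the target processor type, the resulting layout (and hence its measurement) is identical across targets, which is the property the examples emphasize.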
Semiconductor device assemblies having stacked semiconductor dies and electrically functional heat transfer structures (HTSs) are disclosed herein. In one embodiment, a semiconductor device assembly includes a first semiconductor die having a mounting surface with a base region and a peripheral region adjacent the base region. At least one second semiconductor die can be electrically coupled to the first semiconductor die at the base region. The device assembly can also include an HTS electrically coupled to the first semiconductor die at the peripheral region. |
1. A semiconductor device assembly comprising: a first semiconductor die including a mounting surface having a substrate region and a peripheral region adjacent to the substrate region; at least one second semiconductor die electrically coupled to the first semiconductor die at the substrate region; and an electrically functional heat transfer structure (HTS) electrically coupled to the first semiconductor die at the peripheral region. 2. The semiconductor device assembly of claim 1, wherein the first semiconductor die comprises an integrated circuit, and wherein the HTS is electrically coupled to the integrated circuit. 3. The semiconductor device assembly of claim 1, further comprising a cover having a cap portion, and wherein the HTS is in indirect contact with the cap portion via an intervening material. 4. The semiconductor device assembly of claim 3, wherein the cover further comprises a wall portion, and wherein the HTS is positioned proximate to the wall portion. 5. The semiconductor device assembly of claim 1, wherein the HTS comprises a plurality of stacked silicon volumes. 6. The semiconductor device assembly of claim 1, wherein the HTS comprises a capacitor. 7. The semiconductor device assembly of claim 1, wherein the HTS comprises a plurality of silicon volumes forming a capacitor. 8. The semiconductor device assembly of claim 1, further comprising a cover having a cap portion and a wall portion, wherein the at least one second semiconductor die comprises a plurality of vertically stacked second semiconductor dies, and wherein the HTS is between the first semiconductor die and the cap portion and between the plurality of second semiconductor dies and the wall portion. 9. A semiconductor device assembly comprising: a package substrate; a semiconductor die mounted to the package substrate and having a substrate region, a peripheral region adjacent to the substrate region, and an integrated circuit; a semiconductor die stack mounted to the substrate region of the semiconductor die; and an electrically functional heat transfer structure (HTS) mounted to the peripheral region of the semiconductor die and comprising at least one electrical component, wherein the electrical component is electrically coupled to the integrated circuit of the semiconductor die. 10. The semiconductor device assembly of claim 9, further comprising a cover having a cap portion and a wall portion, wherein the cover is mounted to the package substrate via the wall portion, and wherein the HTS extends from the semiconductor die to a position adjacent to the cap portion and in contact with the cap portion. 11. The semiconductor device assembly of claim 10, wherein the contact between the HTS and the cap portion is direct contact. 12. The semiconductor device assembly of claim 10, wherein the contact between the HTS and the cap portion is an indirect contact via an intervening material. 13. The semiconductor device assembly of claim 9, further comprising a cover mounted to the package substrate, wherein the electrical component is a capacitor, and wherein the HTS is positioned to transfer heat generated by the capacitor to the cover. 14. The semiconductor device assembly of claim 9, wherein the electrical component is a capacitor, and wherein the HTS comprises a plurality of silicon volumes forming the capacitor. 15. The semiconductor device assembly of claim 9, wherein the electrical component is a resistor. 16. A semiconductor device assembly comprising: a support substrate; a first semiconductor die mounted to the support substrate, wherein the first semiconductor die comprises: a substrate region; a peripheral region adjacent to the substrate region; a first integrated circuit; and a second integrated circuit; a die stack comprising a plurality of second semiconductor dies, wherein the die stack is mounted to the first semiconductor die at the substrate region and electrically coupled to the first integrated circuit; and an electrically functional heat transfer structure (HTS) mounted to the peripheral region and electrically coupled to the second integrated circuit. 17. The semiconductor device assembly of claim 16, wherein the HTS comprises a plurality of silicon volumes and at least one pair of bond pads, and wherein the bond pads provide an electrical connection to a capacitive electrical path formed at least in part via the silicon volumes. 18. The semiconductor device assembly of claim 16, wherein the HTS comprises a plurality of silicon volumes forming a capacitor. 19. The semiconductor device assembly of claim 16, further comprising a cover mounted to the support substrate, wherein the HTS is positioned to transfer heat to the cover. 20. The semiconductor device assembly of claim 19, wherein the HTS is in indirect contact with the cover via an intervening material. |
Semiconductor device assembly having an electrically functional heat transfer structure. TECHNICAL FIELD. The disclosed embodiments relate to semiconductor device assemblies and, in particular, to semiconductor device assemblies having electrically functional heat transfer structures. BACKGROUND. Packaged semiconductor dies, including memory chips, microprocessor chips, and imager chips, typically include a semiconductor die mounted on a substrate and encased in a plastic protective cover. The die includes functional features, such as memory cells, processor circuits and/or imager devices, as well as bond pads electrically connected to the functional features. The bond pads can be electrically connected to terminals outside the protective cover to allow the die to be connected to higher level circuitry. Semiconductor manufacturers continually reduce the size of die packages to fit within the space constraints of electronic devices, while also increasing the functional capacity of each package to meet operating parameters. One approach for increasing the processing capability of a semiconductor package without substantially increasing the surface area covered by the package (i.e., the package's "footprint") is to vertically stack multiple semiconductor dies on top of one another in a single package. The dies in such vertically stacked packages can be interconnected by electrically coupling the bond pads of the individual dies with the bond pads of adjacent dies using through-silicon vias (TSVs). In vertically stacked packages, the heat generated is difficult to dissipate, which raises the operating temperature of the individual dies, the junctions between the dies, and the package as a whole. This can cause the stacked dies to reach temperatures above their maximum operating temperatures in many types of devices. BRIEF DESCRIPTION OF THE DRAWINGS. FIGS. 1A and 1B are, respectively, a cross-sectional view and a partially exploded cross-sectional view of a semiconductor device assembly having an electrically functional heat transfer structure configured in accordance with an embodiment of the present technology. FIG. 2 is a top plan view of the semiconductor device assembly of FIGS. 1A and 1B. FIGS. 3A and 3B are top plan views of electrically functional heat transfer structures that have been singulated from a semiconductor wafer in accordance with embodiments of the present technology. FIGS. 4A and 4B are cross-sectional views of electrically functional heat transfer structures configured in accordance with embodiments of the present technology. FIGS. 5 and 6 are cross-sectional views of semiconductor device assemblies having electrically functional heat transfer structures configured in accordance with embodiments of the present technology. FIG. 7 is a schematic view of a system that includes a semiconductor device assembly configured in accordance with an embodiment of the present technology. DETAILED DESCRIPTION. Specific details of several embodiments of semiconductor device assemblies having electrically functional heat transfer structures are described below. The term "semiconductor device" generally refers to a solid-state device that includes semiconductor material. A semiconductor device can include, for example, a semiconductor substrate, a wafer, or a die that is singulated from a wafer or substrate. 
Throughout the present disclosure, semiconductor devices are generally described in the context of semiconductor dies; however, semiconductor devices are not limited to semiconductor dies. The term "semiconductor device package" may refer to an arrangement with one or more semiconductor devices incorporated into a common package. A semiconductor package can include a casing or housing that partially or completely encapsulates at least one semiconductor device. A semiconductor device package can also include an interposer substrate that carries one or more semiconductor devices and is attached to or otherwise incorporated into the casing. The term "semiconductor device assembly" may refer to an assembly of one or more semiconductor devices, semiconductor device packages, and/or substrates (e.g., interposer substrates, support substrates, or other suitable substrates). The semiconductor device assembly can be manufactured, for example, in discrete package form, strip or matrix form, and/or wafer panel form. As used herein, the terms "vertical," "lateral," "upper," and "lower" may refer to the relative directions or positions of features in the semiconductor devices or device assemblies in view of the orientations shown in the figures. For example, "upper" or "uppermost" may refer to a feature positioned closer to, or closest to, respectively, the top of a page than another feature or portion of the same feature. These terms, however, should be construed broadly to include semiconductor devices having other orientations (e.g., inverted or inclined orientations) in which top/bottom, over/under, up/down, and left/right can be interchanged depending on the orientation. Several embodiments of the present technology are directed to semiconductor device assemblies that include a first semiconductor die, at least one second semiconductor die stacked on the first semiconductor die, and an electrically functional heat transfer structure (HTS). The first semiconductor die includes a mounting surface having a base region and a peripheral region extending around a perimeter of the base region. The second semiconductor die is electrically coupled to the first semiconductor die at the base region, and the electrically functional HTS is electrically coupled to the first semiconductor die at the peripheral region. The electrically functional HTS efficiently transfers heat away from the peripheral region of the first semiconductor die and also provides electrical functionality that facilitates operation of the semiconductor device assembly. Accordingly, several embodiments of semiconductor device assemblies in accordance with the present technology can provide a thermally efficient stacked-die arrangement, a small package size, and/or more space for functional components, because both the electrical functionality and the efficient transfer of heat from the peripheral region of the first semiconductor die are performed by a common component. FIGS. 1A and 1B are, respectively, a cross-sectional view and a partially exploded cross-sectional view of a semiconductor device assembly 100 having an electrically functional heat transfer structure configured in accordance with an embodiment of the present technology. In particular, FIG. 1A is a cross-sectional view of the assembly 100 after manufacturing has been completed, and FIG. 1B is a partially exploded view illustrating a portion of the manufacturing process of the assembly 100. Referring to FIG. 
1A, the assembly 100 includes a package support substrate 102 (e.g., an interposer), a first semiconductor die 104 mounted to the support substrate 102, and a plurality of second semiconductor dies 106 (identified individually as second dies 106a-106d) mounted to the first die 104. The first die 104 includes a mounting surface 107 having a base region 108 and a peripheral region 110 (known to those skilled in the art as "corridors" or "racks"). The second dies 106 are arranged in a stack 112 ("die stack 112") on the base region 108 of the first die 104. Although the illustrated embodiment of FIGS. 1A and 1B includes a die stack 112 having four individual second dies 106a-106d, other embodiments of the present technology may include more or fewer second dies 106. For example, in several embodiments, only one second semiconductor die 106 is mounted to the first semiconductor die 104. In other embodiments, two, three, five, six or more second semiconductor dies 106 may be arranged in a die stack on the first semiconductor die 104. The assembly 100 further includes a thermally conductive casing or cover 114 having a cap portion 116 and a wall portion 118. In the illustrated embodiment, the cap portion 116 is joined to the wall portion 118 via a first bonding material 120a (e.g., an adhesive). In other embodiments, the cover 114 can be a unitary component in which the cap portion 116 is integrally formed with the wall portion 118. The wall portion 118 extends vertically away from the cap portion 116 and may be attached to the support substrate 102 by a second bonding material 120b (e.g., an adhesive). In addition to providing a protective casing, the cover 114 can also serve as a heat sink that absorbs thermal energy from the semiconductor dies 104 and 106 and dissipates the heat. Accordingly, the cover 114 can be made from a thermally conductive material, such as nickel (Ni), copper (Cu), aluminum (Al), ceramic materials with high thermal conductivity (e.g., aluminum nitride), and/or other suitable thermally conductive materials. In some embodiments, the first bonding material 120a and/or the second bonding material 120b can be made from what are known in the art as "thermal interface materials" or "TIMs," which are designed to increase the thermal conductivity at surface junctions (e.g., at the contact surface between a die and a heat sink). TIMs can include silicone-based greases, gels, or adhesives that are doped with conductive materials (e.g., carbon nanotubes, solder materials, diamond-like carbon (DLC), etc.), as well as phase-change materials. In other embodiments, the first bonding material 120a and/or the second bonding material 120b can include other suitable materials, such as metals (e.g., copper) and/or other suitable thermally conductive materials. Some or all of the first semiconductor die 104 and/or the second semiconductor dies 106 can be at least partially encapsulated in a dielectric underfill material 121. The underfill material 121 can be deposited or otherwise formed around and/or between some or all of the dies to enhance the mechanical connection with the dies and/or to provide electrical isolation between conductive features and/or structures (e.g., interconnects). The underfill material 121 can be a non-conductive epoxy paste, a capillary underfill, a non-conductive film, a molded underfill, and/or include other suitable electrically insulating materials. 
In several embodiments, the underfill material 121 can be selected based on its thermal conductivity to enhance heat dissipation through the dies of the assembly 100. In some embodiments, the underfill material 121 can be used in lieu of the first bonding material 120a and/or the second bonding material 120b to attach the cover 114 to the topmost semiconductor die 106d. The first die 104 and the second dies 106 can include various types of semiconductor components and functional features, such as dynamic random access memory (DRAM), static random access memory (SRAM), flash memory, other forms of integrated circuit memory, processing circuits, imaging components, and/or other semiconductor features. In various embodiments, for example, the assembly 100 can be configured as a hybrid memory cube (HMC), in which the stacked second dies 106 are DRAM dies or other memory dies that provide data storage and the first die 104 is a high-speed logic die that provides memory control (e.g., DRAM control) within the HMC. In other embodiments, the first die 104 and the second dies 106 can include other semiconductor components, and/or the semiconductor components of the individual second dies 106 in the die stack 112 can differ. In the embodiment illustrated in FIG. 1A, the first die 104 includes a plurality of integrated circuits 122 (identified individually as a first integrated circuit 122a, a second integrated circuit 122b, and a third integrated circuit 122c) that are electrically coupled to one another and/or to other circuits and/or components within the first die 104. Additionally, as described in greater detail below, the integrated circuits 122 can be part of associated circuits that include circuit components external to the first die 104. The first die 104 and the second dies 106 can be electrically coupled to the package support substrate 102 and to one another by a plurality of conductive elements 124 (e.g., copper pillars, solder bumps, and/or other conductive features). Additionally, each of the first die 104 and the second dies 106 can include a plurality of through-silicon vias (TSVs) 126 coupled to the conductive elements 124 on opposite sides. In addition to providing electrical communication, the conductive elements 124 and the TSVs 126 transfer heat vertically away from the die stack 112 and toward the cover 114. In some embodiments, the assembly 100 can also include a plurality of thermally conductive elements or "dummy elements" (not shown) positioned interstitially between the first die 104 and the second dies 106 to further facilitate heat transfer through the die stack 112. Such dummy elements can be generally similar to the conductive elements 124 and/or the TSVs 126, at least with respect to structure and composition, except that the dummy elements are not electrically coupled to the functional circuits of the first die 104 and the second dies 106. The assembly 100 includes a plurality of electrically functional heat transfer structures (HTSs) 128 (identified individually as a first HTS 128a and a second HTS 128b) mounted to the peripheral region 110 of the first die 104. In several embodiments, one or more of the integrated circuits 122 can be part of an associated circuit that generates a relatively large amount of heat during operation, such as a serializer/deserializer (SERDES) circuit. The HTSs 128 can form one or more electrical components constituting at least a portion of the circuits associated with the integrated circuits 122. In the illustrated embodiment of FIG. 
1A, for example, the HTS 128 includes a capacitor 130 that can be electrically coupled to the integrated circuit 122 (and/or to other integrated circuits or components within the first die 104) via conductive elements 134 between bond pads 132 on the HTS 128 and the first die 104. In several embodiments, the HTS 128 can include materials selected to provide desired electrical properties. For example, the HTS 128 can be a stacked component formed from a plurality of silicon volumes (e.g., layers). In some embodiments, individual silicon volumes can provide a capacitance of approximately 110 fF/μm², and individual HTSs 128 can have a footprint of approximately 2 mm × 13 mm. In a particular embodiment, an HTS can include eight such silicon volumes, corresponding to a total capacitance of approximately 20 μF. In other embodiments, the HTSs 128 can have other footprints greater or less than 2 mm × 13 mm and total capacitances greater or less than 20 μF. In operation, the capacitance provided by the HTS 128 can generate relatively significant heat that would otherwise be produced by a capacitor attached to, or located within, other components of the assembly 100 (e.g., a capacitor attached to the substrate 102 or located in the first die 104). As shown in FIG. 1A, the HTS 128 is positioned adjacent or proximate to the cover 114. In particular, the HTS 128 extends laterally between the die stack 112 and the wall portion 118 of the cover 114 and extends vertically from the first semiconductor die 104 to the cap portion 116. As such, the heat generated by the HTS 128 can be readily transferred to the cover 114 and, in turn, to the environment or components external to the cover 114. Heat transfer from the HTS 128 to the cover 114 can significantly reduce the operating temperature of the assembly 100. In particular, the HTS 128 transfers heat to the cap portion 116 and the wall portion 118 more rapidly than components located on the substrate 102 or within the first die 104. In several embodiments, the HTS 128 can directly contact the cover 114. In other embodiments, a fill material and/or other components or materials can be interposed between the HTS 128 and the cover 114 (e.g., the first bonding material 120a). In embodiments having such intervening materials, the HTS 128 can remain relatively close to the cover 114 and/or in indirect contact with the cover 114 to maintain a high rate of heat transfer from the HTS 128 to the cover 114. For example, the intervening material can be selected to have a suitable thermal conductivity (e.g., a TIM). In addition to thermal efficiency, embodiments of the present technology can achieve small package sizes and/or increase the space available for functional components. For example, in conventional semiconductor packages, a variety of electrical devices or components (e.g., surface mount devices or integrated passive devices) are typically integrated into or mounted on the associated package substrate adjacent to the stacked dies. This arrangement requires the package substrate to have available space outside of the footprint of the stacked dies, and thus requires a larger overall device. The HTSs 128 disclosed herein can instead be positioned adjacent to the die stack 112, in a portion of the assembly 100 that would otherwise be filled with fill material or occupied by other non-electrical components or features (e.g., passive thermal components) that do not directly contribute to the electrical function of the assembly 100. 
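As a quick arithmetic check of the capacitance figures quoted above (illustrative only, and assuming the per-volume capacitances simply add, i.e., that the eight volumes behave as parallel capacitors), the stated density and footprint are consistent with the stated ~20 μF total:

```python
# Back-of-the-envelope check of the HTS capacitance figures quoted above:
# ~110 fF/um^2 per silicon volume, a 2 mm x 13 mm footprint, eight volumes.
# Assumes the volumes contribute in parallel; this is a sanity check, not
# part of the disclosed design.

FOOTPRINT_MM2 = 2.0 * 13.0          # 26 mm^2
UM2_PER_MM2 = 1_000_000
DENSITY_F_PER_UM2 = 110e-15         # 110 fF per square micrometer
NUM_VOLUMES = 8

per_volume_farads = FOOTPRINT_MM2 * UM2_PER_MM2 * DENSITY_F_PER_UM2
total_farads = per_volume_farads * NUM_VOLUMES

print(f"per volume: {per_volume_farads * 1e6:.2f} uF")  # ~2.86 uF
print(f"total:      {total_farads * 1e6:.1f} uF")       # ~22.9 uF, i.e. ~20 uF
```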
Thus, incorporating electrical components and their functionality into the HTSs 128 avoids the electrical devices or components that would otherwise reside on the package substrate. This can result in a smaller package size and/or additional space for larger or additional components (e.g., a larger first die 104) that provide greater performance and functionality. The partially exploded view of FIG. 1B illustrates a portion of the manufacturing process of the assembly 100. In particular, the second dies 106 and the HTSs 128 can be manufactured separately and subsequently attached to the first die 104. Referring to FIGS. 1A and 1B together, the HTSs 128 have a height h1 corresponding to the height of the uppermost surface of the die stack 112 (e.g., equal to the sum of the heights h2 of the individual second dies 106 plus the heights of the conductive elements 124 between them). As such, upon assembly, the cap portion 116 of the cover 114 can be thermally coupled to the uppermost surface of the second die 106d as well as to one or more of the HTSs 128. As shown in FIG. 1A, the cap portion 116 indirectly contacts one or more of the HTSs 128 via the first bonding material 120a (and/or via other alternative or additional intervening materials). In other embodiments, the cap portion 116 can directly contact one or more of the HTSs 128. Whether the cap portion 116 is in direct or indirect contact with the HTSs 128, or is otherwise sufficiently close to the HTSs 128, the proximity of the HTSs 128 to the cap portion 116 can enable a relatively large amount of heat transfer toward the exterior of the assembly 100. Although the illustrated embodiment of FIGS. 1A and 1B includes the HTSs 128 mounted to the first semiconductor die 104, other embodiments can include HTSs 128 mounted to other components. For example, in several embodiments, the HTSs 128 can be mounted to the substrate 102. In some such embodiments, the HTSs 128 can extend vertically from the substrate 102 to the cap portion 116 and laterally from the first die 104 and the second dies 106 to the wall portion 118. Additionally, several such embodiments can include a redistribution layer or other electrical components or circuits within the substrate 102 to provide electrical connections between the HTSs 128 and the first die 104 and/or the second dies 106. FIG. 2 is a top plan view of the semiconductor device assembly 100 taken along line 2-2 of FIG. 1A. The base region 108 occupies a majority of the mounting surface 202 of the first semiconductor die 104 and is at least partially delimited by a boundary or perimeter 204 that can correspond to the footprint of the second semiconductor dies 106 (FIGS. 1A and 1B). The peripheral region 110 extends around the perimeter 204, and bond pads 132 are located within both the base region 108 and the peripheral region 110. As shown in the cross-sectional views of FIGS. 1A and 1B, the first HTS 128a and the second HTS 128b are located on opposite sides of the second dies 106. It should be understood, however, that additional HTSs 128 can be positioned adjacent to the other sides of the second dies 106. For example, the first die 104 can include additional bond pads 206 (shown in broken lines) for mounting additional HTSs 128 adjacent to the other opposing sides of the second dies 106. In such embodiments, the additional HTSs 128 can be electrically connected via the additional bond pads 206 to the integrated circuits 122 or to other components or circuits within the first semiconductor die 104. 
In several embodiments, the base region 108 can occupy a larger portion of the mounting surface 202 than the portion shown in FIG. 2. For example, the peripheral region 110 can extend along opposite sides of the base region 108, and the base region 108 can extend across at least the portion of the mounting surface 202 occupied by the additional bond pads 206 shown in FIG. 2. In such embodiments, the base region 108 can be delimited from the peripheral region 110 by two boundary lines that extend coaxially along respective opposing portions of the perimeter 204. FIGS. 3A and 3B are top plan views of heat transfer structures 128 that have been singulated from a semiconductor wafer 300 in accordance with embodiments of the present technology. The wafer 300 can be manufactured using a variety of techniques known in the art (e.g., physical vapor deposition, chemical vapor deposition, photolithography, etching, etc.). The manufacturing can include depositing a plurality of volumes (e.g., layers) of semiconductor material (e.g., silicon) that form the volumes of the HTSs 128. Additionally, the manufacturing can include forming interconnects, bond pads 132, through-silicon vias (TSVs), and/or other features via a variety of semiconductor manufacturing techniques. After the wafer 300 is formed, the HTSs 128 can be singulated from the wafer 300 via, for example, dicing. The wafer 300 shown in FIG. 3A is not necessarily drawn to scale, but can have, for example, a diameter of 300 mm. The individual HTSs 128 (which are likewise not necessarily drawn to scale) can be manufactured in a variety of sizes that can be tailored to the particular design requirements of the associated device assembly. For example, in some embodiments, an individual HTS 128 can have a footprint of 2 mm × 13 mm. The relatively small size of the HTSs 128 compared to the wafer 300 enables a large number of HTSs 128 to be produced from a single wafer 300. In one embodiment, for example, a single 300 mm wafer can yield approximately 2,000 individual HTSs 128. As shown in FIG. 3B, an individual HTS 128 can include a plurality of bond pads 132. The bond pads 132 can be arranged in a variety of manners and can provide electrical connections to one or more electrical components of the associated HTS 128. In the illustrated embodiment, the HTS 128 includes 11 pairs of bond pads 132 spaced apart along a mounting surface 302. Each individual pair of bond pads can be associated with an individual electrical component of the associated HTS 128, as described in greater detail below. Additionally, as can be seen by comparing FIGS. 2 and 3B, the bond pads 132 of the HTS 128 can be aligned with the bond pads 132 on the first semiconductor die 104 to electrically couple the HTS 128 to the first semiconductor die 104. In particular, the bond pads 132 of the HTS 128 are arranged in an array that overlaps the bond pads 132 of the first semiconductor die 104 (as shown in FIG. 1A). FIG. 4A is a cross-sectional view of an embodiment of the HTS 128 taken along line 4A-4A of FIG. 3B in accordance with the present technology. In the illustrated embodiment, the HTS 128 includes a plurality of electrically functional volumes or layers 402 that form a capacitor 404. In particular, the HTS 128 includes eight vertically stacked silicon volumes 402. In several embodiments, the volumes 402 can include one or more doped or undoped regions. In one embodiment, one or more of the volumes 402 can directly contact an immediately adjacent volume 402, as shown in FIG. 4A. 
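As a rough, illustrative cross-check of the wafer yield quoted above (roughly 2,000 HTSs per 300 mm wafer at a 2 mm × 13 mm footprint), a standard gross-die-per-wafer approximation gives a figure in the same range. The 0.1 mm scribe allowance and the approximation formula below are assumptions introduced for the example, not details from the disclosure:

```python
# Gross-die-per-wafer estimate using the common approximation:
#   N ~ pi*d^2/(4*S) - pi*d/sqrt(2*S)
# where d is the wafer diameter and S is the effective die area.

import math

WAFER_DIAMETER_MM = 300.0
DIE_W_MM, DIE_H_MM = 2.0, 13.0   # HTS footprint from the text
SCRIBE_MM = 0.1                  # assumed kerf/scribe allowance

w = DIE_W_MM + SCRIBE_MM
h = DIE_H_MM + SCRIBE_MM
area = w * h

gross = (math.pi * WAFER_DIAMETER_MM ** 2 / (4 * area)
         - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * area))

print(f"effective die area: {area:.2f} mm^2")
print(f"estimated gross dies per wafer: {gross:.0f}")
# ~2,400 before edge exclusion and yield loss, consistent with the
# "approximately 2,000" figure quoted above.
```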
In other embodiments, an air gap or an intervening material (e.g., a dielectric material) can be located between one or more of the adjacent volumes 402. The HTS 128 further includes a through-silicon via (TSV) 406 having a first conductive material 408a (e.g., a metal) and an insulating or dielectric material 410 that electrically isolates a portion of the first conductive material 408a from the volumes 402. The HTS 128 can further include a second conductive material 408b spaced apart from the first conductive material 408a, with at least a portion of the volumes 402 between the first conductive material 408a and the second conductive material 408b. The first conductive material 408a and the second conductive material 408b are electrically coupled to corresponding bond pads 132. The first conductive material 408a extends through a portion of the uppermost volume 402. In operation, the volumes 402 (and/or any gaps or intervening materials) provide capacitance along a path between the first conductive material 408a in the uppermost volume 402 and the second conductive material 408b in the lowermost volume 402. FIG. 4B is a cross-sectional view illustrating an electrically functional heat transfer structure (HTS) 420 configured in accordance with another embodiment of the present technology. Similar to FIG. 4A, the cross-sectional view of FIG. 4B shows the HTS 420 taken along a line corresponding to line 4A-4A of FIG. 3B. The HTS 420 includes eight vertically stacked silicon volumes 422, bond pads 424, a first conductive material 426a, a second conductive material 426b, and a dielectric material 428. In operation, the volumes 422 and/or the dielectric material 428 can provide capacitance between the first conductive material 426a and the second conductive material 426b. In several embodiments, the individual electrically functional elements, components, or structures of an individual HTS 128 or 420 can be electrically isolated from one another via one or more electrical barriers. For example, vertical barriers can be formed between adjacent capacitors and/or other electrically functional components within the HTS 128 or 420 via etching or other techniques known in the art. For example, the HTS 128 shown in FIG. 3B can include eleven electrically functional components (each having a corresponding pair of bond pads 132) separated by ten vertical barriers located between adjacent electrically functional components. FIGS. 5 and 6 are cross-sectional views of semiconductor device assemblies 500 and 600, respectively, having electrically functional heat transfer structures (HTSs) configured in accordance with embodiments of the present technology. In the illustrated embodiment of FIG. 5, the assembly 500 includes various components that are at least generally similar to the corresponding components of the assembly 100 discussed above with reference to FIGS. 1A and 1B. For example, the assembly 500 includes a support substrate 502, a first semiconductor die 504, a plurality of second semiconductor dies 506, and a cover 508 having a cap portion 510 and a wall portion 512. Additionally, the assembly 500 includes a first electrically functional heat transfer structure (HTS) 514a and a second HTS 514b. 
The first HTS 514a and the second HTS 514b can include several features at least generally similar to those of the HTSs 128 and 420, including a plurality of vertically stacked silicon volumes forming one or more capacitors 516 (shown schematically) or other electrically functional components, such as, for example, resistors 518 (also shown schematically). However, the HTSs 514 include bond pads 520 on opposite sides (rather than on a common side of the associated HTS) to provide a connection between circuits within the first die 504 and circuits external to the assembly 500 (e.g., circuits within another assembly to which the assembly 500 can be mounted). In some embodiments having these features, the cover 508 can be electrically functional or can have electrically functional circuitry. The second HTS 514b can be formed from one or more volumes of material in a manner at least generally similar to that of the HTSs 128, 420, and 514a. For example, the second HTS 514b can include a plurality of vertically stacked volumes of material. However, the one or more materials can form a resistive component rather than a capacitive component. In one embodiment, for example, the one or more materials can include a silicon polymer resistor. As with the assembly 500, the assembly 600 also includes various components that are at least generally similar to the corresponding components of the assembly 100 discussed above with reference to FIGS. 1A and 1B. For example, the assembly 600 includes a support substrate 602, a first semiconductor die 604, a plurality of second semiconductor dies 606, and a cover 608 having a cap portion 610 and a wall portion 612. Additionally, the assembly 600 includes a first electrically functional heat transfer structure (HTS) 614a and a second HTS 614b. The second HTS 614b can be generally similar to the HTS 128 discussed above and can include a capacitor 616 (shown schematically). The first HTS 614a can include a diode 618 (also shown schematically). 
The semiconductor device assembly 702 can include features that are substantially similar to features of the semiconductor device assemblies 100, 500, and 600 described above with respect to FIGS. 1A through 6, and thus can include a variety of HTS that can enhance heat dissipation. The resulting system 700 can perform any of a wide variety of functions, such as memory storage, data processing, and/or other suitable functions. Thus, representative system 700 can include, but is not limited to, portable devices (eg, mobile phones, tablet computers, digital readers, and digital audio players), computers, vehicles, appliances, and other products. The components of system 700 can be housed in a single unit or distributed across multiple interconnected units (e.g., via a communication network). The components of system 700 can also include any of a remote device and a wide variety of computer readable media.In view of the foregoing, it will be appreciated that the specific embodiments of the present invention are described herein, and the various modifications may be made without departing from the invention. In addition, the various elements and features illustrated in the various figures may not necessarily be drawn to scale; and various embodiments of the invention may include structures other than those illustrated in the figures and are not necessarily limited to the figures The structure shown in . Moreover, while many of the embodiments of the HTS are described with respect to the HMC, in other embodiments, the HTS can be configured for use with other memory devices or other types of stacked die assemblies. In addition, certain aspects of the novel technology described in the context of particular embodiments may be combined or eliminated in other embodiments. In addition, while the advantages associated with those embodiments have been described in the context of particular embodiments of the present technology, other embodiments may exhibit such advantages and not all embodiments must exhibit such advantages to be attributed to the present invention. Within the scope of technology. Accordingly, the present invention and associated technology may encompass other embodiments not specifically shown or described herein. |
In one embodiment, an apparatus may include a clock generator to generate a format clock signal. The apparatus may also include a serializer to generate serial data based on a transmit clock signal and parallel input data. The apparatus may also include a signal generator to generate at least two differential signals based on the format clock signal and the serial data. |
What is claimed is: 1. An apparatus comprising: a clock generator to generate a format clock signal; a serializer to generate serial data based on a transmit clock signal and parallel input data; and a signal generator to generate at least two differential signals based on the format clock signal and the serial data. 2. The apparatus of claim 1, further comprising a dividing unit to obtain the transmit clock signal by dividing a frequency of the format clock signal by a predetermined multiple. 3. The apparatus of claim 2, wherein the dividing unit comprises a multiplexer. 4. The apparatus of claim 3, wherein the dividing unit comprises at least two flip flops, and wherein clock inputs of the at least two flip flops are each coupled to an output of the multiplexer. 5. The apparatus of claim 1, wherein a pulse period of the transmit clock signal is a predetermined multiple of a pulse period of the format clock signal. 6. The apparatus of claim 1, wherein the serializer is a Parallel-In, Serial-Out (PISO) unit. 7. The apparatus of claim 1, wherein the clock generator is to generate the format clock signal based on a gear selection input. 8. The apparatus of claim 7, wherein the transmit clock rate corresponds to a midpoint of a gear associated with the gear selection input. 9. The apparatus of claim 1, wherein the at least two differential signals are Pulse Width Modulated (PWM) signals. 10. The apparatus of claim 9, wherein the PWM signals are to conform to the Mobile Industry Processor Interface (MIPI) M-PHY Specification. 11. The apparatus of claim 9, wherein a pulse period of the transmit clock signal is equal to a PWM data bit time period. 12. The apparatus of claim 9, wherein a pulse period of the format clock signal is equal to one-third of a PWM data bit time period. 13. The apparatus of claim 1, wherein the timing of the at least two differential signals is aligned with a pulse period of the format clock signal. 14. A system comprising: a system on a chip comprising at least one core having at least one execution unit and transmit logic, the transmit logic comprising: a clock generator to generate a format clock signal; a dividing unit to obtain a transmit clock signal based on the format clock signal; a serializer to generate serial data based on the transmit clock signal and input data; a signal generator to generate two or more differential signals based on the format clock signal and the serial data; and a wireless device coupled to the system on the chip via an interconnect, the interconnect used to communicate data between the wireless device and the transmit logic of the system on the chip. 15. The system of claim 14, wherein the dividing unit is to obtain the transmit clock signal by dividing a frequency of the format clock signal by a predetermined multiple. 16. The system of claim 14, wherein a frequency of the transmit clock signal is one third a frequency of the format clock signal. 17. The system of claim 14, wherein the clock generator is to generate the format clock signal based on a gear selection input. 18. The system of claim 14, the clock generator comprising a delay locked loop (DLL). 19. The system of claim 14, the clock generator comprising a phase locked loop (PLL). 20. 
A method comprising: generating, in a transmit logic of a first device, a format clock signal; dividing the format clock signal by a predetermined multiple to obtain a transmit clock signal; serializing parallel data based on the transmit clock signal to obtain serial data; and generating a plurality of differential signals based on the serial data and the format clock signal. 21. The method of claim 20, wherein generating the format clock signal comprises selecting a transmit clock rate based on a gear selection input. 22. The method of claim 20, wherein the predetermined multiple is three. 23. The method of claim 20, wherein each of the plurality of differential signals is a Pulse Width Modulated (PWM) signal. 24. The method of claim 23, wherein each of the plurality of differential signals is to conform to the Mobile Industry Processor Interface (MIPI) M-PHY Specification. |
DATA INTERFACE CLOCK GENERATION Background [0001] Embodiments relate generally to data interfaces for electronic devices. [0002] Many electronic devices include multiple components coupled together by one or more data interfaces. For example, a cellular telephone may include a processor core coupled to a radio transceiver, a sound input device, a sound output device, a camera, a display device, a memory device, etc. The functionality of such components has been continually improved to meet market demands. Accordingly, the data interfaces between the components may need to be adapted to such functionality. Brief Description of the Drawings [0003] FIG. 1 is a block diagram of a system in accordance with one or more embodiments. [0004] FIG. 2 is an example timing diagram in accordance with one or more embodiments. [0005] FIG. 3A is a block diagram of a system in accordance with one or more embodiments. [0006] FIG. 3B is a block diagram of a system in accordance with one or more embodiments. [0007] FIG. 4 is a flow chart of a method in accordance with one or more embodiments. [0008] FIG. 5 is an example timing diagram in accordance with one or more embodiments. [0009] FIG. 6 is a block diagram of a processor in accordance with one or more embodiments. [0010] FIG. 7 is a block diagram of an example system in accordance with one or more embodiments. [0011] FIG. 8 is a block diagram of an example system in accordance with one or more embodiments. Detailed Description [0012] In accordance with some embodiments, electronic devices may use differential pulse width modulated (PWM) signals to transmit data between components. In one or more embodiments, a format clock signal may be generated based on a gear selection input. In some embodiments, a transmit clock signal may be generated based on the format clock signal. Further, in some embodiments, the format clock signal may be used to align the timing of differential PWM signals. In one or more embodiments, such alignment of the differential PWM signals may facilitate recovery of PWM data bits by a receiver. [0013] Referring to FIG. 1, shown is a block diagram of an apparatus 100 in accordance with one or more embodiments. As shown in FIG. 1, the apparatus 100 may include a link 120 connecting a transmitter 110 and a receiver 150. In accordance with some embodiments, the apparatus 100 may be any electronic device, such as a cellular telephone, a computer, a media player, a network device, etc. [0014] In some embodiments, the transmitter 110 and the receiver 150 may exist to connect any components or peripherals of the apparatus 100, such as a processor, a processor core, a memory device, a display device, a sound device, a wireless transceiver, a camera, etc. Note that, while only one pair of transmitter 110 and receiver 150 is shown for the sake of clarity, the example shown in FIG. 1 is not intended to be limiting. Accordingly, it should be appreciated that any number of such transmitter-receiver pairs may exist to connect various components of the apparatus 100. [0015] In accordance with some embodiments, the link 120 may be any electrical or data connection(s) (e.g., motherboard connection, input/output cable, network connector, bus, wireless link, etc.). In one or more embodiments, the transmitter 110 may include transmit logic 115 to manage data connections to the receiver 150. Further, in some embodiments, the receiver 150 may include receive logic 155 to manage the data connections from the transmitter 110. 
[0016] In accordance with some embodiments, the link 120, the transmit logic 115, and the receive logic 155 may use one or more data interface protocols. For example, in some embodiments, the link 120, the transmitter 110, and the receiver 150 may use the M-PHY specification of the Mobile Industry Processor Interface (MIPI) Alliance (MIPI Specification for M-PHY Version 1.00.00 of February 8, 2011, approved April 28, 2011). In such embodiments, the link 120 may include serial lines carrying differential PWM signals. Optionally, such differential signals may be referred to as "self-clocking" if clock information is included in the period of the differential signal waveform. [0017] In one or more embodiments, the differential PWM signals of the link 120 may operate under one or more data rate ranges of the M-PHY specification (referred to as "gears"). For example, the link 120 may operate under gear 1 (3 Mbps to 9 Mbps), gear 2 (6 Mbps to 18 Mbps), gear 3 (12 Mbps to 36 Mbps), gear 4 (24 Mbps to 72 Mbps), gear 5 (48 Mbps to 144 Mbps), gear 6 (96 Mbps to 288 Mbps), gear 7 (192 Mbps to 576 Mbps), etc. [0018] In accordance with some embodiments, the transmit logic 115 may include functionality to convert parallel data into differential PWM signals. Further, the transmit logic 115 may also include functionality to generate a format clock signal to format the differential PWM signals. In addition, the transmit logic 115 may also include functionality to generate a transmit clock signal based on the format clock signal. This functionality of the transmit logic 115 is described further below with reference to FIGS. 2-5. [0019] In one or more embodiments, the transmit logic 115 and/or the receive logic 155 may be implemented in hardware, software, and/or firmware. In firmware and software embodiments, they may be implemented by computer executed instructions stored in a non-transitory computer readable medium, such as an optical, semiconductor, or magnetic storage device. While shown with this particular implementation in the embodiment of FIG. 1, the scope of the various embodiments discussed herein is not limited in this regard. [0020] Referring to FIG. 2, shown is a timing chart of a system in accordance with one or more embodiments. The timing chart shows an example of power states (i.e., voltage levels) of differential-p line 121 and differential-n line 122 with respect to time. In some embodiments, the differential lines 121 and 122 may transport differential PWM signals 123, 124, and may together correspond generally to the link 120 shown in FIG. 1. [0021] In one or more embodiments, the transmit logic 115 (shown in FIG. 1) may generate the differential PWM signals 123, 124 based on parallel data. Further, in some embodiments, the receive logic 155 (shown in FIG. 1) may sample the differential lines 121 and 122 for a designated time period to determine how the transferred data (e.g., a single PWM data bit) may be expressed. For example, as shown, the differential-p signal 123 may be expressed when the differential-p line 121 is in a high power state and the differential-n line 122 is in a low power state. Similarly, the differential-n signal 124 may be expressed when the differential-n line 122 is in a high power state and the differential-p line 121 is in a low power state. [0022] In addition, in one or more embodiments, the PWM data bit may be defined by the relative duration of the differential signals 123, 124 during the PWM data bit time period. 
In some embodiments, the relative duration of the differential signals 123, 124 may be defined in terms of equal portions of the PWM data bit time period. Further, the number of the equal portions may be expressed as a predetermined multiple (e.g., 2, 3, 4, etc.). For example, assume that the predetermined multiple is three. Thus, in this example, the duration of the differential signals 123, 124 is defined in terms of one-third portions of the PWM data bit time period. This example may be illustrated in FIG. 2 in accordance with some embodiments. As shown, a "0" data bit 125 may be expressed when the differential-n signal 124 corresponds to two-thirds of the PWM data bit period, and the differential-p signal 123 corresponds to the remaining one-third of the PWM data bit period. Further, as shown, a "1" data bit 126 may be expressed when the differential-n signal 124 corresponds to one-third of the PWM data bit period, and the differential-p signal 123 corresponds to the remaining two-thirds of the PWM data bit period. Note that, while the example shown in FIG. 2 assumes a predetermined multiple of three, embodiments are not limited in this regard. In some embodiments, the predetermined multiple may be defined in accordance with a given standard or specification (e.g., the M-PHY specification). [0023] Referring to FIG. 3A, shown is a block diagram of a signal generation logic 200 in accordance with one or more embodiments. More specifically, the signal generation logic 200 may generally correspond to all or a portion of the transmit logic 115 shown in FIG. 1. In some embodiments, the signal generation logic 200 may include a clock generator 210, a dividing unit 220, a serializer 230, and a PWM signal generator 240. [0024] As shown, in one or more embodiments, the clock generator 210 may receive a gear selection input. In one or more embodiments, the gear selection input may be any identifier or indication to identify any one of a number of gears (i.e., data rate ranges). In some embodiments, the gear selection input may be one of the seven gears as defined by the M-PHY specification. [0025] In one or more embodiments, the clock generator 210 may include functionality to select a transmit clock rate based on the gear selection input. For example, assume that the gear selection input corresponds to gear 1 (i.e., 3-9 Mbps). In some embodiments, the clock generator 210 may set the transmit clock rate as corresponding to the lower bound of gear 1 (i.e., 3 MHz), to the upper bound of gear 1 (i.e., 9 MHz), to the mid-point of gear 1 (i.e., 6 MHz), or to any other level or value within gear 1. [0026] Further, in one or more embodiments, the clock generator 210 may include functionality to generate a format clock signal having a frequency that is the predetermined multiple (e.g., 3X) of the selected transmit clock rate. For example, assuming that the selected transmit clock rate is 6 MHz and the predetermined multiple is three, the clock generator 210 may generate a format clock signal having a frequency of 18 MHz (i.e., three times faster than 6 MHz). In some embodiments, the clock generator 210 may include a delay locked loop (DLL), a phase locked loop (PLL), and/or any similar components. Of course, while this example assumes a predetermined multiple of three, embodiments are not limited in this regard.
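By way of illustration only (this sketch is not part of the specification), the gear-based rate selection described above for the clock generator 210 may be expressed in a few lines of Python. The gear bounds are taken from the example gear list above; the function names, and the default choice of the mid-point rate, are assumptions of this sketch.

```python
# Minimal sketch of the gear-based rate selection described for clock
# generator 210.  Gear bounds (in Mbps) follow the example gear list above;
# for PWM gears, a rate of N Mbps corresponds to an N MHz transmit clock
# (one bit per clock pulse).  Helper names are hypothetical.
PWM_GEARS_MBPS = {1: (3, 9), 2: (6, 18), 3: (12, 36), 4: (24, 72),
                  5: (48, 144), 6: (96, 288), 7: (192, 576)}

def select_transmit_clock_mhz(gear: int, point: str = "mid") -> float:
    """Pick a transmit clock rate (MHz) within the selected gear."""
    low, high = PWM_GEARS_MBPS[gear]
    return {"low": low, "high": high, "mid": (low + high) / 2}[point]

def format_clock_mhz(gear: int, multiple: int = 3) -> float:
    """Format clock runs at a predetermined multiple of the transmit rate."""
    return multiple * select_transmit_clock_mhz(gear)

# Example: gear 1, mid-point -> 6 MHz transmit clock, 18 MHz format clock.
assert select_transmit_clock_mhz(1) == 6.0
assert format_clock_mhz(1) == 18.0
```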
[0027] As shown, the format clock signal may be provided to the dividing unit 220 and the PWM signal generator 240. In one or more embodiments, the dividing unit 220 may include functionality to divide a frequency of the format clock signal by the predetermined multiple to obtain a transmit clock signal. For example, in the case that the format clock signal has a frequency of 18 MHz and the predetermined multiple is three, the dividing unit 220 may provide a transmit clock signal having a frequency of 6 MHz. One example embodiment of the dividing unit 220 is described below with reference to FIG. 3B. [0028] In one or more embodiments, one pulse of the transmit clock signal may correspond to a single PWM data bit time period (e.g., the "0" data bit 125 or the "1" data bit 126 shown in FIG. 2). Note that the time duration of one pulse of the format clock signal is equal to the time duration of a pulse of the transmit clock signal divided by the predetermined multiple. Accordingly, one pulse of the format clock signal may correspond to the PWM data bit time period divided by the predetermined multiple. [0029] In accordance with some embodiments, the transmit clock signal and parallel input data may be provided to the serializer 230. In one or more embodiments, the serializer 230 may include functionality to convert the parallel input data into serial data. For example, in some embodiments, the serializer 230 may include a Parallel-In, Serial-Out (PISO) component. Further, in one or more embodiments, the serializer 230 may perform this conversion such that the resulting serial data bits are synchronized to the transmit clock signal. In some embodiments, the parallel input data may be any parallel data to be transmitted to a receiver (e.g., receiver 150 shown in FIG. 1). [0030] As shown, in one or more embodiments, the serial data may be provided to the PWM signal generator 240. In one or more embodiments, the PWM signal generator 240 may include functionality to convert the serial data into differential PWM signals (e.g., differential signals 123, 124 shown in FIG. 2). [0031] In one or more embodiments, the PWM signal generator 240 may use the format clock signal to define the timing of the generated differential signals. Specifically, as described above, each pulse of the format clock signal may correspond to the PWM data bit time period divided by the predetermined multiple. Thus, assuming a predetermined multiple of three, in order to convert a "0" value serial data bit into differential PWM form, the PWM signal generator 240 may generate a differential-n signal for two pulses of the format clock signal (i.e., for the first two-thirds of the PWM data bit period), and may then generate a differential-p signal for one pulse of the format clock signal (i.e., for the remaining one-third of the PWM data bit period). Further, in order to convert a "1" value serial data bit into differential PWM form, the PWM signal generator 240 may generate a differential-n signal for one pulse of the format clock signal (i.e., for the first one-third of the PWM data bit period), and may then generate a differential-p signal for two pulses of the format clock signal (i.e., for the remaining two-thirds of the PWM data bit period). In this manner, the timing of the differential signals generated by the PWM signal generator 240 may be aligned to the PWM data bit period. In one or more embodiments, such alignment of the differential signals may facilitate recovery of the PWM data bits by the receiver. While the above example assumes a predetermined multiple of three, embodiments are not limited in this regard.
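The formatting behavior described above for the PWM signal generator 240 may be sketched as follows, assuming a predetermined multiple of three. The "N" and "P" labels (for the differential-n and differential-p states) and the function name are illustrative assumptions, not terminology from the specification.

```python
# Sketch of the PWM formatting described for signal generator 240, with a
# predetermined multiple of three: each serial bit occupies three format-clock
# pulses, split between differential-n ("N") and differential-p ("P") states.
def pwm_encode(bits, multiple=3):
    """Return one line state ("N" or "P") per format-clock pulse."""
    states = []
    for bit in bits:
        n_pulses = 1 if bit else 2   # "1" -> one-third N; "0" -> two-thirds N
        states += ["N"] * n_pulses + ["P"] * (multiple - n_pulses)
    return states

# A "0" bit is N,N,P (two-thirds differential-n); a "1" bit is N,P,P.
assert pwm_encode([0, 1]) == ["N", "N", "P", "N", "P", "P"]
```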
[0032] Referring now to FIG. 3B, shown is a block diagram of a dividing logic 300 in accordance with one or more embodiments. More specifically, in some embodiments, the dividing logic 300 may generally correspond to all or part of the dividing unit 220 shown in FIG. 3A. Further, the dividing logic 300 may correspond to a situation in which the predetermined multiple is three. [0033] As shown, the dividing logic 300 may receive a format clock signal. In one or more embodiments, the format clock signal may be provided by a clock generator (e.g., clock generator 210 shown in FIG. 3A). In some embodiments, each pulse of the format clock signal may correspond to one-third of a single PWM data bit time period. [0034] As shown in FIG. 3B, in one or more embodiments, the format clock signal may be provided to a first inverter 310. The format clock signal may also be provided, along with an output of the first inverter 310, to inputs of a 2-to-1 multiplexer 320. As shown, the output of the 2-to-1 multiplexer 320 may be provided to clock inputs of a first flip flop (FF) 330 and a second FF 350. Further, a reset signal may be coupled to the reset inputs of the first FF 330 and the second FF 350. In one or more embodiments, the reset signal may be provided by a processor (not shown) to initiate a PWM signaling process. [0035] As shown, in one or more embodiments, the output of the first FF 330 may be provided to a buffer 340. In some embodiments, the output of the buffer 340 may be a transmit clock signal. In one or more embodiments, this transmit clock signal may be provided to a serializer (e.g., serializer 230 shown in FIG. 3A) for use in serializing parallel data. In one or more embodiments, each pulse of the transmit clock signal may correspond to a single PWM data bit time period. [0036] In accordance with some embodiments, the output of the buffer 340 may also be provided to a data input of the second FF 350. The output of the second FF 350 may be coupled to a second buffer 360. Further, the output of the second buffer 360 may be provided to a second inverter 370. As shown, the output of the second inverter 370 may be coupled to a data input of the first FF 330. Further, in one or more embodiments, the output of the second buffer 360 may also be provided to a selector input of the 2-to-1 multiplexer 320. [0037] In one or more embodiments, the dividing logic 300 shown in FIG. 3B may provide a transmit clock signal having a pulse time period three times longer than the pulse time period of the received format clock signal. Accordingly, in some embodiments, the dividing logic 300 may enable generation of differential PWM signals having precise one-third portions. [0038] Note that the examples shown in FIGS. 1, 2, 3A, and 3B are provided for the sake of illustration, and are not intended to limit any embodiments. For example, referring to FIG. 1, embodiments may include any number and/or arrangement of transmitters 110 and/or receivers 150. In another example, referring to FIG. 3A, the signal generation logic 200 may include additional and/or different components to provide differential PWM signals. In yet another example, referring to FIG. 3B, the dividing logic 300 may include additional and/or different components to provide a transmit clock signal based on a format clock signal. While some of the examples shown in FIGS. 1, 2, 3A, and 3B assume a predetermined multiple of three, embodiments are not limited in this regard.
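The divide-by-three behavior of the dividing unit 220 may be modeled behaviorally as shown below. This sketch uses a simple modulo counter rather than the flip-flop/multiplexer network of FIG. 3B, so it reproduces only the input/output relationship (one transmit-clock pulse spanning three format-clock pulses); the function name is hypothetical.

```python
# Behavioral sketch of dividing unit 220: the transmit clock completes one
# cycle for every `multiple` format-clock cycles.  FIG. 3B realizes this with
# flip-flops and a multiplexer; a modulo counter gives the same divide-by-3
# input/output behavior.
def divide_format_clock(format_edges: int, multiple: int = 3):
    """Yield the transmit-clock level sampled at each format-clock rising edge."""
    for edge in range(format_edges):
        # High for the first format pulse of each bit period, low for the
        # rest, so one transmit pulse spans `multiple` format pulses.
        yield 1 if edge % multiple == 0 else 0

# 18 MHz format clock divided by 3 -> 6 MHz transmit clock:
# one transmit pulse per three format-clock edges.
assert list(divide_format_clock(6)) == [1, 0, 0, 1, 0, 0]
```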
[0039] It is contemplated that some embodiments may include any number of components in addition to those shown, and that different arrangements of the components shown may occur in certain implementations. Further, it is contemplated that specifics in the examples shown in FIGS. 1, 2, 3A, and 3B may be used anywhere in one or more embodiments. [0040] FIG. 4 shows a sequence 400 for generating differential signals in accordance with one or more embodiments. In one embodiment, the sequence 400 may be part of the transmit logic 115 shown in FIG. 1. In other embodiments, the sequence 400 may be implemented by any other part of transmitter 110. The sequence 400 may be implemented in hardware, software, and/or firmware. In firmware and software embodiments, it may be implemented by computer-executed instructions stored in a non-transitory computer readable medium, such as an optical, semiconductor, or magnetic storage device. [0041] At step 410, a format clock signal may be generated based on a gear selection input. For example, referring to FIG. 3A, the clock generator 210 may receive a gear selection input (e.g., gear 1), and may select a transmit clock rate based on the gear selection input. The clock generator 210 may then generate a format clock signal having a frequency that is a predetermined multiple (e.g., 2X, 3X, 4X, etc.) of the selected transmit clock rate. In some embodiments, the clock generator 210 may include a delay locked loop (DLL), a phase locked loop (PLL), and/or any similar components. [0042] At step 420, the format clock signal (generated at step 410) may be divided by the predetermined multiple to obtain a transmit clock signal. For example, referring to FIG. 3A, the dividing unit 220 may divide the format clock signal (e.g., 18 MHz) by three to obtain the transmit clock signal (e.g., 6 MHz). In one or more embodiments, the dividing unit 220 may include some or all of the dividing logic 300 shown in FIG. 3B. [0043] At step 430, parallel data may be serialized based on the transmit clock signal to obtain serial data. For example, referring to FIG. 3A, the serializer 230 may convert the parallel input data into serial data using the transmit clock signal. In some embodiments, the PWM data bit time period of the resulting serial data bits may be equivalent to a pulse period of the transmit clock signal. In accordance with some embodiments, the serializer 230 may be a PISO unit. [0044] At step 440, differential PWM signals may be generated based on the serial data (obtained at step 430) and the format clock signal (generated at step 410). For example, referring to FIG. 3A, the PWM signal generator 240 may convert serial data into differential PWM signals (e.g., differential signals 123, 124 shown in FIG. 2). In one or more embodiments, the PWM signal generator 240 may use the format clock signal to define equal portions of the PWM data bit time period. In this manner, the PWM signal generator 240 may generate differential PWM signals that are time-aligned to the PWM data bit time period, and may thus enable efficient data recovery by a receiver (e.g., receiver 150 shown in FIG. 1). After step 440, the sequence 400 ends.
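The sequence 400 may be summarized end-to-end with the sketch below, which reuses the hypothetical select_transmit_clock_mhz and pwm_encode helpers from the earlier sketches. The MSB-first bit order of the serializer is an assumption; the division of step 420 is implicit in the 3:1 ratio between the format and transmit clock rates.

```python
# End-to-end sketch of sequence 400: select rates from the gear input,
# serialize parallel data MSB-first (an assumption of this sketch), then
# PWM-format each bit.  Reuses select_transmit_clock_mhz and pwm_encode
# from the sketches above.
def transmit_pipeline(parallel_word: int, width: int, gear: int,
                      multiple: int = 3):
    tx_clk_mhz = select_transmit_clock_mhz(gear)     # rate from gear input
    fmt_clk_mhz = multiple * tx_clk_mhz              # step 410: format clock
    # Step 420 (divide-by-multiple) is implicit: tx_clk_mhz == fmt_clk_mhz / 3.
    serial_bits = [(parallel_word >> i) & 1          # step 430: PISO
                   for i in range(width - 1, -1, -1)]
    line_states = pwm_encode(serial_bits, multiple)  # step 440: PWM format
    return fmt_clk_mhz, serial_bits, line_states

fmt, bits, states = transmit_pipeline(0b10, width=2, gear=1)
assert (fmt, bits) == (18.0, [1, 0])
assert states == ["N", "P", "P", "N", "N", "P"]
```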
[0045] Referring now to FIG. 5, shown is a timing diagram of a system in accordance with one or more embodiments. Specifically, the timing diagram may correspond to various signals involved in the systems and processes discussed above with reference to FIGS. 1-4. Further, the timing diagram may correspond to an example in which the predetermined multiple is three. [0046] The first signal shown in FIG. 5 is a format clock signal 501. As discussed above, the format clock signal 501 may be generated based on a gear selection input. [0047] The second signal shown in FIG. 5 is a reset signal 502. In one or more embodiments, the reset signal 502 may be provided by a processor (not shown) to initiate a PWM signaling process. [0048] The third signal shown in FIG. 5 is a select signal 503. For example, referring to FIG. 3B, the select signal 503 may correspond to the output of the buffer 340, and may be provided to a selector input of the 2-to-1 multiplexer 320. [0049] The fourth signal shown in FIG. 5 is a transmit clock signal 504. In one or more embodiments, each pulse of the transmit clock signal 504 may correspond to a single PWM data bit time period. Further, in some embodiments, the transmit clock signal 504 may be generated using the dividing logic 300 shown in FIG. 3B. [0050] In the example shown in FIG. 5, a rising edge in the reset signal 502 may activate the dividing logic 300. After activation, in response to a rising edge of the format clock signal 501, the dividing logic 300 may initiate a first pulse of the transmit clock signal 504. Further, in response to the next rising edge of the format clock signal 501, the dividing logic 300 may initiate a first pulse of the select signal 503. Furthermore, in response to a falling edge of the format clock signal 501, the dividing logic 300 may end the first pulse of the transmit clock signal 504. [0051] Next, in response to another falling edge of the format clock signal 501, the dividing logic 300 may end the first pulse of the select signal 503. Finally, in response to another rising edge of the format clock signal 501, the dividing logic 300 may initiate a second pulse of the transmit clock signal 504. The above-described process may then be repeated to generate subsequent pulses of the transmit clock signal 504. [0052] Note that, in this example shown in FIG. 5, a given time period 510 is equal to a single pulse period (e.g., from rising edge to rising edge) of the transmit clock signal 504. However, in the case of the format clock signal 501, the given time period 510 is equivalent to three pulse periods. While this example assumes a predetermined multiple of three, embodiments are not limited in this regard. [0053] Referring now to FIG. 6, shown is a block diagram of a processor in accordance with one or more embodiments. As shown in FIG. 6, processor 600 may be a multicore processor including a plurality of cores 610a-610n. Each core may be associated with a corresponding voltage regulator 612a-612n. The various cores may be coupled via an interconnect 615 to an uncore logic that includes various components. As seen, the uncore logic may include a shared cache 630, which may be a last level cache. In addition, the uncore logic may include an integrated memory controller 640, various interfaces 650 and transmit/receive logic 655. [0054] In one or more embodiments, transmit/receive logic 655 may include all or a portion of the signal generation logic 200 and/or the dividing logic 300 described above with reference to FIGS. 3A-3B. Thus, the transmit/receive logic 655 may enable the cores 610a-610n and/or other components (e.g., components included in a mobile computing device) to generate a format clock signal and/or a transmit clock signal in accordance with some embodiments.
[0055] With further reference to FIG. 6, processor 600 may communicate with a system memory 660, e.g., via a memory bus. In addition, by interfaces 650, connection can be made to various off-chip components such as peripheral devices, mass storage and so forth. While shown with this particular implementation in the embodiment of FIG. 6, the scope of the various embodiments discussed herein is not limited in this regard. [0056] Embodiments may be used in many different environments. Referring now to FIG. 7, shown is a block diagram of a computer system 730 with which embodiments can be used. The computer system 730 may include a hard drive 734 and a removable storage medium 736, coupled by a bus (shown as an arrow) to a chipset core logic 710. A keyboard and/or mouse 720, or other conventional components, may be coupled to the chipset core logic. [0057] The core logic may couple to the graphics processor 712 and the applications processor 700 in one embodiment. The graphics processor 712 may also be coupled to a frame buffer 714. The frame buffer 714 may be coupled to a display device 718, such as a liquid crystal display (LCD) touch screen. In one embodiment, the graphics processor 712 may be a multi-threaded, multi-core parallel processor using single instruction multiple data (SIMD) architecture. [0058] The chipset logic 710 may include a non-volatile memory port to couple to the main memory 732. Also coupled to the core logic 710 may be a radio transceiver and antenna(s) 721. Speakers 724 may also be coupled to core logic 710. [0059] Referring now to FIG. 8, shown is a block diagram of an example system 800 with which embodiments can be used. As seen, system 800 may be a smartphone or other wireless communicator. As shown in the block diagram of FIG. 8, system 800 may include a baseband processor 810, which may be a multicore processor that can handle both baseband processing tasks as well as application processing. Thus baseband processor 810 can perform various signal processing with regard to communications, as well as perform computing operations for the device. In turn, baseband processor 810 can couple to a user interface/display 820, which can be realized, in some embodiments, by a touch screen display. [0060] In addition, baseband processor 810 may couple to a memory system including, in the embodiment of FIG. 8, a non-volatile memory, namely a flash memory 830, and a system memory, namely a dynamic random access memory (DRAM) 835. As further seen, baseband processor 810 can further couple to a capture device 840 such as an image capture device that can record video and/or still images. [0061] To enable communications to be transmitted and received, various circuitry may be coupled between baseband processor 810 and an antenna 880. Specifically, a radio frequency (RF) transceiver 870 and a wireless local area network (WLAN) transceiver 875 may be present. In general, RF transceiver 870 may be used to receive and transmit wireless data and calls according to a given wireless communication protocol, such as a 3G or 4G wireless communication protocol, e.g., in accordance with a code division multiple access (CDMA), global system for mobile communication (GSM), long term evolution (LTE) or other protocol. Other wireless communications, such as receipt or transmission of radio signals, e.g., AM/FM, or global positioning satellite (GPS) signals, may also be provided.
In addition, via WLAN transceiver 875, local wireless signals, such as according to a Bluetooth™ standard or an IEEE 802.11 standard such as IEEE 802.11a/b/g/n, can also be realized. Although shown at this high level in the embodiment of FIG. 8, understand the scope of the present invention is not limited in this regard. [0062] Embodiments may be used in many different types of systems. For example, in one embodiment a communication device can be arranged to perform the various methods and techniques described herein. Of course, the scope of the present invention is not limited to a communication device, and instead other embodiments can be directed to other types of apparatus for processing instructions, or one or more machine readable media including instructions that in response to being executed on a computing device, cause the device to carry out one or more of the methods and techniques described herein. [0063] Embodiments may be implemented in code and may be stored on a non-transitory storage medium having stored thereon instructions which can be used to program a system to perform the instructions. The storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, solid state drives (SSDs), compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions. [0064] The following clauses and/or examples pertain to further embodiments. One example embodiment may be an apparatus including: a clock generator to generate a format clock signal; a serializer to generate serial data based on a transmit clock signal and parallel input data; and a signal generator to generate at least two differential signals based on the format clock signal and the serial data. The apparatus may also include a dividing unit to obtain the transmit clock signal by dividing a frequency of the format clock signal by a predetermined multiple. The dividing unit may include a multiplexer. The dividing unit may also include at least two flip flops, where clock inputs of the at least two flip flops are each coupled to an output of the multiplexer. A pulse period of the transmit clock signal may be a predetermined multiple of a pulse period of the format clock signal. The serializer may be a Parallel-In, Serial-Out (PISO) unit. The clock generator may be to generate the format clock signal based on a gear selection input. The transmit clock rate may correspond to a midpoint of a gear associated with the gear selection input. The at least two differential signals may be Pulse Width Modulated (PWM) signals. The PWM signals may be to conform to the Mobile Industry Processor Interface (MIPI) M-PHY Specification. The pulse period of the transmit clock signal may be equal to a PWM data bit time period. The pulse period of the format clock signal may be equal to one-third of a PWM data bit time period. The timing of the at least two differential signals may be aligned with a pulse period of the format clock signal.
[0065] Another example embodiment may be a system including: a system on a chip comprising at least one core having at least one execution unit and transmit logic, the transmit logic including: a clock generator to generate a format clock signal; a dividing unit to obtain a transmit clock signal based on the format clock signal; a serializer to generate serial data based on the transmit clock signal and input data; and a signal generator to generate two or more differential signals based on the format clock signal and the serial data. The system may also include a wireless device coupled to the system on the chip via an interconnect, the interconnect used to communicate data between the wireless device and the transmit logic of the system on the chip. The dividing unit may be to obtain the transmit clock signal by dividing a frequency of the format clock signal by a predetermined multiple. A frequency of the transmit clock signal may be one-third of a frequency of the format clock signal. The clock generator may be to generate the format clock signal based on a gear selection input. The clock generator may include a delay locked loop (DLL). The clock generator may include a phase locked loop (PLL). [0066] Yet another example embodiment may be a method including: generating, in a transmit logic of a first device, a format clock signal; dividing the format clock signal by a predetermined multiple to obtain a transmit clock signal; serializing parallel data based on the transmit clock signal to obtain serial data; and generating a plurality of differential signals based on the serial data and the format clock signal. Generating the format clock signal may include selecting a transmit clock rate based on a gear selection input. The predetermined multiple may be three. Each of the plurality of differential signals may be a Pulse Width Modulated (PWM) signal. Each of the plurality of differential signals may be to conform to the Mobile Industry Processor Interface (MIPI) M-PHY Specification. [0067] References throughout this specification to "one embodiment" or "an embodiment" mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation encompassed within the present invention. Thus, appearances of the phrase "one embodiment" or "in an embodiment" are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be instituted in suitable forms other than the particular embodiment illustrated, and all such forms may be encompassed within the claims of the present application. [0068] While the present invention has been described with respect to a limited number of embodiments for the sake of illustration, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present invention.
The application relates to integrated assemblies and methods of forming integrated assemblies. Some embodiments include an integrated assembly having a vertical stack of alternating insulative and conductive levels. The conductive levels have terminal regions and nonterminal regions. The terminal regions are vertically thicker than the nonterminal regions. Channel material extends vertically through the stack. Tunneling material is adjacent the channel material. Charge-storage material is adjacent the tunneling material. High-k dielectric material is between the charge-storage material and the terminal regions of the conductive levels. The insulative levels have carbon-containing first regions between the terminal regions of neighboring conductive levels, and have second regions between the nonterminal regions of the neighboring conductive levels. Some embodiments include methods of forming integrated assemblies. |
1. An integrated assembly comprising: a vertical stack of alternating insulative levels and conductive levels; the conductive levels having terminal regions, and having nonterminal regions proximate the terminal regions; the terminal regions being vertically thicker than the nonterminal regions; channel material extending vertically through the stack; tunneling material adjacent the channel material; charge-storage material adjacent the tunneling material; high-k dielectric material between the charge-storage material and the terminal regions of the conductive levels; the insulative levels having first regions vertically between the terminal regions of neighboring conductive levels, and having second regions vertically between the nonterminal regions of the neighboring conductive levels; and the first regions of the insulative levels comprising carbon.2. The integrated assembly of claim 1, wherein the first regions of the insulative levels comprise the carbon in combination with one or more of silicon, oxygen, and nitrogen.3. The integrated assembly of claim 2, wherein the first regions of the insulative levels have a horizontal thickness within a range of about 1 nm to about 12 nm.4. The integrated assembly of claim 2, wherein the first regions of the insulative levels have a horizontal thickness within a range of about 2 nm to about 4 nm.5. The integrated assembly of claim 1, wherein the second regions of the insulative levels comprise silicon dioxide.6. The integrated assembly of claim 1, wherein the second regions of the insulative levels comprise voids.7. The integrated assembly of claim 1, wherein the first regions of the insulative levels comprise SiOC, where the chemical formula indicates primary constituents rather than a specific stoichiometry; and wherein the carbon is present in a concentration within a range of about 1 at% to about 50 at%.8. The integrated assembly of claim 7, wherein the carbon is present in a concentration within a range of about 4 at% to about 20 at%.9. The integrated assembly of claim 1, wherein the first regions of the insulative levels comprise SiC, where the chemical formula indicates primary constituents rather than a specific stoichiometry; and wherein the carbon is present in a concentration within a range of about 1 at% to about 50 at%.10. The integrated assembly of claim 9, wherein the carbon is present in a concentration within a range of about 4 at% to about 20 at%.11. The integrated assembly of claim 1, wherein the first regions of the insulative levels comprise SiNC, where the chemical formula indicates primary constituents rather than a specific stoichiometry; and wherein the carbon is present in a concentration within a range of about 1 ppm to about 5 at%.12. The integrated assembly of claim 1, wherein: the conductive levels include a conductive liner material along an outer periphery of a conductive core material; the conductive liner material differs in composition from the conductive core material; the terminal regions include only the conductive liner material; and the nonterminal regions include both the conductive liner material and the conductive core material.13. The integrated assembly of claim 12, wherein the terminal regions join to the nonterminal regions at corners having angles of about 90°.14. The integrated assembly of claim 12, wherein the terminal regions are substantially vertically straight.15. The integrated assembly of claim 12, wherein the conductive liner material comprises metal nitride.16. The integrated assembly of claim 15, wherein the metal nitride comprises titanium nitride; and wherein the conductive core material consists of tungsten.17. The integrated assembly of claim 1, wherein the nonterminal regions are substantially vertically centered relative to the terminal regions along each of the conductive levels.18. An integrated assembly comprising: a vertical stack of alternating insulative levels and conductive levels; the conductive levels having terminal regions, and having nonterminal regions proximate the terminal regions; the terminal regions being vertically thicker than the nonterminal regions; the conductive levels including a conductive liner material along an outer periphery of a conductive core material; the conductive liner material differing in composition from the conductive core material; the terminal regions including only the conductive liner material; the nonterminal regions including both the conductive liner material and the conductive core material; the conductive liner material having a substantially uniform thickness along the nonterminal and terminal regions of the conductive levels; the terminal regions joining to the nonterminal regions at corners having angles of about 90°; the nonterminal regions being substantially vertically centered relative to the terminal regions along the conductive levels; channel material extending vertically through the stack; tunneling material adjacent the channel material; charge-storage material adjacent the tunneling material; charge-blocking material adjacent the charge-storage material; and high-k dielectric material between the charge-blocking material and the terminal regions of the conductive levels.19. The integrated assembly of claim 18, wherein the conductive liner material comprises titanium nitride; and wherein the conductive core material consists of tungsten.20. The integrated assembly of claim 18, wherein the terminal regions of the conductive levels have a first vertical thickness; wherein the nonterminal regions of the conductive levels have a second vertical thickness; and wherein the first vertical thickness is greater than the second vertical thickness by an amount within a range of about 1 nm to about 20 nm.21. The integrated assembly of claim 20, wherein the amount is within a range of about 1 nm to about 8 nm.22. The integrated assembly of claim 20, wherein the second vertical thickness is within a range of about 15 nm to about 40 nm.23. The integrated assembly of claim 18, wherein the insulative levels have first regions vertically between the terminal regions of neighboring conductive levels, and have second regions vertically between the nonterminal regions of the neighboring conductive levels; and wherein voids extend across the first and second regions.24. The integrated assembly of claim 18, wherein the insulative levels have first regions vertically between the terminal regions of neighboring conductive levels, and have second regions vertically between the nonterminal regions of the neighboring conductive levels; and wherein the first regions differ in composition from the second regions.25. The integrated assembly of claim 24, wherein the first regions comprise one or more of SiC, SiOC, and SiNC, where the chemical formulas indicate primary constituents rather than specific stoichiometries.26. The integrated assembly of claim 18, wherein: the high-k dielectric material is arranged in a first section of the vertical stack; the charge-blocking material is arranged in a second section of the vertical stack; and the charge-storage material is arranged in a third section of the vertical stack.27. A method of forming an integrated assembly, comprising: forming a vertical stack of alternating first and second levels; the first levels comprising a first material, and the second levels comprising a second material; forming an opening extending through the stack, the opening having a peripheral sidewall; forming a liner along the peripheral sidewall; the liner being a carbon-containing material; the liner having first regions along the first levels and second regions along the second levels; forming dielectric barrier material adjacent the liner; forming charge-blocking material adjacent the dielectric barrier material; forming charge-storage material adjacent the charge-blocking material; forming tunneling material adjacent the charge-storage material; forming channel material adjacent the tunneling material; removing the second material to leave voids between the first levels and to expose the second regions of the liner; oxidizing the exposed second regions of the liner to form oxidized segments of the liner; the oxidized segments of the liner being first segments of the liner; the first segments of the liner vertically alternating with second segments of the liner; removing the first segments of the liner to expose regions of the dielectric barrier material; and forming conductive levels within the voids; the conductive levels having terminal ends with front surfaces along, and directly against, the exposed regions of the dielectric barrier material.28. The method of claim 27, wherein the first segments of the liner have terminal portions that extend beyond the second levels to extend along the first levels.29. The method of claim 28, wherein the terminal portions extend beyond the second levels by at least about 1 nm.30. The method of claim 27, wherein the carbon-containing material comprises SiOC, where the chemical formula indicates primary constituents rather than a specific stoichiometry; and wherein the carbon is present in a concentration within a range of about 4 at% to about 20 at%.31. The method of claim 27, wherein the carbon-containing material comprises SiC, where the chemical formula indicates primary constituents rather than a specific stoichiometry; and wherein the carbon is present in a concentration within a range of about 4 at% to about 20 at%.32. The method of claim 27, wherein the carbon-containing material comprises SiCN, where the chemical formula indicates primary constituents rather than a specific stoichiometry; and wherein the carbon is present in a concentration within a range of about 1 ppm to about 5 at%.33. The method of claim 27, wherein the first material is silicon dioxide and the second material is silicon nitride.34. The method of claim 27, wherein the voids are first voids, and wherein the method further comprises removing the first material to leave second voids.35. The method of claim 27, wherein the exposed regions of the dielectric barrier material are first regions of the dielectric barrier material; wherein the voids are first voids; and wherein the method further comprises: removing the first material to leave second voids, the second voids exposing the second segments of the liner; oxidizing the exposed second segments of the liner; removing the oxidized second segments of the liner to expose second regions of the dielectric barrier material; lining the second voids with sacrificial material to narrow the second voids; extending the narrowed second voids through the second regions of the dielectric barrier material, through the charge-blocking material, and through the charge-storage material; and removing the sacrificial material.36. A method of forming an integrated assembly, comprising: forming a vertical stack of alternating first and second levels; the first levels comprising a first material, and the second levels comprising a second material; forming an opening extending through the stack, the opening having a peripheral sidewall; forming dielectric barrier material adjacent the peripheral sidewall; forming charge-blocking material adjacent the dielectric barrier material; forming charge-storage material adjacent the charge-blocking material; forming tunneling material adjacent the charge-storage material; forming channel material adjacent the tunneling material; removing the second material to leave first voids between the first levels; forming conductive levels within the first voids; the conductive levels having terminal ends with front surfaces along, and directly against, the dielectric barrier material; removing the first material to leave second voids; lining the second voids with sacrificial material to narrow the second voids; extending the narrowed second voids through the dielectric barrier material, the charge-blocking material, and the charge-storage material; and removing the sacrificial material.37. The method of claim 36, further comprising forming liner material along the peripheral sidewall, and wherein the dielectric barrier material is formed along the liner material.38. The method of claim 37, wherein the liner material comprises metal.39. The method of claim 38, wherein the metal includes one or both of tungsten and ruthenium.40. The method of claim 37, wherein the liner material comprises carbon.41. The method of claim 40, wherein the liner material comprises the carbon in combination with one or more of silicon, oxygen, and nitrogen.42. The method of claim 41, wherein the liner material comprises SiOC, where the chemical formula indicates primary constituents rather than a specific stoichiometry; and wherein the carbon is present in a concentration within a range of about 4 at% to about 20 at%.43. The method of claim 41, wherein the liner material comprises SiC, where the chemical formula indicates primary constituents rather than a specific stoichiometry; and wherein the carbon is present in a concentration within a range of about 4 at% to about 20 at%.44. The method of claim 41, wherein the liner material comprises SiCN, where the chemical formula indicates primary constituents rather than a specific stoichiometry; and wherein the carbon is present in a concentration within a range of about 1 ppm to about 5 at%.
Integrated Assembly and Method of Forming an Integrated Assembly Technical Field Integrated assemblies (for example, integrated NAND memory), and methods of forming integrated assemblies. Background Memory provides data storage for electronic systems. Flash memory is one type of memory, and has numerous uses in modern computers and devices. For instance, a modern personal computer may have its BIOS stored on a flash memory chip. As another example, computers and other devices increasingly utilize flash memory in solid state drives to replace conventional hard drives. As yet another example, flash memory is popular in wireless electronic devices because it enables manufacturers to support new communication protocols as they become standardized, and to provide the ability to remotely upgrade the devices for enhanced features. NAND may be a basic architecture of flash memory, and may be configured to comprise vertically-stacked memory cells. Before describing NAND in detail, it may be helpful to more generally describe the relationship of a memory array within an integrated arrangement. FIG. 1 shows a block diagram of a prior art device 1000 which includes a memory array 1002 having a plurality of memory cells 1003 arranged in rows and columns, together with access lines 1004 (e.g., wordlines to conduct signals WL0 through WLm) and first data lines 1006 (e.g., bitlines to conduct signals BL0 through BLn). The access lines 1004 and the first data lines 1006 may be used to transfer information to and from the memory cells 1003. A row decoder 1007 and a column decoder 1008 decode address signals A0 through AX on address lines 1009 to determine which of the memory cells 1003 are to be accessed. A sense amplifier circuit 1015 operates to determine the values of information read from the memory cells 1003. An I/O circuit 1017 transfers values of information between the memory array 1002 and input/output (I/O) lines 1005. Signals DQ0 through DQN on the I/O lines 1005 can represent values of information read from or to be written into the memory cells 1003. Other devices can communicate with the device 1000 through the I/O lines 1005, the address lines 1009, or the control lines 1020. A memory control unit 1018 controls memory operations to be performed on the memory cells 1003 based on signals on the control lines 1020. The device 1000 can receive supply voltage signals Vcc and Vss on a first supply line 1030 and a second supply line 1032, respectively. The device 1000 includes a select circuit 1040 and the input/output (I/O) circuit 1017. The select circuit 1040 can respond, via the I/O circuit 1017, to signals CSEL1 through CSELn to select signals on the first data lines 1006 and the second data lines 1013 that can represent the values of information to be read from or to be programmed into the memory cells 1003. The column decoder 1008 can selectively activate the CSEL1 through CSELn signals based on the A0 through AX address signals on the address lines 1009. The select circuit 1040 can select the signals on the first data lines 1006 and the second data lines 1013 to provide communication between the memory array 1002 and the I/O circuit 1017 during read and programming operations.
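Purely as an illustration of the row/column decoding described for device 1000 (and not taken from the reference), the sketch below splits an address into a row index for row decoder 1007 and a column index for column decoder 1008. How the A0-AX bits are actually partitioned is not specified above, so the split used here is an assumption.

```python
# Illustrative sketch of the address decoding described for device 1000.
# The partition of the A0-AX bits between row and column is NOT specified
# in the reference, so the low-bits/high-bits split here is an assumption.
def decode_address(address: int, row_bits: int, col_bits: int):
    """Return (row, column) selecting one memory cell 1003 in array 1002."""
    row = address & ((1 << row_bits) - 1)                # row decoder 1007
    col = (address >> row_bits) & ((1 << col_bits) - 1)  # column decoder 1008
    return row, col

# 10-bit address with 6 row bits and 4 column bits -> row 5, column 9.
assert decode_address((9 << 6) | 5, row_bits=6, col_bits=4) == (5, 9)
```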
The memory array 1002 of FIG. 1 may be a NAND memory array, and FIG. 2 shows a schematic diagram of a three-dimensional NAND memory device 200 that can be used for the memory array 1002 of FIG. 1. The device 200 includes multiple strings of charge storage devices. In a first direction (Z-Z'), each string of charge storage devices may comprise, for example, 32 charge storage devices stacked over one another, with each charge storage device corresponding to one of, for example, 32 tiers (e.g., Tier0 through Tier31). The charge storage devices of a respective string may share a common channel region, such as one formed in a respective pillar of semiconductor material (e.g., polysilicon) about which the string of charge storage devices is formed. In a second direction (X-X'), each first group of, for example, sixteen first groups of the plurality of strings may comprise, for example, eight strings sharing a plurality (e.g., 32) of access lines (i.e., "global control gate (CG) lines", also known as wordlines, WLs). Each access line may couple the charge storage devices within a tier. The charge storage devices coupled by the same access line (and thus corresponding to the same tier) may be logically grouped into, for example, two pages, such as P0/P32, P1/P33, P2/P34, etc., when each charge storage device comprises a cell capable of storing two bits of information. In a third direction (Y-Y'), each second group of, for example, eight second groups of the plurality of strings may comprise sixteen strings coupled by a corresponding one of eight data lines. The size of a memory block may comprise 1,024 pages and total about 16 MB (e.g., 16 WLs x 32 tiers x 2 bits = 1,024 pages/block; block size = 1,024 pages x 16 KB/page = 16 MB). The number of strings, tiers, access lines, data lines, first groups, second groups, and/or pages may be greater or smaller than those shown in FIG. 2.
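The block-size arithmetic in the example above may be checked directly; the short sketch below simply re-computes the stated figures.

```python
# Worked version of the block-size arithmetic above:
# 16 WLs x 32 tiers x 2 bits/cell = 1,024 pages/block;
# 1,024 pages x 16 KB/page = 16 MB/block.
wordlines, tiers, bits_per_cell, page_kb = 16, 32, 2, 16
pages_per_block = wordlines * tiers * bits_per_cell
block_mb = pages_per_block * page_kb // 1024
assert (pages_per_block, block_mb) == (1024, 16)
```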
FIG. 3 shows a cross-sectional view of a memory block 300 of the 3D NAND memory device 200 of FIG. 2 in the X-X' direction, including fifteen strings of charge storage devices in one of the sixteen first groups of strings described with respect to FIG. 2. The plurality of strings of the memory block 300 may be grouped into a plurality of subsets 310, 320, 330 (e.g., tile columns), such as tile column I, tile column j, and tile column K, with each subset (e.g., tile column) comprising a "partial block" of the memory block 300. A global drain-side select gate (SGD) line 340 may be coupled to the SGDs of the plurality of strings. For example, the global SGD line 340 may be coupled to a plurality (e.g., three) of sub-SGD lines 342, 344, 346 via a corresponding one of a plurality (e.g., three) of sub-SGD drivers 332, 334, 336, with each sub-SGD line corresponding to a respective subset (e.g., tile column). Each of the sub-SGD drivers 332, 334, 336 may couple or cut off, in parallel, the SGDs of the strings of the corresponding partial block (e.g., tile column), independently of the SGDs of other partial blocks. A global source-side select gate (SGS) line 360 may be coupled to the SGSs of the plurality of strings. For example, the global SGS line 360 may be coupled to a plurality of sub-SGS lines 362, 364, 366 via a corresponding one of a plurality of sub-SGS drivers 322, 324, 326, with each sub-SGS line corresponding to a respective subset (e.g., tile column). Each of the sub-SGS drivers 322, 324, 326 may couple or cut off, in parallel, the SGSs of the strings of the corresponding partial block (e.g., tile column), independently of the SGSs of other partial blocks. A global access line (e.g., a global CG line) 350 may couple the charge storage devices corresponding to a respective tier of each of the plurality of strings. Each global CG line (e.g., the global CG line 350) may be coupled to a plurality of sub-access lines (e.g., sub-CG lines) 352, 354, 356 via a corresponding one of a plurality of substring drivers 312, 314, 316. Each of the substring drivers may couple or cut off, in parallel, the charge storage devices corresponding to the respective partial block and/or tier, independently of the charge storage devices of other partial blocks and/or other tiers. The charge storage devices corresponding to a respective subset (e.g., partial block) and a respective tier may comprise a "partial tier" (e.g., a single "tile") of charge storage devices. The strings corresponding to a respective subset (e.g., partial block) may be coupled to a corresponding one of sub-sources 372, 374, 376 (e.g., "tile sources"), with each sub-source coupled to a respective power source. The NAND memory device 200 is alternatively described with reference to the schematic illustration of FIG. 4. The memory array 200 includes wordlines 202_1 to 202_N and bitlines 228_1 to 228_M. The memory array 200 also includes NAND strings 206_1 to 206_M. Each NAND string includes charge storage transistors 208_1 to 208_N. The charge storage transistors may use floating gate material (e.g., polysilicon) to store charge, or may use charge-trapping material (e.g., silicon nitride, metallic nanodots, etc.) to store charge. The charge storage transistors 208 are located at intersections of the wordlines 202 and the strings 206, and represent nonvolatile memory cells for storage of data. The charge storage transistors 208 of each NAND string 206 are connected in series, source to drain, between a source selection device (e.g., source-side select gate, SGS) 210 and a drain selection device (e.g., drain-side select gate, SGD) 212. Each source selection device 210 is located at an intersection of a string 206 and a source selection line 214, while each drain selection device 212 is located at an intersection of a string 206 and a drain selection line 215. The selection devices 210 and 212 may be any suitable access devices, and are illustrated generally by blocks in FIG. 4.
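A minimal data model of the FIG. 4 organization (charge storage transistors in series between a source selection device and a drain selection device, with one string per bitline in this simplified view) is sketched below. The class and field names are illustrative assumptions, not terminology from the reference.

```python
# Illustrative-only model of the FIG. 4 organization: each NAND string 206
# holds one charge storage transistor 208 per wordline 202, in series
# between an SGS device 210 and an SGD device 212 (modeled as flags here).
from dataclasses import dataclass, field
from typing import List

@dataclass
class NandString:
    bit_line: int                                   # bitline 228_m
    cells: List[int] = field(default_factory=list)  # one cell per wordline 202_n
    sgs_on: bool = False                            # source selection device 210
    sgd_on: bool = False                            # drain selection device 212

def build_array(num_strings: int, num_wordlines: int) -> List[NandString]:
    return [NandString(bit_line=m, cells=[0] * num_wordlines)
            for m in range(num_strings)]

array = build_array(num_strings=4, num_wordlines=8)
# The cell at the intersection of wordline 202_3 and string 206_2:
assert array[2].cells[3] == 0
```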
The charge storage transistor 208 has its control gate 236 coupled to the word line 202. The columns of charge storage transistors 208 are those transistors within the NAND string 206 that are coupled to a given bit line 228. The rows of charge storage transistors 208 are those transistors that are commonly coupled to a given word line 202.It is desirable to develop improved NAND architectures and improved methods for manufacturing NAND architectures.Summary of the inventionIn one aspect, the present disclosure relates to an integrated assembly including: a vertical stack of alternating insulating and conductive levels; the conductive level has a terminal area and a non-terminal area near the terminal area; so The terminal area is thicker in the vertical direction than the non-terminal area; a channel material, which extends vertically through the stack; a tunneling material, which is adjacent to the channel material; a charge storage material, which is adjacent to the Tunneling material; high-k dielectric material, which is located between the charge storage material and the terminal area of the conductive level; the insulating level has a vertical gap between the terminal areas of adjacent conductive levels The first area has a second area vertically between the non-terminal areas of the adjacent conductive level; and the first area of the insulating level includes carbon.In another aspect, the present disclosure relates to an integrated assembly comprising: a vertical stack of alternating insulating and conductive levels; the conductive level has a terminal area and a non-terminal area near the terminal area; The terminal area is thicker in the vertical direction than the non-terminal area; the conductive layer includes a conductive lining material along the outer periphery of the conductive core material; the composition of the conductive lining material is different from the conductive core material; The terminal area includes only the conductive lining material; the non-terminal area includes both the conductive lining material and the conductive core material; the conductive lining material is along the non-terminal and terminal areas of the conductive level Having a substantially uniform thickness; the terminal area is joined to the non-terminal area at a corner having an angle of about 90°; the non-terminal area is substantially vertically centered with respect to the terminal area along the conductive level A channel material, which extends vertically through the stack; a tunneling material, which is adjacent to the channel material; a charge storage material, which is adjacent to the tunneling material; a charge blocking material, which is adjacent to the charge storage material And a high-k dielectric material, which is between the charge blocking material and the terminal area of the conductive layer.In another aspect, the present disclosure relates to a method of forming an integrated assembly, comprising: forming a vertical stack of alternating first and second levels; the first level includes a first material, and the second The level includes a second material; an opening extending through the stack is formed, the opening having a peripheral side wall; a liner is formed along the peripheral side wall; the liner is a carbonaceous material; the liner is along the first A level has a first area and a second area along the second level; a dielectric barrier material is formed adjacent to the liner; a charge blocking material is formed adjacent to the dielectric barrier 
material; a charge storage is formed adjacent to the charge blocking material Material; forming a tunneling material adjacent to the charge storage material; forming a channel material adjacent to the tunneling material; removing the second material to leave voids between the first levels and exposing all of the liner The second region; the exposed second region of the liner is oxidized to form an oxidation section of the liner; the oxidation section of the liner is the first section of the liner; the first section of the liner Alternate vertically with the second section of the liner; removing the first section of the liner to expose the area of the dielectric barrier material; and forming a conductive layer in the void; the conductive layer has a front end, The front surface of the front end is along the exposed area of the dielectric barrier material and directly abuts the exposed area.In yet another aspect, the present disclosure relates to a method of forming an integrated assembly, comprising: forming a vertical stack of alternating first and second levels; the first level includes a first material, and the second The level includes a second material; forming an opening extending through the stack, the opening having a peripheral sidewall; forming a dielectric barrier material adjacent to the peripheral sidewall; forming a charge blocking material adjacent to the dielectric barrier material; The charge blocking material forms a charge storage material; a tunneling material is formed adjacent to the charge storage material; a channel material is formed adjacent to the tunneling material; the second material is removed to leave a first layer between the first levels A gap; forming a conductive level in the first gap; the conductive level has a front end with a front surface; the front surface is along the dielectric barrier material and directly against the dielectric barrier material; removing the The first material to leave a second gap; the second gap is lined with a sacrificial material to narrow the second gap; the narrowed second gap extends through the dielectric barrier material, the charge Blocking material and the charge storage material; and removing the sacrificial material.Description of the drawingsFigure 1 shows a block diagram of a prior art memory device having a memory array containing memory cells.Figure 2 shows a schematic diagram of the prior art memory array of Figure 1 in the form of a 3D NAND memory device.FIG. 3 shows a cross-sectional view of the prior art 3D NAND memory device of FIG. 2 in the XX′ direction.Figure 4 is a schematic diagram of a prior art NAND memory array.5 and 6 are schematic cross-sectional side views of areas of the integrated assembly shown at an example continuous process stage of an example method for forming an example NAND memory array.FIG. 6A is a diagrammatic top view of a portion of the integrated assembly of FIG. 6. FIG.Figures 7-9 are schematic cross-sectional side views of the regions of the integrated assembly of Figure 5 shown at an example continuous process stage of an example method for forming an example NAND memory array. The process stage of FIG. 7 may follow the process stage of FIG. 6.FIG. 9A is a top-down diagrammatic view of a portion of the integrated assembly of FIG. 9. FIG.10-14 are schematic cross-sectional side views of the regions of the integrated assembly of FIG. 5 shown at an example continuous process stage of an example method for forming an example NAND memory array. The process stage of FIG. 
10 may follow the process stage of FIG. 9.14A is a schematic cross-sectional side view of the area of the integrated assembly of FIG. 5 shown at an example continuous process stage, which may follow the process stage of FIG. 14.15 is a schematic cross-sectional side view of the area of the integrated assembly of FIG. 5 shown at an example continuous process stage, which may be after the process stage of FIG. 1415A is a schematic cross-sectional side view of the area of the integrated assembly of FIG. 5 shown at an example continuous process stage, which may follow the process stage of FIG. 15.16 is a schematic cross-sectional side view of the area of the integrated assembly of FIG. 5 shown at an example continuous process stage, which may follow the process stage of FIG. 15.16A is a schematic cross-sectional side view of the area of the integrated assembly of FIG. 5 shown at an example continuous process stage, which may follow the process stage of FIG. 16.17-20 are schematic cross-sectional side views of the regions of the integrated assembly of FIG. 5 shown at an example continuous process stage of an example method for forming an example NAND memory array. The process stage of FIG. 17 may follow the process stage of FIG. 16.21-27 are schematic cross-sectional side views of the regions of the integrated assembly of FIG. 5 shown at an example continuous process stage of an example method for forming an example NAND memory array. The process stage of FIG. 21 may follow the process stage of FIG. 10.28 to 35 are schematic cross-sectional side views of the regions of the integrated assembly of FIG. 5 shown at an example continuous process stage of an example method for forming an example NAND memory array. The process stage of FIG. 28 may follow the process stage of FIG. 6.36-40 are schematic cross-sectional side views of the area of the integrated assembly of FIG. 5 shown at an example continuous process stage of an example method for forming an example NAND memory array. The process stage of FIG. 36 may follow the process stage of FIG. 31.Detailed waysSome embodiments include integrated assemblies that have alternating conductive and insulating levels; and have carbon-containing materials in the area of the insulating levels. Some embodiments include methods of forming an integrated assembly. The method may use etch stop materials (eg, carbon-containing materials, metal-containing materials, etc.) to protect the dielectric barrier material during removal of materials adjacent to the dielectric material. Alternatively, the method may omit the etching stop material, and may alternatively use etching conditions that selectively remove one or more materials relative to the dielectric barrier material.The operation of a NAND memory cell includes the movement of charge between the channel material and the charge storage material. For example, the programming of a NAND memory cell may include moving charge (ie, electrons) from the channel material into the charge storage material, and then store the charge in the charge storage material. The erasure of the NAND memory cell may include moving holes into the charge storage material to recombine with the electrons stored in the charge storage material, and thereby release the charge from the charge storage material. The charge storage material may include a charge trapping material (e.g., silicon nitride, metal dots, etc.). 
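A first-order picture, offered here as standard device physics rather than as part of the present disclosure: the trapped charge manifests as a shift of the memory cell's threshold voltage,

\[ \Delta V_{T} \approx -\,\frac{Q_{\mathrm{trap}}}{C_{\mathrm{eff}}} \]

where Q_trap is the charge stored within the charge trapping material (negative for electrons) and C_eff is the effective capacitance between the control gate and the charge centroid; both are generic textbook symbols, not labeled elements of the figures. Injecting electrons thus raises the apparent threshold voltage of the cell (programming), and injecting holes to recombine with those electrons lowers it again (erasing).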
A problem with conventional NAND may be that the charge trapping material extends across multiple memory cells of a memory array, which may lead to charge migration from one memory cell to another. Charge migration can cause data retention problems. Some embodiments include NAND architectures having breaks in the charge trapping material in regions between memory cells; such breaks can advantageously impede migration of charge between the memory cells.

Example embodiments are described with reference to FIGS. 5 to 40.

Referring to FIG. 5, a construction (integrated assembly, integrated structure) 10 includes a vertical stack 12 of alternating first levels 14 and second levels 16. The first levels 14 include a first material 60, and the second levels 16 include a second material 62. The first and second materials may comprise any suitable compositions, and have different compositions relative to one another. In some embodiments, the first material 60 may comprise, consist essentially of, or consist of silicon dioxide; and the second material 62 may comprise, consist essentially of, or consist of silicon nitride. The levels 14 and 16 may be of any suitable thicknesses, and may be the same thickness as one another or different thicknesses relative to one another. In some embodiments, the levels 14 and 16 may have vertical thicknesses in a range of from about 10 nanometers (nm) to about 400 nm. In some embodiments, the levels 14 and 16 may have vertical thicknesses in a range of from about 10 nm to about 50 nm. In some embodiments, the first levels 14 and second levels 16 may have vertical thicknesses in a range of from about 15 nm to about 40 nm, in a range of from about 15 nm to about 20 nm, etc.

The stack 12 is shown to be supported over (i.e., formed above) a substrate 18. The substrate 18 may comprise semiconductor material, and may, for example, comprise, consist essentially of, or consist of single-crystal silicon. The substrate 18 may be referred to as a semiconductor substrate. The term "semiconductor substrate" means any construction comprising semiconductive material, including, but not limited to, bulk semiconductive materials such as a semiconductive wafer (either alone or in assemblies comprising other materials), and semiconductive material layers (either alone or in assemblies comprising other materials). The term "substrate" refers to any supporting structure, including, but not limited to, the semiconductor substrate described above. In some applications, the substrate 18 may correspond to a semiconductor substrate containing one or more materials associated with integrated circuit fabrication. Such materials may include, for example, one or more of refractory metal materials, barrier materials, diffusion materials, insulator materials, etc.

A gap is provided between the stack 12 and the substrate 18 to indicate that other components and materials may be provided between the stack 12 and the substrate 18. Such other components and materials may include additional levels of the stack, source line levels, source-side select gates (SGSs), etc.

Referring to FIG. 6, an opening 64 is formed to extend through the stack 12. The opening 64 has a sidewall 65 extending along the first material 60 and the second material 62.

FIG. 6A is a top view of one of the levels 14 of the region of the assembly 10 at the process stage of FIG. 6, and shows that the opening 64 may have a closed shape (circular, oval, square, other polygonal, etc.) when viewed from above. In the illustrated embodiment, the opening 64 is circular when viewed from above. The sidewall 65 along the cross-section of FIG. 6 is part of a continuous sidewall 65, as shown in the top view of FIG. 6A. The sidewall 65 may be referred to as a peripheral sidewall of the opening, or as a peripheral sidewall surface of the opening. The terms "peripheral sidewall" and "peripheral sidewall surface" may be used interchangeably. The use of one term in some instances and the other in other instances may provide language variants within this disclosure to simplify antecedent basis in the claims that follow.

The opening 64 may be representative of a large number of substantially identical openings formed at the process stage of FIGS. 6 and 6A and utilized for fabricating NAND memory cells of a NAND memory array. The term "substantially identical" means identical to within reasonable tolerances of fabrication and measurement.

Referring to FIG. 7, a liner 20 is formed along the peripheral sidewall 65. The liner comprises a liner material 22. The liner material 22 may be utilized as an etch stop during subsequent processing, and may comprise any suitable composition.

In some embodiments, the liner material 22 may be a carbon-containing material. For example, the liner material 22 may comprise, consist essentially of, or consist of carbon in combination with one or more of silicon, oxygen and nitrogen.

In some embodiments, the liner material 22 may comprise, consist essentially of, or consist of SiOC, where the chemical formula indicates primary constituents rather than a specific stoichiometry, and where carbon is present at a concentration within a range of from about 1 atomic percent (at%) to about 50 at%. In some embodiments, carbon may be present in the SiOC at a concentration within a range of from about 4 at% to about 20 at%.

In some embodiments, the liner material 22 may comprise, consist essentially of, or consist of SiC, where the chemical formula indicates primary constituents rather than a specific stoichiometry, and where carbon is present at a concentration within a range of from about 1 at% to about 50 at%. In some embodiments, carbon may be present in the SiC at a concentration within a range of from about 4 at% to about 20 at%.

In some embodiments, the liner material 22 may comprise, consist essentially of, or consist of SiNC, where the chemical formula indicates primary constituents rather than a specific stoichiometry, and where carbon is present at a concentration within a range of from about one part per million (1 ppm) to about 5 at%.

In some embodiments, the liner material 22 may comprise, consist essentially of, or consist of one or more metals (e.g., one or both of tungsten and ruthenium).

The liner may have any suitable horizontal thickness T.
In some embodiments, such horizontal thickness may be in a range of from about 1 nm to about 12 nm, in a range of from about 2 nm to about 4 nm, etc.

Although the liner 20 is shown as having a single homogeneous composition, in other embodiments (not shown) the liner 20 may comprise a laminate of two or more different compositions.

The liner 20 may be considered to have first areas 24 along the first levels 14, and second areas 26 along the second levels 16.

Referring to FIG. 8, a high-k dielectric material (dielectric barrier material) 28 is formed along (adjacent) the liner 20. The dielectric barrier material 28 may be considered to be adjacent the sidewall 65 of the opening 64, though in the illustrated embodiment it is spaced from the sidewall by the liner 20.

The term "high-k" means a dielectric constant greater than that of silicon dioxide. In some embodiments, the high-k dielectric material 28 may comprise, consist essentially of, or consist of one or more of aluminum oxide (AlO), hafnium oxide (HfO), hafnium silicate (HfSiO), zirconium oxide (ZrO) and zirconium silicate (ZrSiO), where the chemical formulas indicate primary constituents rather than specific stoichiometries.

The high-k dielectric material 28 has a substantially uniform thickness, where the term "substantially uniform" means uniform to within reasonable tolerances of fabrication and measurement. The high-k dielectric material 28 may be formed to any suitable thickness, and in some embodiments may be formed to a thickness within a range of from about 1 nm to about 5 nm.

Referring to FIGS. 9 and 9A (where FIG. 9A is a top view of one of the levels 14 of FIG. 9), a charge blocking material 34 is formed along the dielectric barrier material 28. The charge blocking material 34 may comprise any suitable composition, and in some embodiments may comprise, consist essentially of, or consist of one or both of silicon oxynitride (SiON) and silicon dioxide (SiO2).

A charge storage material 38 is formed adjacent the charge blocking material 34. The charge storage material 38 may comprise any suitable composition. In some embodiments, the charge storage material 38 may comprise charge trapping material, such as silicon nitride, silicon oxynitride, conductive nanodots, etc. For example, in some embodiments the charge storage material 38 may comprise, consist essentially of, or consist of silicon nitride. In alternative embodiments, the charge storage material 38 may be configured to include floating gate material (e.g., polysilicon).

In the embodiment of FIG. 9, the charge storage material 38 has a flat configuration. The term "flat configuration" means that the material 38 has a substantially continuous thickness and extends substantially vertically, rather than undulating.

A gate dielectric material (i.e., tunneling material, charge transfer material) 42 is formed adjacent the charge storage material 38. The gate dielectric material 42 may comprise any suitable composition. In some embodiments, the gate dielectric material 42 may comprise, for example, one or more of silicon dioxide, silicon nitride, silicon oxynitride, aluminum oxide, hafnium oxide, zirconium oxide, etc. The gate dielectric material 42 may be band-gap engineered to achieve desired electrical properties, and accordingly may comprise a combination of two or more different materials.

Channel material 44 is formed adjacent the gate dielectric material 42, and extends vertically along the stack 12. The channel material 44 comprises semiconductor material, and may comprise any suitable composition or combination of compositions. For example, the channel material 44 may comprise one or more of silicon, germanium, III/V semiconductor materials (e.g., gallium phosphide), semiconductor oxides, etc.; with the term III/V semiconductor material referring to semiconductor materials comprising elements selected from groups III and V of the periodic table (groups III and V being old nomenclature, now referred to as groups 13 and 15). In some embodiments, the channel material 44 may comprise, consist essentially of, or consist of silicon.

Insulative material 36 is formed adjacent the channel material 44, and fills the remainder of the opening 64 (FIG. 8). The insulative material 36 may comprise any suitable composition, and in some embodiments may comprise, consist essentially of, or consist of silicon dioxide.

In the embodiment of FIGS. 9 and 9A, the channel material 44 is configured as an annular ring surrounding the insulative material 36. Such configuration of the channel material may be considered to comprise a hollow channel configuration, with the insulative material 36 being provided within the "hollow" of the annular channel configuration. In other embodiments (not shown), the channel material may be configured as a solid pillar.

Referring to FIG. 10, the second material 62 (FIG. 9) is removed, leaving voids 30 along the second levels 16 (i.e., between the first levels 14). The voids 30 may be referred to as first voids to distinguish them from other voids formed at subsequent process stages.

The voids 30 may be formed with any suitable processing that selectively removes the material 62 relative to the materials 60 and 22 (FIG. 9). In some embodiments, such processing may utilize hot phosphoric acid.

The second areas 26 of the liner 20 are exposed through the voids 30.

Referring to FIG. 11, the exposed second areas 26 (FIG. 10) of the liner 20 are oxidized to form oxidized sections 46. In the illustrated embodiment, stippling is utilized to assist the reader in identifying the oxidized sections 46. The oxidized sections 46 may be referred to as first sections. Such first sections 46 alternate vertically with non-oxidized second sections 48 of the liner 20. In the illustrated embodiment, the oxidized first sections extend beyond the second areas 26 of the liner (FIG. 10; such second areas being the areas along the second levels 16) to include terminal portions 50 along the first levels 14. The terminal portions 50 may be considered to extend a distance D beyond the second levels. In some embodiments, such distance D may be 0 (i.e., the terminal portions 50 may be absent). In other embodiments, the distance D may be greater than 0, greater than 0.5 nm, greater than 1 nm, greater than 2 nm, etc.
In some example embodiments, the distance D may be within a range of from about 0 to about 10 nm, within a range of from about 0 to about 4 nm, etc.

The oxidized sections (first sections) 46 may be formed under any suitable conditions, including, for example, exposure to one or more of O2, H2O2, O3, etc.

In some embodiments, the liner material 22 comprises carbon-containing material, and the oxidized sections 46 comprise an oxidized form of the carbon-containing material. Such oxidized form may have physical characteristics of a powdery material or fluff.

Referring to FIG. 12, the oxidized sections 46 (FIG. 11) are removed. Such removal may be accomplished with any suitable processing. For example, if the oxidized sections 46 comprise silicon, carbon and oxygen, the removal of such sections may utilize an etchant comprising hydrofluoric acid. The removal of the oxidized sections 46 exposes surfaces 29 of the dielectric barrier material 28.

It is noted that in some embodiments the oxidation of FIG. 11 may be omitted, and the exposed sections 26 of the liner material 22 of FIG. 10 may simply be removed with one or more appropriate etches to form a configuration analogous to that of FIG. 12. For example, in some embodiments the liner material 22 may comprise one or more metals, and such metals may be removed with appropriate etching without first being oxidized.

Referring to FIG. 13, conductive regions 32 are formed within the voids 30 (FIG. 10).

The conductive regions 32 may comprise two or more conductive materials, and in the illustrated embodiment comprise a pair of conductive materials 52 and 54. The conductive materials 52 and 54 may comprise any suitable electrically conductive compositions; for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.), and/or conductively-doped semiconductor materials (e.g., conductively-doped silicon, conductively-doped germanium, etc.). The compositions of the conductive materials 52 and 54 differ from one another.

The material 52 may be referred to as a conductive core material, and the material 54 may be referred to as a conductive liner material. The conductive liner material 54 is along an outer periphery of the conductive core material 52.

In some embodiments, the conductive core material 52 may comprise one or more metals (e.g., may comprise tungsten), and the conductive liner material 54 may comprise one or more metal nitrides (e.g., may comprise titanium nitride).

In the illustrated embodiment, the high-k dielectric material 28 directly abuts the conductive liner material 54.

The levels 16 may be regarded as conductive levels at the process stage of FIG. 13, with such conductive levels comprising the conductive regions 32. At the process stage of FIG. 13, the conductive levels 16 alternate with the insulating levels 14 within the vertical stack 12.

The conductive levels 16 have terminal areas 56 facing the dielectric barrier material 28, and have non-terminal areas 58 proximate the terminal areas 56. In the illustrated embodiment, the terminal areas 56 include only the conductive liner material 54, and the non-terminal areas 58 include both the conductive liner material 54 and the conductive core material 52.

The conductive liner material 54 has a substantially uniform thickness along the non-terminal and terminal areas (with the term "substantially uniform thickness" meaning a thickness that is uniform to within reasonable tolerances of fabrication and measurement).

The conductive levels 16 may be considered to have front surfaces 57 along the terminal areas 56. Such front surfaces extend along the dielectric barrier material 28 and directly abut the dielectric barrier material 28. In some embodiments, the dielectric barrier material 28 may be considered to comprise the surfaces 29 exposed at the process stage of FIG. 12, and the front surfaces 57 may be considered to directly abut such surfaces 29 of the dielectric barrier material 28.

The terminal areas 56 join to the non-terminal areas 58 at corners 66. In the illustrated embodiment, such corners have angles of about 90°. The term "about 90°" means 90° to within reasonable tolerances of fabrication and measurement.

The terminal areas 56 are shown to be substantially straight along the vertical direction, and are specifically shown to be vertically straight along the dielectric barrier material 28. This may be advantageous in that it may improve coupling of the terminal areas 56 with the charge storage material 38, as compared to conventional arrangements in which terminal areas of analogous conductive levels may be curved rather than straight.

The terminal areas 56 have a first vertical dimension D1, and the non-terminal areas 58 have a second vertical dimension D2. The first vertical dimension D1 may be equal to or greater than the second vertical dimension D2 (i.e., the terminal areas 56 may be vertically thicker than the non-terminal areas 58). In some embodiments, the first vertical dimension D1 may be greater than the second vertical dimension D2 by an amount within a range of from about 1 nm to about 20 nm, by an amount within a range of from about 1 nm to about 8 nm, etc.

In the illustrated embodiment, the non-terminal areas 58 are substantially vertically centered relative to the terminal areas 56 along each of the conductive levels 16 (with the term "substantially vertically centered" meaning vertically centered to within reasonable tolerances of fabrication and measurement).

The insulating levels 14 may be considered to have first areas 68 between the terminal areas 56 of vertically adjacent conductive levels 16, and to have second areas 70 between the non-terminal areas 58 of the vertically adjacent conductive levels. In the embodiment of FIG. 13, the first areas 68 and the second areas 70 comprise different compositions. Specifically, the first areas 68 comprise the liner material 22, and the second areas 70 comprise the insulative material 60. In some embodiments, the insulative material 60 may comprise, consist essentially of, or consist of silicon dioxide; and the liner material 22 may comprise carbon (e.g., may comprise carbon in combination with one or more of silicon, oxygen and nitrogen).

The conductive levels 16 may be regarded as memory cell levels (also referred to herein as word line levels) of a NAND configuration. The NAND configuration includes strings of memory cells (i.e., NAND strings), with the number of memory cells in the strings being determined by the number of vertically stacked levels 16. A NAND string may comprise any suitable number of memory cell levels.
For example, a NAND string may have 8 memory cell levels, 16 memory cell levels, 32 memory cell levels, 64 memory cell levels, 512 memory cell levels, 1024 memory cell levels, etc. The vertical stack 12 is indicated to extend vertically beyond the region shown, to indicate that there may be more vertically stacked levels than those specifically illustrated in the diagram of FIG. 13.

The NAND memory cells 40 include the dielectric barrier material 28, the charge blocking material 34, the charge storage material 38, the gate dielectric material 42 and the channel material 44. The illustrated NAND memory cells 40 form a portion of a vertically extending string of memory cells. Such string may be representative of a large number of substantially identical NAND strings formed during fabrication of a NAND memory array (with the term "substantially identical" meaning identical to within reasonable tolerances of fabrication and measurement).

Each of the NAND memory cells 40 includes a control gate region 72 within a conductive level 16. The control gate regions 72 comprise control gates analogous to those described above with reference to FIGS. 1 to 4. The conductive levels 16 include regions 74 adjacent to (proximate) the control gate regions 72. The regions 74 may be referred to as routing regions or word line regions. The control gate regions 72 comprise the terminal areas 56 of the conductive levels 16, and the routing regions 74 comprise the non-terminal areas 58 of the conductive levels 16.

The configuration of FIG. 13 may be a final structure of a memory arrangement (e.g., an arrangement of an assembly comprising NAND memory). Alternatively, the configuration of FIG. 13 may be subjected to further processing to form a memory arrangement. For example, FIG. 14 shows a process stage that may follow the process stage of FIG. 13. The first material 60 (FIG. 13) is removed to form second voids 76 along the levels 14 (i.e., to leave the second voids 76). The formation of the second voids 76 exposes the remaining sections 48 of the liner material 22.

FIG. 14A shows a process stage that may follow the process stage of FIG. 14. Specifically, the ends of the voids 76 may be capped with an insulative material 78 (e.g., silicon dioxide) to form a final assembly (e.g., a NAND memory assembly) comprising alternating insulating levels 14 and conductive levels 16; with the insulating levels 14 comprising the voids 76, the capping material 78 and the remaining sections 48 of the liner material 22. The voids 76 are between the non-terminal areas 58 of the vertically adjacent conductive levels 16, and the liner material 22 is between the terminal areas 56 of the vertically adjacent conductive levels 16. In other words, the insulating levels 14 may be considered to comprise the liner material 22 within the first areas 68, and to comprise the voids 76 within the second areas 70.

FIG. 15 shows another process stage that may follow the process stage of FIG. 14. The sections 48 of the liner material 22 are oxidized with processing similar to that described above with reference to FIG. 11. The oxidized sections 48 may be referred to as oxidized second sections of the liner material.

FIG. 15A shows a process stage that may follow the process stage of FIG. 15. Specifically, the ends of the voids 76 are capped with insulative material 78, utilizing processing similar to that described above with reference to FIG. 14A, to form a final assembly (e.g., a NAND memory assembly). The oxidized sections 48 are between the terminal areas 56 of the vertically adjacent conductive levels 16.

FIG. 16 shows another process stage that may follow the process stage of FIG. 15. The oxidized sections 48 (FIG. 15) are removed to expose sections 31 of the dielectric barrier material 28. In some embodiments, the exposed sections 31 may be referred to as second regions of the dielectric barrier material to distinguish them from the first regions 29 of the dielectric barrier material exposed at the process stage of FIG. 12. The oxidized sections 48 may be removed with processing similar to that described above with reference to FIG. 12.

FIG. 16A shows a process stage that may follow the process stage of FIG. 16. Specifically, the ends of the voids 76 are capped with insulative material 78, utilizing processing similar to that described above with reference to FIG. 14A, to form a final assembly (e.g., a NAND memory assembly). The voids 76 extend between the terminal areas 56 of the vertically adjacent conductive levels 16, as well as between the non-terminal areas 58 of the vertically adjacent conductive levels 16.

FIG. 17 shows a process stage that may follow the process stage of FIG. 16. The second voids 76 are lined with a sacrificial material 80 to narrow the second voids 76. The sacrificial material 80 may comprise any suitable composition, and in some embodiments may comprise, consist essentially of, or consist of silicon nitride. The sacrificial material 80 may be considered to be configured as strips 82.

Referring to FIG. 18, the narrowed second voids 76 are extended through the dielectric barrier material 28, the charge blocking material 34 and the charge storage material 38. The extended voids 76 divide the dielectric barrier material 28 into vertically spaced first linear segments 84, divide the charge blocking material 34 into vertically spaced second linear segments 86, and divide the charge storage material 38 into vertically spaced third linear segments 88.

In the embodiment of FIG. 18, the segments 84, 86 and 88 have substantially flat configurations. Also, the channel material 44 has a substantially flat configuration. A flat channel material may have a favorable influence on string current as compared to a non-flat configuration. Also, the flat segments 88 of the charge storage material may have desirable charge distributions.

The embodiment of FIG. 18 shows the voids 76 penetrating through the materials 28, 34 and 38, and stopping at the tunneling material 42. In other embodiments, the voids 76 may penetrate into or through the tunneling material.

Referring to FIG. 19, the sacrificial material 80 (FIG. 18) is removed.

Referring to FIG. 20, the ends of the voids 76 are capped with insulative material 78, utilizing processing similar to that described above with reference to FIG. 14A, to form a final assembly (e.g., a NAND memory assembly). The voids 76 extend between the terminal areas 56 of the vertically adjacent conductive levels 16, as well as between the non-terminal areas 58 of the vertically adjacent conductive levels 16.

As discussed above, in some embodiments the exposed sections 26 of the liner material 22 of FIG. 10 may be removed directly with appropriate etching, rather than being oxidized in accordance with the processing of FIG. 11. FIG. 21 shows a process stage that may follow the process stage of FIG. 10, and shows the exposed sections 26 (FIG. 10) of the liner material 22 removed with one or more suitable etches. In some embodiments, the liner material 22 may comprise one or more metals (e.g., one or both of tungsten and ruthenium), and the exposed sections 26 may be removed with etching that is selective for such metals relative to the dielectric barrier material 28 and the insulative material 60. An etch is considered to be selective for one material relative to another if the etch removes the one material faster than the other; which may include, but is not limited to, an etch that is 100% selective for the one material relative to the other.

Referring to FIG. 22, the assembly 10 is shown at a process stage subsequent to that of FIG. 21, and is similar to the assembly described above with reference to FIG. 13. Specifically, the conductive materials 52 and 54 are formed within the voids 30 (FIG. 21).

The configuration of FIG. 22 may be a final structure of a memory arrangement (e.g., an arrangement of an assembly comprising NAND memory). Alternatively, the configuration of FIG. 22 may be subjected to further processing to form a memory arrangement. For example, FIG. 23 shows a process stage that may follow the process stage of FIG. 22. The materials 60 and 22 have been removed from the levels 14 with suitable etching, leaving voids 76 along the levels 14.

Referring to FIG. 24, the sacrificial material 80 is formed within the voids 76, with processing similar to that described above with reference to FIG. 17, to narrow the voids.

Referring to FIG. 25, the narrowed voids 76 are extended through the dielectric barrier material 28, the charge blocking material 34 and the charge storage material 38, with processing similar to that described above with reference to FIG. 18.

Referring to FIG. 26, the sacrificial material 80 (FIG. 25) is removed with processing similar to that described above with reference to FIG. 19.

Referring to FIG. 27, the ends of the voids 76 are capped with insulative material 78, utilizing processing similar to that described above with reference to FIG. 14A, to form a final assembly (e.g., a NAND memory assembly).

In some embodiments, the liner material 22 (FIG. 9) may be omitted. For example, FIG. 28 shows the assembly 10 in an arrangement similar to that of FIG. 9, but lacking the liner material 22. The process stage of FIG. 28 may follow the process stage of FIG. 6.

Referring to FIG. 29, the sacrificial material 62 (FIG. 28) is removed, leaving the voids 30 along the levels 16.

Referring to FIG. 30, the conductive materials 52 and 54 are formed within the voids 30 (FIG. 29).

The configuration of FIG. 30 may be a final structure of a memory arrangement (e.g., an arrangement of an assembly comprising NAND memory). Alternatively, the configuration of FIG. 30 may be subjected to further processing to form a memory arrangement. For example, FIG. 31 shows a process stage that may follow the process stage of FIG. 30. The material 60 has been removed from the levels 14 with suitable etching, leaving voids 76 along the levels 14.

Referring to FIG. 32, the sacrificial material 80 is formed within the voids 76, with processing similar to that described above with reference to FIG. 17, to narrow the voids.

Referring to FIG. 33, the narrowed voids 76 are extended through the dielectric barrier material 28, the charge blocking material 34 and the charge storage material 38, with processing similar to that described above with reference to FIG. 18.

Referring to FIG. 34, the sacrificial material 80 (FIG. 33) is removed with processing similar to that described above with reference to FIG. 19.

Referring to FIG. 35, the ends of the voids 76 are capped with insulative material 78, utilizing processing similar to that described above with reference to FIG. 14A, to form a final assembly (e.g., a NAND memory assembly).

The processing of FIG. 32 shows the sacrificial material 80 formed within the voids 76 prior to etching through the dielectric barrier material 28. In other embodiments, the dielectric barrier material 28 may be etched prior to forming the sacrificial material 80 within the voids 76. For example, FIG. 36 shows a process stage that may follow the process stage of FIG. 31, and shows the dielectric barrier material 28 having been etched to expose surfaces 35 of the charge blocking material 34. In the illustrated embodiment, the etching of the dielectric barrier material recesses such material relative to the front faces (front surfaces) 57 of the conductive levels 16, thereby leaving cavities 90. In other embodiments, the dielectric barrier material may not be recessed relative to the front faces 57, and accordingly the cavities 90 may not be formed.

Referring to FIG. 37, the sacrificial material 80 is formed within the voids 76, with processing similar to that described above with reference to FIG. 17, to narrow the voids.

Referring to FIG. 38, the narrowed voids 76 are extended through the charge blocking material 34 and the charge storage material 38, with processing similar to that described above with reference to FIG. 18.

Referring to FIG. 39, the sacrificial material 80 (FIG. 38) is removed with processing similar to that described above with reference to FIG. 19.

Referring to FIG. 40, the ends of the voids 76 are capped with insulative material 78, utilizing processing similar to that described above with reference to FIG. 14A, to form a final assembly (e.g., a NAND memory assembly).

In operation, the charge storage material 38 may be configured to store information in the memory cells 40 of the various embodiments described herein. The value of information stored in an individual memory cell (with the term "value" representing one or more bits of data) may be based on the amount of charge (e.g., the number of electrons) stored within the charge storage region of the cell. The amount of charge within an individual charge storage region may be controlled (e.g., increased or decreased), at least in part, based on the value of voltage applied to an associated gate 72 (an example gate 72 is labeled in FIG. 13) and/or based on the value of voltage applied to the channel material 44.

The tunneling material 42 forms tunneling regions of the memory cells 40. Such tunneling regions may be configured to allow desired migration (e.g., transportation) of charge (e.g., electrons) between the charge storage material 38 and the channel material 44. The tunneling regions may be configured (i.e., engineered) to achieve a selected criterion, such as, but not limited to, an equivalent oxide thickness (EOT). The EOT quantifies the electrical properties of a tunneling region (e.g., its capacitance) in terms of a representative physical thickness.
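In conventional usage (a textbook relation, offered here as an aside rather than as a limitation of this disclosure), the EOT of a dielectric layer of physical thickness t and dielectric constant κ is

\[ \mathrm{EOT} \;=\; t \cdot \frac{\kappa_{\mathrm{SiO_2}}}{\kappa}, \qquad \kappa_{\mathrm{SiO_2}} \approx 3.9 \]

so that, for instance, 5 nm of a material with κ ≈ 19.5 presents an EOT of about 1 nm; the symbols here are generic quantities rather than labeled elements of the figures.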
For example, the EOT may be defined as the thickness of a theoretical silicon dioxide layer that would be required to have the same capacitance density as a given dielectric, ignoring leakage current and reliability considerations.

The charge blocking material 34 may provide a mechanism to block charge from flowing from the charge storage material 38 to the associated gates 72.

The dielectric barrier material (high-k material) 28 may be utilized to inhibit back-tunneling of charge carriers from the gates 72 toward the charge storage material 38. In some embodiments, the dielectric barrier material 28 may be considered to form dielectric barrier regions within the memory cells 40.

The assemblies and structures discussed above may be utilized within integrated circuits (with the term "integrated circuit" meaning an electronic circuit supported by a semiconductor substrate), and may be incorporated into electronic systems. Such electronic systems may be used in, for example, memory modules, device drivers, power modules, communication modems, processor modules and application-specific modules, and may include multilayer, multichip modules. The electronic systems may be any of a broad range of systems, such as, for example, cameras, wireless devices, displays, chipsets, set-top boxes, games, lamps, vehicles, clocks, televisions, cellular phones, personal computers, automobiles, industrial control systems, airplanes, etc.

Unless specified otherwise, the various materials, substances, compositions, etc., described herein may be formed with any suitable methodologies, either now known or yet to be developed, including, for example, atomic layer deposition (ALD), chemical vapor deposition (CVD), physical vapor deposition (PVD), etc.

The terms "dielectric" and "insulating" may be utilized to describe materials having insulative electrical properties. The terms are considered synonymous in this disclosure. The utilization of the term "dielectric" in some instances and the term "insulating" (or "electrically insulating") in other instances may provide language variation within this disclosure to simplify antecedent basis within the claims that follow, and is not utilized to indicate any significant chemical or electrical differences.

The terms "electrically connected" and "electrically coupled" may both be utilized in this disclosure. The terms are considered synonymous. The utilization of one term in some instances and the other in other instances may provide language variation within this disclosure to simplify antecedent basis within the claims that follow.

The particular orientations of the various embodiments in the drawings are for illustrative purposes only, and the embodiments may be rotated relative to the shown orientations in some applications. The descriptions provided herein, and the claims that follow, pertain to any structures that have the described relationships between various features, regardless of whether the structures are in the particular orientations of the drawings or are rotated relative to such orientations.

Unless indicated otherwise, the accompanying cross-sectional views show features only within the planes of the cross-sections, and do not show materials behind the planes of the cross-sections, in order to simplify the drawings.

When a structure is referred to as being "on", "adjacent" or "against" another structure, it can be directly on the other structure, or intervening structures may also be present. In contrast, when a structure is referred to as being "directly on", "directly adjacent" or "directly against" another structure, there are no intervening structures present. The terms "directly under", "directly over", etc., do not indicate direct physical contact (unless expressly stated otherwise), but instead indicate vertical alignment.

Structures (e.g., layers, materials, etc.) may be referred to as "extending vertically" to indicate that the structures generally extend upwardly from an underlying base (e.g., substrate). The vertically extending structures may or may not extend substantially orthogonally relative to an upper surface of the base.

Some embodiments include an integrated assembly having a vertical stack of alternating insulating and conductive levels. The conductive levels have terminal areas, and have non-terminal areas proximate the terminal areas. The terminal areas are vertically thicker than the non-terminal areas. Channel material extends vertically through the stack. Tunneling material is adjacent the channel material. Charge storage material is adjacent the tunneling material. High-k dielectric material is between the charge storage material and the terminal areas of the conductive levels. The insulating levels have first areas vertically between the terminal areas of adjacent conductive levels, and have second areas vertically between the non-terminal areas of the adjacent conductive levels. The first areas of the insulating levels include carbon.

Some embodiments include an integrated assembly comprising a vertical stack of alternating insulating and conductive levels. The conductive levels have terminal areas, and have non-terminal areas proximate the terminal areas. The terminal areas are vertically thicker than the non-terminal areas. The conductive levels include a conductive liner material along an outer periphery of a conductive core material. The conductive liner material differs in composition from the conductive core material. The terminal areas include only the conductive liner material. The non-terminal areas include both the conductive liner material and the conductive core material. The conductive liner material has a substantially uniform thickness along the non-terminal and terminal areas of the conductive levels. The terminal areas join to the non-terminal areas at corners having angles of about 90°. The non-terminal areas are substantially vertically centered relative to the terminal areas along the conductive levels. Channel material extends vertically through the stack. Tunneling material is adjacent the channel material. Charge storage material is adjacent the tunneling material. Charge blocking material is adjacent the charge storage material. High-k dielectric material is between the charge blocking material and the terminal areas of the conductive levels.

Some embodiments include a method of forming an integrated assembly. A vertical stack of alternating first and second levels is formed. The first levels comprise a first material, and the second levels comprise a second material. An opening is formed to extend through the stack. The opening has a peripheral sidewall. A liner is formed along the peripheral sidewall. The liner is a carbon-containing material. The liner has first areas along the first levels, and second areas along the second levels. A dielectric barrier material is formed adjacent the liner. A charge blocking material is formed adjacent the dielectric barrier material. A charge storage material is formed adjacent the charge blocking material. A tunneling material is formed adjacent the charge storage material. A channel material is formed adjacent the tunneling material. The second material is removed to leave voids between the first levels and to expose the second areas of the liner. The exposed second areas of the liner are oxidized to form oxidized sections of the liner. The oxidized sections of the liner are first sections of the liner. The first sections of the liner alternate vertically with second sections of the liner. The first sections of the liner are removed to expose areas of the dielectric barrier material. Conductive levels are formed within the voids. The conductive levels have front ends, with front surfaces of the front ends extending along the exposed areas of the dielectric barrier material and directly abutting said exposed areas.

Some embodiments include a method of forming an integrated assembly. A vertical stack of alternating first and second levels is formed. The first levels comprise a first material, and the second levels comprise a second material. An opening is formed to extend through the stack. The opening has a peripheral sidewall. A dielectric barrier material is formed adjacent the peripheral sidewall. A charge blocking material is formed adjacent the dielectric barrier material. A charge storage material is formed adjacent the charge blocking material. A tunneling material is formed adjacent the charge storage material. A channel material is formed adjacent the tunneling material. The second material is removed to leave first voids between the first levels. Conductive levels are formed within the first voids. The conductive levels have front ends with front surfaces. The front surfaces extend along the dielectric barrier material and directly abut the dielectric barrier material. The first material is removed to leave second voids. The second voids are lined with sacrificial material to narrow the second voids. The narrowed second voids are extended through the dielectric barrier material, the charge blocking material and the charge storage material. The sacrificial material is removed.

In compliance with the statute, the subject matter disclosed herein has been described in language more or less specific as to structural and methodical features. It is to be understood, however, that the claims are not limited to the specific features shown and described, since the means herein disclosed comprise example embodiments. The claims are thus to be afforded full scope as literally worded, and are to be appropriately interpreted in accordance with the doctrine of equivalents. |
A microelectronic device comprises a first substrate (110) having a first electrically conductive path (111) therein and a second substrate (120) above the first substrate and having a second electrically conductive path (121) therein, wherein the first electrically conductive path and the second electrically conductive path are electrically connected to each other and form a portion of a current loop (131) of an inductor (130). |
CLAIMS What is claimed is: 1. A microelectronic device comprising: a first substrate having a first electrically conductive path therein; and a second substrate above the first substrate, the second substrate having a second electrically conductive path therein, wherein: the first electrically conductive path and the second electrically conductive path are electrically connected to each other and form a portion of a current loop of an inductor. 2. The microelectronic device of claim 1 wherein: the inductor has an inductor core that is characterized by an absence of metal. 3. The microelectronic device of claim 1 further comprising: a die containing voltage regulation circuitry, wherein the inductor is connected to the voltage regulation circuitry. 4. The microelectronic device of claim 1 wherein: the second substrate comprises a substrate core; and a thickness of the substrate core is no greater than 400 micrometers. 5. The microelectronic device of claim 1 wherein: the first substrate has a first surface area and comprises a first set of interconnects having a first pitch; and the second substrate has a second surface area and comprises a second set of interconnects having a second pitch at a first surface thereof and a third set of interconnects having a third pitch at a second surface thereof, wherein: the second substrate is coupled to the first substrate using the second set of interconnects; the first pitch is larger than the second pitch; the second pitch is larger than the third pitch; and the first surface area is larger than the second surface area. 6. The microelectronic device of claim 5 wherein: the first set of interconnects form part of the first electrically conductive path; andthe second set of interconnects form part of the second electrically conductive path. 7. A microelectronic device comprising: a first substrate having a first electrically conductive path therein; a second substrate above the first substrate, the second substrate having a second electrically conductive path therein; and a die above the second substrate, wherein: the first electrically conductive path and the second electrically conductive path are electrically connected to each other and form a portion of a current loop of an inductor. 8. The microelectronic device of claim 7 wherein: the inductor has an inductor core that is characterized by an absence of metal. 9. The microelectronic device of claim 7 wherein: the second substrate comprises a substrate core; and a thickness of the substrate core is no greater than 400 micrometers. 10. The microelectronic device of claim 7 wherein: the first substrate has a first surface area and comprises a first set of interconnects having a first pitch; and the second substrate has a second surface area and comprises a second set of interconnects having a second pitch at a first surface thereof and a third set of interconnects having a third pitch at a second surface thereof, wherein: the second substrate is coupled to the first substrate using the second set of interconnects; the first pitch is larger than the second pitch; the second pitch is larger than the third pitch; and the first surface area is larger than the second surface area. 11. The microelectronic device of claim 10 wherein: the first set of interconnects form part of the first electrically conductive path; and the second set of interconnects form part of the second electrically conductive path. 12. 
A method of manufacturing a microelectronic device, the method comprising: providing a first substrate having a first electrically conductive path therein; providing a second substrate having a second electrically conductive path therein;and connecting the first substrate and the second substrate to each other such that the first electrically conductive path and the second electrically conductive path are electrically connected to each other and form a portion of a current loop of an inductor. 13. The method of claim 12 further comprising: connecting a die to the second substrate. 14. The method of claim 13 wherein: providing the first substrate comprises forming a first plurality of vias and a first plurality of traces therein; providing the second substrate comprises forming a second plurality of vias and a second plurality of traces therein; and physically connecting the first substrate and the second substrate to each other comprises arranging the first and second pluralities of vias and the first and second pluralities of traces such that the portion of the current loop comprises a first one of the first plurality of vias, a first one of the second plurality of vias, a first one of the second plurality of traces, a second one of the second plurality of vias, a second one of the first plurality of vias, and a first one of the first plurality of traces. 15. The method of claim 13 further comprising: voiding metal in a core of the inductor. 16. The method of claim 15 wherein: voiding metal in the core of the inductor comprises applying to at least one of the first substrate and the second substrate a mask that prevents metal from being formed in the core. |
MICROELECTRONIC DEVICE AND METHOD OF MANUFACTURING SAME

FIELD OF THE INVENTION

The disclosed embodiments of the invention relate generally to microelectronic device packaging, and relate more particularly to inductive loops in microelectronic device packaging.

BACKGROUND OF THE INVENTION

Integrated circuit dies and other microelectronic devices are typically enclosed within a package that, among other functions, enables electrical connections to be made between the die and a socket, a motherboard, or another next-level component. As die sizes shrink and interconnect densities increase, such electrical connections must be scaled so as to match both the smaller pitches typically found at the die and the larger pitches typically found at the next-level component. One approach to interconnect scaling within microelectronic packages is to use multiple substrates to handle the space transformation from die bump pitch, where a typical pitch value may be 150 micrometers (microns or μm), to system board level pitch, where a typical pitch value may be 1000 μm, i.e., 1.0 millimeter (mm). This multiple-substrate architecture requires one or more of the substrates to be thinner than a typical server substrate (having a 400 μm core compared to an 800 μm core, for example), in order to stay within maximum height requirements and to provide a solution for high speed input/output (I/O) signals.

BRIEF DESCRIPTION OF THE DRAWINGS

The disclosed embodiments will be better understood from a reading of the following detailed description, taken in conjunction with the accompanying figures in the drawings in which: FIG. 1 is a cross-sectional view of a microelectronic device according to an embodiment of the invention; FIG. 2 is a flowchart illustrating a method of manufacturing a microelectronic device according to an embodiment of the invention; and FIG. 3 is a conceptualized plan view of a current loop of the microelectronic device of FIG. 1 according to an embodiment of the invention. For simplicity and clarity of illustration, the drawing figures illustrate the general manner of construction, and descriptions and details of well-known features and techniques may be omitted to avoid unnecessarily obscuring the discussion of the described embodiments of the invention. Additionally, elements in the drawing figures are not necessarily drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help improve understanding of embodiments of the present invention. The same reference numerals in different figures denote the same elements, while similar reference numerals may, but do not necessarily, denote similar elements. The terms "first," "second," "third," "fourth," and the like in the description and in the claims, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein.
Similarly, if a method is described herein as comprising a series of steps, the order of such steps as presented herein is not necessarily the only order in which such steps may be performed, and certain of the stated steps may possibly be omitted and/or certain other steps not described herein may possibly be added to the method. Furthermore, the terms "comprise," "include," "have," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to those elements, but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. The terms "left," "right," "front," "back," "top," "bottom," "over," "under," and the like in the description and in the claims, if any, are used for descriptive purposes and not necessarily for describing permanent relative positions. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in other orientations than those illustrated or otherwise described herein. The term "coupled," as used herein, is defined as directly or indirectly connected in an electrical or non-electrical manner. Objects described herein as being "adjacent to" each other may be in physical contact with each other, in close proximity to each other, or in the same general region or area as each other, as appropriate for the context in which the phrase is used. Occurrences of the phrase "in one embodiment" herein do not necessarily all refer to the same embodiment.

DETAILED DESCRIPTION OF THE DRAWINGS

In one embodiment of the invention, a microelectronic device comprises a first substrate having a first electrically conductive path therein and a second substrate above the first substrate and having a second electrically conductive path therein, wherein the first electrically conductive path and the second electrically conductive path are electrically connected to each other and form a portion of a current loop of an inductor. It was mentioned above that a proposed multiple-substrate architecture requires one or more of the substrates to be thinner than a typical server substrate in order to stay within maximum height requirements and to provide a solution for high speed I/O signals. A key component for enabling a fully integrated voltage regulator (FIVR) on future generation products is an inductor contained within the microelectronic package. Currently, the best inductor structures use thick-core substrates with large plated through holes (PTHs), but high speed I/O demands require the use of thinner substrate cores with smaller PTHs. Cost pressures are also driving a trend toward thinner substrate cores. The need to use a thinner core, however, dictates that the current loop area in the inductor is reduced, leading to lower inductor performance. On-package inductance is critical for enabling FIVR and future power delivery architectures. Embodiments of the invention address the inductor performance concerns by forming inductor structures continuously through both the first and second substrates. This architecture effectively increases the area (e.g., the length) of the current loop, leading to higher inductor performance.
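As a rough, hedged illustration of that trend (not taken from this disclosure: the circular-loop model and all numbers below are assumptions), the textbook single-turn air-core approximation L ≈ μ0·R·(ln(8R/r) − 2) shows how inductance grows as the effective loop enclosed by the conductive paths is enlarged:

import math

MU0 = 4e-7 * math.pi  # permeability of free space, in henries per meter

def loop_inductance(loop_radius_m, wire_radius_m):
    # Single circular turn of round wire in air; valid when the wire
    # radius is much smaller than the loop radius.
    return MU0 * loop_radius_m * (math.log(8 * loop_radius_m / wire_radius_m) - 2)

# Hypothetical effective loop sizes: one confined to a single thin core,
# and one routed through two stacked substrates (roughly doubled loop).
print(loop_inductance(200e-6, 25e-6))  # ~5.4e-10 H, i.e. about 0.54 nH
print(loop_inductance(400e-6, 25e-6))  # ~1.4e-9 H, i.e. about 1.4 nH

The actual current loop 131 is rectangular rather than circular, so these figures show only the qualitative scaling: doubling the loop dimension in this example increases the inductance roughly 2.6-fold.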
In other words, embodiments of the invention enable an increased separation (z-height) between inductor coils and back side metal, which is a key parameter to increase the inductor performance in an air core inductor. Embodiments of the invention thus address and resolve an inherent conflict between power delivery requirements and signal integrity requirements in future substrates. Referring now to the drawings, FIG. 1 is a cross-sectional view of a microelectronic device 100 according to an embodiment of the invention. As illustrated in FIG. 1, microelectronic device 100 comprises a substrate 110 having an electrically conductive path 111 therein. Substrate 110 may comprise any suitable type of package substrate or other die carrier. In one embodiment, the substrate 110 comprises a multilayer substrate including a number of alternating layers of metallization and dielectric material. Each layer of metallization comprises a number of conductors (e.g., traces), and these conductors may comprise any suitable conductive material, such as copper. Further, each metal layer is separated from adjacent metal layers by the dielectric layers, and adjacent metal layers may be electrically interconnected by microvias or other conductive vias. The dielectric layers may comprise any suitable insulating material (e.g., polymers, including both thermoplastic and thermosetting resins or epoxies, ceramics, etc.), and the alternating layers of metal and dielectric material may be built-up over a core layer of a dielectric material (or perhaps a metallic core). As an example, and as illustrated in FIG. 1, electrically conductive path 111 can comprise one or more microvias that electrically connect adjacent internal layers of substrate 110. Such microvias can be arranged one on top of another in a straight line or they can be staggered such that they only partially overlap. Another possible microvia arrangement is one in which the microvias do not overlap at all but rather are connected by electrically conductive traces that run between them. As another example, electrically conductive path 111 can comprise a plated through hole or the like that extends throughout the entire extent of substrate 110. Microelectronic device 100 further comprises a substrate 120 that is located above substrate 110 and has an electrically conductive path 121 therein, and a die 160 located above substrate 120. Although electrically conductive path 121 is shown as passing into die 160, corresponding electrically conductive paths of other (non-illustrated) embodiments may instead pass underneath the die without extending into it. Microelectronic device 100 may further comprise die-side capacitors 170 and/or additional components 180, which, for example, could be resistors, capacitors, inductors, active devices, stiffeners, or the like. In one embodiment, substrate 120 has a substrate core having a thickness that is no greater than 400 μm. As an example, electrically conductive path 121 can comprise a plated through hole or the like that extends through a core 125 of substrate 120. Electrically conductive path 121 may then further comprise a metal trace or the like that passes through build-up or similar layers 126 that surround core 125. Alternatively, substrate 120 may be made up entirely of such build-up or similar layers and may not have a core, in which case substrate 120 may have a total thickness in a range of approximately 200-500 μm. 
Whatever their details, electrically conductive paths 121 and 111 are electrically connected to each other and form a portion of a current loop 131 of an inductor 130. A conceptualized depiction of current loop 131 is shown in plan view in FIG. 3, described below. To reiterate concepts that were touched upon earlier herein, or that are otherwise relevant to the current discussion, the precise regulation of power is an increasingly important function of high-density, high-performance microelectronic devices. Local voltage regulators, including FIVRs, are essential components of this effort; high-quality integrated passive devices, including inductors, are, in turn, important components of functioning voltage regulators. Accordingly, inductor 130 may be useful in managing power regulation for microelectronic device 100. Among other things, this means that inductor 130 may be a component of, or may be connected to, voltage regulation circuitry in die 160. The inductor structure of embodiments of the invention may experience increased performance by voiding metal (e.g., copper) in the region of substrate 110 that lies within the inductor core area during manufacture of the substrate. This process does not require any special procedures and thus does not increase costs over those associated with the normal manufacturing process of substrate 110. Accordingly, in one embodiment inductor 130 has a core 135 that is characterized by an absence of metal. In other words, in one embodiment, inductor 130 acts like an air core inductor. In one embodiment, microelectronic device 100 may be used as a way to achieve pitch translation between a die and an associated printed circuit board (PCB) or the like. To that end, the system level interface at the PCB (indicated by reference numeral 150 in FIG. 1) may be handled by substrate 110, while the die level interface may be handled by substrate 120. In a particular manifestation of this (or another) embodiment, substrate 110 has a first surface area and comprises a set of interconnects 117 having a first pitch, substrate 120 has a second surface area and comprises a set of interconnects 127 having a second pitch at a first surface thereof and a set of interconnects 128 having a third pitch at a second surface thereof. It should be noted that substrate 120 is coupled to substrate 110 using interconnects 127, the first pitch is larger than the second pitch, which in turn is larger than the third pitch, and the first surface area is larger than the second surface area. In this embodiment, interconnects 117 form part of electrically conductive path 111 and interconnects 127 and 128 form part of electrically conductive path 121. FIG. 2 is a flowchart illustrating a method 200 of manufacturing a microelectronic device according to an embodiment of the invention. As an example, method 200 may result in the formation of a microelectronic device that is similar to microelectronic device 100 that is first shown in FIG. 1. A step 210 of method 200 is to provide a first substrate having a first electrically conductive path therein. As an example, the first substrate and the first electrically conductive path can be similar to, respectively, substrate 110 and electrically conductive path 111 that are both shown in FIG. 1. In one embodiment, step 210 comprises forming a first plurality of vias and a first plurality of traces therein. As an example, these can include components that are similar to those shown in FIG. 1. 
A step 220 of method 200 is to provide a second substrate having a second electrically conductive path therein. As an example, the second substrate and the second electrically conductive path can be similar to, respectively, substrate 120 and electrically conductive path 121 that are both shown in FIG. 1. In one embodiment, step 220 comprises forming a second plurality of vias and a second plurality of traces therein. As an example, these can include components that are similar to those shown in FIG. 1. A step 230 of method 200 is to connect a die to the second substrate. As an example, the die can be similar to die 160 that is shown in FIG. 1. A step 240 of method 200 is to connect the first substrate and the second substrate to each other such that the first electrically conductive path and the second electrically conductive path are electrically connected to each other and form a portion of a current loop of an inductor. As an example, the inductor and the current loop can be similar to, respectively, inductor 130 and current loop 131 that are both shown in FIG. 1. In one embodiment, step 240 comprises arranging the first and second pluralities of vias and the first and second pluralities of traces such that the portion of the current loop comprises a first one of the first plurality of vias, a first one of the second plurality of vias, a first one of the second plurality of traces, a second one of the second plurality of vias, a second one of the first plurality of vias, and a first one of the first plurality of traces. In one embodiment, step 210, step 220, and/or another step of method 200 comprises voiding metal in a core of the inductor. As an example, this can comprise applying to at least one of the first substrate and the second substrate a mask that prevents metal from being formed in the inductor core. FIG. 3 is a plan view of current loop 131 of microelectronic device 100 according to an embodiment of the invention. It will be readily apparent from its appearance that FIG. 3 is a highly-conceptualized drawing, included herein more for its broad structural overview than for its illustration of details. As shown, FIG. 3 is related to FIG. 1 in that FIG. 1 is a cross-section taken along line 1-1 in FIG. 3. Visible in FIG. 3 is a top portion of electrically conductive path 121 as introduced in FIG. 1. Underneath that top portion, and thus not visible in FIG. 3, would be the balance of electrically conductive path 121 as well as electrically conductive path 111, along with other portions of current loop 131. The portion of current loop 131 represented in FIG. 3 by electrically conductive path 121, trace 322, and feature 323 corresponds roughly to the portion of current loop 131 that is visible in FIG. 1. Current loop 131 continues with additional electrically conductive features that correspond to those mentioned above and that collectively loop around a space that acts as a core of inductor 130. Although the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes may be made without departing from the spirit or scope of the invention. Accordingly, the disclosure of embodiments of the invention is intended to be illustrative of the scope of the invention and is not intended to be limiting. It is intended that the scope of the invention shall be limited only to the extent required by the appended claims. 
For example, to one of ordinary skill in the art, it will be readily apparent that the microelectronic device and the related structures and methods discussed herein may be implemented in a variety of embodiments, and that the foregoing discussion of certain of these embodiments does not necessarily represent a complete description of all possible embodiments. Additionally, benefits, other advantages, and solutions to problems have been described with regard to specific embodiments. The benefits, advantages, solutions to problems, and any element or elements that may cause any benefit, advantage, or solution to occur or become more pronounced, however, are not to be construed as critical, required, or essential features or elements of any or all of the claims. Moreover, embodiments and limitations disclosed herein are not dedicated to the public under the doctrine of dedication if the embodiments and/or limitations: (1) are not expressly claimed in the claims; and (2) are or are potentially equivalents of express elements and/or limitations in the claims under the doctrine of equivalents. |
Technologies for accelerated orchestration and attestation include multiple edge devices. An edge appliance device performs an attestation process with each of its components to generate component certificates. The edge appliance device generates an appliance certificate that is indicative of the component certificates and a current utilization of the edge appliance device and provides the appliance certificate to a relying party. The relying party may be an edge orchestrator device. The edge orchestrator device receives a workload scheduling request with a service level agreement requirement. The edge orchestrator device verifies the appliance certificate and determines whether the service level agreement requirement is satisfied based on the appliance certificate. If satisfied, the workload is scheduled to the edge appliance device. Attestation and generation of the appliance certificate by the edge appliance device may be performed by an accelerator of the edge appliance device. Other embodiments are described and claimed. |
1. An edge appliance device for device attestation, the edge appliance device comprising:
an attestation manager to perform an attestation process with a component of the edge appliance device to generate a component certificate; and
a platform verifier to (i) generate an appliance certificate, wherein the appliance certificate is indicative of the component certificate and a current utilization of the edge appliance device, and (ii) provide the appliance certificate to a relying party.
2. The edge appliance device of claim 1, wherein the edge appliance device comprises an accelerator, and wherein the accelerator comprises the attestation manager and the platform verifier.
3. The edge appliance device of claim 1, wherein:
the platform verifier is further to receive authenticated telemetry from the component, wherein the authenticated telemetry is indicative of the current utilization of the component; and
generating the appliance certificate comprises generating the appliance certificate based on the current utilization of the component.
4. The edge appliance device of claim 1, wherein the component comprises an accelerator, a compute platform, a memory component, a storage component, or a functional block of the edge appliance device.
5. The edge appliance device of claim 1, wherein the component comprises a disaggregated resource of the edge appliance device.
6. The edge appliance device of claim 1, wherein:
the attestation manager is further to (i) identify a plurality of components of the edge appliance device, wherein the plurality of components includes the component, and (ii) perform an attestation process with each of the plurality of components to generate a component certificate for each of the plurality of components; and
the appliance certificate is indicative of the component certificate of each of the plurality of components.
7. The edge appliance device of claim 1, wherein performing the attestation process comprises receiving a component certificate indicative of a trusted execution environment provided by the component.
8. The edge appliance device of claim 7, wherein the component certificate is indicative of a security attribute of the trusted execution environment.
9. The edge appliance device of claim 1, wherein performing the attestation process comprises securely receiving a component certificate indicative of a hardware configuration and a firmware configuration of the component.
10. The edge appliance device of claim 9, wherein the component certificate comprises a hash value indicative of the hardware configuration and the firmware configuration of the component.
11. The edge appliance device of claim 1, wherein providing the appliance certificate to the relying party comprises providing the appliance certificate to a remote orchestrator device.
12. The edge appliance device of claim 1, wherein providing the appliance certificate to the relying party comprises providing the appliance certificate to a platform active root of trust.
13. A method for device attestation, the method comprising:
performing, by an edge appliance device, an attestation process with a component of the edge appliance device to generate a component certificate;
generating, by the edge appliance device, an appliance certificate, wherein the appliance certificate is indicative of the component certificate and a current utilization of the edge appliance device; and
providing, by the edge appliance device, the appliance certificate to a relying party.
14. The method of claim 13, wherein:
performing the attestation process comprises performing the attestation process by an accelerator of the edge appliance device;
generating the appliance certificate comprises generating the appliance certificate by the accelerator; and
providing the appliance certificate comprises providing the appliance certificate by the accelerator.
15. The method of claim 13, further comprising:
receiving, by the edge appliance device, authenticated telemetry from the component, wherein the authenticated telemetry is indicative of the current utilization of the component;
wherein generating the appliance certificate comprises generating the appliance certificate based on the current utilization of the component.
16. The method of claim 13, further comprising:
identifying, by the edge appliance device, a plurality of components of the edge appliance device, wherein the plurality of components includes the component; and
performing, by the edge appliance device, an attestation process with each of the plurality of components to generate a component certificate for each of the plurality of components;
wherein the appliance certificate is indicative of the component certificate of each of the plurality of components.
17. The method of claim 13, wherein performing the attestation process comprises securely receiving a component certificate indicative of a hardware configuration and a firmware configuration of the component.
18. A virtualization system for device orchestration, the virtualization system comprising:
a workload orchestrator to receive a workload scheduling request, wherein the workload scheduling request is indicative of a service level agreement requirement associated with a workload; and
an aggregate attestation manager to receive an appliance certificate from an edge appliance device, wherein the appliance certificate is indicative of an aggregate component certificate and a current utilization of the edge appliance device, and wherein the aggregate component certificate is indicative of a configuration of each of a plurality of components of the edge appliance device;
wherein the workload orchestrator is further to (i) determine whether the edge appliance device satisfies the service level agreement requirement based on the appliance certificate, and (ii) schedule the workload to the edge appliance device in response to a determination that the edge appliance device satisfies the service level agreement requirement.
19. The virtualization system of claim 18, wherein:
the aggregate attestation manager is further to verify the appliance certificate in response to receipt of the appliance certificate;
wherein scheduling the workload further comprises scheduling the workload in response to verification of the appliance certificate.
20. The virtualization system of claim 19, wherein verifying the appliance certificate comprises comparing the appliance certificate to an expected certificate, wherein the expected certificate is indicative of an expected configuration of each of the plurality of components of the edge appliance device.
21. The virtualization system of claim 18, wherein the appliance certificate is indicative of a trusted execution environment provided by the edge appliance device.
22. The virtualization system of claim 21, wherein determining whether the edge appliance device satisfies the service level agreement requirement comprises evaluating a security attribute of the trusted execution environment.
23. A method for device orchestration, the method comprising:
receiving, by a computing device, a workload scheduling request, wherein the workload scheduling request is indicative of a service level agreement requirement associated with a workload;
receiving, by the computing device, an appliance certificate from an edge appliance device, wherein the appliance certificate is indicative of an aggregate component certificate and a current utilization of the edge appliance device, and wherein the aggregate component certificate is indicative of a configuration of each of a plurality of components of the edge appliance device;
determining, by the computing device, whether the edge appliance device satisfies the service level agreement requirement based on the appliance certificate; and
scheduling, by the computing device, the workload to the edge appliance device in response to determining that the edge appliance device satisfies the service level agreement requirement.
24. The method of claim 23, further comprising:
verifying, by the computing device, the appliance certificate in response to receiving the appliance certificate;
wherein scheduling the workload further comprises scheduling the workload in response to verification of the appliance certificate.
25. The method of claim 23, wherein the appliance certificate is indicative of a trusted execution environment provided by the edge appliance device. |
TECHNOLOGIES FOR ACCELERATED ORCHESTRATION AND ATTESTATION OF EDGE DEVICE TRUST CHAINS BACKGROUND Certain cloud computing architectures can provide function-as-a-service (FaaS) capabilities. A typical FaaS system allows a client to invoke a particular function on demand without maintaining a dedicated service process. A FaaS function may be performed by an appliance composed of multiple components, and the number of tenants or users invoking FaaS services may be effectively unbounded. BRIEF DESCRIPTION OF THE DRAWINGS The concepts described herein are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. Where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements. FIG. 1 is a simplified block diagram of at least one embodiment of a system for accelerated orchestration and attestation; FIG. 2 is a simplified block diagram of at least one embodiment of various environments of the system of FIG. 1; FIG. 3 is a simplified flow diagram of at least one embodiment of a method for aggregate attestation that may be executed by an edge appliance device of the system of FIGS. 1-2; FIG. 4 is a simplified flow diagram of at least one embodiment of a method for attestation and orchestration that may be executed by an edge orchestrator device of the system of FIGS. 1-2; and FIG. 5 is a simplified block diagram of at least one embodiment of an edge architecture that may include the system of FIGS. 1-2. DETAILED DESCRIPTION While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims. References in the specification to "one embodiment," "an embodiment," "an illustrative embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of "at least one of A, B, and C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of "at least one of A, B, or C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. 
The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors. Further, the disclosed embodiments may be initially encoded as a set of preliminary instructions (e.g., encoded on a machine-readable storage medium) that may require preliminary processing operations to prepare the instructions for execution on a destination device. The preliminary processing may include combining the instructions with data present on a device, translating the instructions to a different format, performing compression, decompression, encryption, and/or decryption, combining multiple files that include different sections of the instructions, integrating the instructions with other code present on a device (such as a library, an operating system, etc.), or similar operations. The preliminary processing may be performed by the source compute device (e.g., the device that is to send the instructions), the destination compute device (e.g., the device that is to execute the instructions), or an intermediary device. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device). In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, such feature may not be included or may be combined with other features. Referring now to FIG. 1, a system 100 for accelerated orchestration and attestation includes multiple edge devices 102 and multiple endpoint devices 104. In use, as described further below, one or more edge devices 102 may be composed into, or otherwise established as, an edge appliance device 102a that performs function-as-a-service (FaaS) requests or other services. The edge appliance device 102a uses acceleration logic to generate an appliance certificate that attests to the configuration and utilization of one or more components of the edge appliance device 102a. The edge appliance device 102a provides the appliance certificate to an orchestrator, such as an edge orchestrator device 102b. The edge orchestrator device 102b verifies the appliance certificate and compares it against service level agreement (SLA) requirements associated with a tenant workload. Accordingly, the system 100 allows the complete root of trust of the components of an edge appliance to be verified with low latency. Additionally, the system 100 allows the workload plan to be verified before the SLA is issued, thereby extending root-of-trust verification to workload scheduling. Each edge device 102 may be embodied as any type of device capable of performing the functions described herein. 
For example, an edge device 102 may be embodied as, without limitation, a computer, a server, a workstation, a multiprocessor system, a distributed computing device, a switch, a router, a network appliance, a virtualized system (e.g., one or more functions executed in one or more virtualized environments, such as virtual machine(s) or container(s), in which the underlying hardware resources appear as physical hardware to software executing in the virtualized environment(s) but are separated from the software by an abstraction layer), and/or a consumer electronic device. Additionally or alternatively, an edge device 102 may be embodied as one or more compute sleds, memory sleds, or other racks, sleds, compute chassis, or other components of a physically disaggregated computing device. As shown in FIG. 1, the illustrative edge device 102 includes a compute engine 120, an I/O subsystem 122, a memory 124, a data storage device 126, and a communication subsystem 128. Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component. For example, in some embodiments, the memory 124, or portions thereof, may be incorporated in the compute engine 120. The compute engine 120 may be embodied as any type of compute engine capable of performing the functions described herein. For example, the compute engine 120 may be embodied as a single- or multi-core processor(s), digital signal processor, microcontroller, field-programmable gate array (FPGA) or other configurable circuitry, application-specific integrated circuit (ASIC), or other processor or processing/controlling circuit, or a virtualized version thereof. Similarly, the memory 124 may be embodied as any type of volatile or non-volatile memory or data storage capable of performing the functions described herein. In operation, the memory 124 may store various data and software used during operation of the edge device 102, such as operating systems, applications, programs, libraries, and drivers. As shown, the memory 124 may be communicatively coupled to the compute engine 120 via the I/O subsystem 122, which may be embodied as circuitry and/or components to facilitate input/output operations between the compute engine 120, the memory 124, and other components of the edge device 102. For example, the I/O subsystem 122 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, sensor hubs, host controllers, firmware devices, communication links (i.e., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the memory 124 may be directly coupled to the compute engine 120, for example via an integrated memory controller hub. Additionally, in some embodiments, the I/O subsystem 122 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with the compute engine 120, the memory 124, the accelerator 130, and/or other components of the edge device 102, on a single integrated circuit chip. The data storage device 126 may be embodied as any type of device or devices configured for short-term or long-term storage of data, such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, non-volatile flash memory, or other data storage devices. 
The communication subsystem 128 may be embodied as any communication circuit, device, or collection thereof capable of enabling communications between the edge device 102 and other remote devices over a network 106. The communication subsystem 128 may be configured to use any one or more communication technologies (e.g., wired or wireless communications) and associated protocols (e.g., Ethernet, WiMAX, 3G, 4G LTE, 5G, etc.) to effect such communication. The accelerator 130 may be embodied as a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a coprocessor, or other digital logic device capable of performing accelerated functions (e.g., accelerated application functions, accelerated network functions, or other accelerated functions). Illustratively, the accelerator 130 is an FPGA, which may be embodied as an integrated circuit including programmable digital logic resources that may be configured after manufacture. The FPGA may include, for example, a configurable array of logic blocks in communication over a configurable data interchange. The accelerator 130 may be coupled to the compute engine 120 via, for example, a peripheral bus (e.g., a PCI Express bus) or an inter-processor interconnect (e.g., an in-die interconnect (IDI) or QuickPath Interconnect (QPI)), or via any other appropriate interconnect. In some embodiments, the accelerator 130 may be incorporated in, or otherwise coupled with, one or more other components of the edge device 102, such as a network interface controller (NIC) of the communication subsystem 128. Each endpoint device 104 may be embodied as any type of computation or computer device capable of performing the functions described herein, including, without limitation, a computer, a mobile computing device, a wearable computing device, a network appliance, a web appliance, a distributed computing system, an autonomous vehicle, an autonomous aircraft, an Internet of Things (IoT) sensor, an IoT gateway, an industrial automation device, a processor-based system, and/or a consumer electronic device. As such, each endpoint device 104 may include components and features similar to the edge device 102, such as a compute engine 120, I/O subsystem 122, memory 124, data storage 126, communication subsystem 128, and/or various peripheral devices. The individual components of each endpoint device 104 may be similar to the corresponding components of the edge device 102, the description of which is applicable to the corresponding components of the endpoint device 104 and is not repeated for clarity of the present description. As discussed in more detail below, the edge devices 102 and the endpoint devices 104 may be configured to transmit and receive data with each other and/or other devices of the system 100 over the network 106. The network 106 may be embodied as any number of various wired and/or wireless networks, or hybrids or combinations thereof. For example, the network 106 may be embodied as, or otherwise include, a mobile access network, an edge network infrastructure, a wired or wireless local area network (LAN), and/or a wired or wireless wide area network (WAN). As such, the network 106 may include any number of additional devices, such as additional base stations, access points, computers, routers, and switches, to facilitate communications among the devices of the system 100. 
In the illustrative embodiment, the network 106 is embodied as an edge network fabric. Referring now to FIG. 2, in an illustrative embodiment, each edge appliance device 102a establishes an environment 200 during operation. The illustrative environment 200 includes an accelerator 130 and one or more components 206. The accelerator 130 includes an attestation manager 202 and a platform verifier 204, and each component 206 includes an attester 208. The various components of the environment 200 may be embodied as hardware, firmware, software, or a combination thereof. As such, in some embodiments, one or more of the components of the environment 200 may be embodied as circuitry or a collection of electrical devices (e.g., attestation manager circuitry 202, platform verifier circuitry 204, and/or component circuitry 206). It should be appreciated that, in such embodiments, one or more of the attestation manager circuitry 202, the platform verifier circuitry 204, and/or the component circuitry 206 may form a portion of the compute engine 120, the I/O subsystem 122, the memory 124, the data storage device 126, the accelerator 130, and/or other components of the edge device 102. Additionally, in some embodiments, one or more of the illustrative components may form a portion of another component and/or one or more of the illustrative components may be independent of one another. The attestation manager 202 is configured to identify the components 206 included in the edge appliance device 102a. The attestation manager 202 is further configured to perform an attestation process with each component 206 of the edge appliance device 102a. The attestation process generates a component certificate for each component 206 that is indicative of the firmware 210 of the component 206 and/or the hardware or firmware configuration 212 of the component 206. The attestation manager 202 is further configured to receive authenticated telemetry 214 from each component 206, which is indicative of the current utilization of that component 206. The platform verifier 204 is configured to generate an appliance certificate that is indicative of the aggregate component certificates of the components 206 and the current utilization of the edge appliance device 102a. The platform verifier 204 is further configured to provide the appliance certificate to a relying party. For example, the relying party may be the remote edge orchestrator device 102b or a platform active root of trust (e.g., the accelerator 130). Illustratively, the platform active root of trust may be the platform verifier 204, which is implemented in the accelerator 130, such as an FPGA. Each component 206 may be embodied as a compute engine 120 or other compute platform (e.g., a processor, SoC, or other compute element and an associated motherboard or other circuit board), a memory device 124 (e.g., a DIMM or other memory element), a data storage device 126, an accelerator 130, a functional block, an IP block, or another component of the edge appliance device 102a. In some embodiments, the components 206 may include one or more disaggregated components, such as a memory sled, storage sled, compute sled, accelerator sled, or other disaggregated component of a rack-scale design. Each component 206 includes an attester 208, which is configured to perform an attestation process that includes generating the component certificate of the component 206. 
The component certificate is indicative of the component's firmware 210, hardware or firmware configuration 212, and/or authenticated telemetry 214. For example, the component certificate may include, or otherwise be based on, a hash value indicative of the firmware 210 and/or the configuration 212. The configuration 212 may be indicative of hardware features, firmware features, or other configuration of the component 206. For example, the component certificate may be indicative of a trusted execution environment provided by the component 206, including one or more security attributes of the trusted execution environment. Still referring to FIG. 2, in an illustrative embodiment, the edge orchestrator device 102b establishes an environment 220 during operation. The illustrative environment 220 includes a workload orchestrator 222 and an aggregate attestation manager 224. The various components of the environment 220 may be embodied as hardware, firmware, software, or a combination thereof. As such, in some embodiments, one or more of the components of the environment 220 may be embodied as circuitry or a collection of electrical devices (e.g., workload orchestrator circuitry 222 and/or aggregate attestation manager circuitry 224). It should be appreciated that, in such embodiments, one or more of the workload orchestrator circuitry 222 and/or the aggregate attestation manager circuitry 224 may form a portion of the compute engine 120, the I/O subsystem 122, the accelerator 130, and/or other components of the edge device 102. Additionally, in some embodiments, one or more of the illustrative components may form a portion of another component and/or one or more of the illustrative components may be independent of one another. The aggregate attestation manager 224 is configured to receive the appliance certificate from the edge appliance device 102a. As described above, the appliance certificate is indicative of an aggregate component certificate and the current utilization of the edge appliance device 102a, and the aggregate component certificate is indicative of the configuration of each component 206 of the edge appliance device 102a, including the firmware 210 and/or the configuration 212. The aggregate attestation manager 224 is further configured to verify the appliance certificate, which may include comparing the appliance certificate to an expected certificate indicative of an expected configuration of each component 206 of the edge appliance device 102a. The workload orchestrator 222 is configured to receive a workload scheduling request that is indicative of a service level agreement (SLA) requirement associated with the workload. The workload orchestrator 222 is further configured to determine whether the edge appliance device 102a satisfies the SLA requirement based on the appliance certificate and, if so, to schedule the workload to the edge appliance device 102a. For example, the workload orchestrator 222 may evaluate security attributes of a trusted execution environment provided by a component 206 to determine whether the edge appliance device 102a satisfies the SLA requirement. Additionally or alternatively, in some embodiments, the aggregate attestation manager 224 may be configured to perform functions similar to the attestation manager 202 and/or the platform verifier 204, thereby managing attestation among edge appliance devices 102 (similar to the attestation among the components 206 at the appliance level). 
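The disclosure does not specify a wire format for the authenticated telemetry 214; the following minimal Python sketch is one hypothetical illustration of how a component might sign a utilization snapshot for the platform verifier 204. The field names are invented, and HMAC-SHA-256 with a shared key stands in for the asymmetric device or alias key a real attester 208 would use.

```python
import hashlib
import hmac
import json
import time

# Hypothetical symmetric stand-in for the component's alias key; a real
# attester 208 would sign with a key derived from its device identity.
ALIAS_KEY = b"\x13" * 32

def authenticated_telemetry(utilization: dict) -> dict:
    """Produce authenticated telemetry 214: a signed utilization snapshot."""
    report = {"utilization": utilization, "timestamp": time.time()}
    payload = json.dumps(report, sort_keys=True).encode()
    report["sig"] = hmac.new(ALIAS_KEY, payload, hashlib.sha256).hexdigest()
    return report

def verify_telemetry(report: dict) -> bool:
    """Platform-verifier side: confirm the report originated from the component."""
    body = {k: v for k, v in report.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(ALIAS_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, report["sig"])

# Example: a component reports 35% CPU and 52% memory utilization.
report = authenticated_telemetry({"cpu": 0.35, "mem": 0.52})
assert verify_telemetry(report)
```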
In those embodiments, the attestation manager 202 of an edge appliance device 102a may behave toward the aggregate attestation manager 224 as though the attestation manager 202 were an attester 208. In those embodiments, each attestation manager 202 may forward the attestations originating from its attesters 208, or each attestation manager 202 may aggregate those attestations into a simplified attestation assertion representing the platform verification result (e.g., from the platform verifier 204). Referring now to FIG. 3, in use, the edge appliance device 102a may execute a method 300 for aggregate attestation. It should be appreciated that, in some embodiments, the operations of the method 300 may be performed by one or more components of the environment 200 of the edge appliance device 102a as shown in FIG. 2, such as the accelerator 130. The method 300 begins in block 302, in which the edge appliance device 102a identifies or otherwise selects the components 206 included in the edge appliance device 102a. In some embodiments, in block 304, the edge appliance device 102a may select one or more accelerators 130. In some embodiments, in block 306, the edge appliance device 102a may select one or more compute platforms. Each compute platform may include a compute engine 120 and an associated motherboard or other supporting circuitry. In some embodiments, in block 308, the edge appliance device 102a may select one or more memory or storage components. For example, the edge appliance device 102a may select one or more memory DIMMs, non-volatile flash memory chips, SSDs, 3D XPoint memory DIMMs, or other volatile or non-volatile memory or storage components. In some embodiments, in block 310, the edge appliance device 102a may select one or more functional blocks, IP blocks, or other subcomponents of an SoC or other computer chip. In some embodiments, in block 312, the edge appliance device 102a may select one or more remote and/or disaggregated components 206. For example, the edge appliance device 102a may identify one or more remote edge devices 102, such as memory sleds, storage sleds, compute sleds, accelerator sleds, or other disaggregated components of a rack or rack-scale design. In block 314, the edge appliance device 102a performs an attestation process with a component 206 of the edge appliance device 102a. During the attestation process, in block 316, the edge appliance device 102a receives a component certificate from the component 206. The component certificate includes verifiable assertions of the identity and configuration of the component 206. Accordingly, the component certificate may be indicative of the firmware 210 version of the component 206, particular hardware or firmware features of the component 206, or other attributes of the component 206. For example, the component certificate may be indicative of particular security attributes of a trusted execution environment provided by the component 206, such as the cryptographic or isolation protections available for code or data processed by the edge appliance device 102a, such as keys or other sensitive data. The edge appliance device 102a and the component 206 may execute any appropriate attestation protocol. For example, in some embodiments, the component 206 may perform Implicit Identity Based Device Attestation as promulgated by the Trusted Computing Group (TCG). 
In those embodiments, the attester 208 of the component 206 may be embodied as, or otherwise include, a trusted Device Identifier Composition Engine (DICE). The DICE measures the first mutable code of the component 206 (e.g., part or all of the firmware 210) and securely combines that measurement with a unique device secret (e.g., using a hash or other one-way function) to generate a compound device identifier (CDI). The component 206 derives an asymmetric key pair based on the CDI that is used as the device identity of the component 206 and generates a certificate based on that key pair. In some embodiments, the component 206 may derive an alias key based on the firmware 210 (e.g., based on updatable firmware 210) and generate an alias certificate using the device identifier. Thus, the device certificate and the alias certificate are indicative of the identity of the component 206 (e.g., the unique device secret) and the configuration of the component 206 (e.g., part or all of the contents of the firmware 210). Accordingly, the device certificate and/or the alias certificate may be used as the component certificate of the component 206. In block 318, the edge appliance device 102a may verify the component certificate of the component 206, using any appropriate verification technique. For example, for implicit identity based device attestation, the edge appliance device 102a may verify the certificate using the public key of the device identifier of the component 206. The public key may be provided by the component 206 and, in some embodiments, may be authenticated by a trusted party, such as a manufacturer, vendor, or other entity associated with the component 206. If the certificate is not verified successfully, the edge appliance device 102a may generate an error or otherwise indicate that verification was unsuccessful. If verification is successful, the method 300 proceeds to block 320, in which the edge appliance device 102a may receive authenticated telemetry 214 from the component 206. The authenticated telemetry 214 is indicative of the utilization of the component 206. For example, the telemetry 214 may be indicative of processor utilization, memory or storage utilization, or other utilization statistics of the compute engine 120. The telemetry 214 may be authenticated by the component 206, for example by being signed with the device identifier, alias identifier, or other key of the component 206, and the edge appliance device 102a may verify the authenticated telemetry 214 (e.g., using the device certificate). In block 322, the edge appliance device 102a determines whether additional components 206 remain to be attested. If so, the method 300 loops back to block 314 to continue performing attestation with the remaining components 206. If no additional components remain, the method 300 advances to block 324, in which the edge appliance device 102a generates an appliance certificate. The appliance certificate is based on an aggregate certificate of all of the component certificates as well as the current utilization of the edge appliance device 102a. For example, the edge appliance device 102a may concatenate the certificates of all of the components 206 with the current utilization and then generate the appliance certificate over that concatenation. In block 326, the edge appliance device 102a provides the appliance certificate to a relying party. 
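The DICE-style derivation and the aggregation of block 324 can be summarized in a few lines of Python. This is a loose editorial sketch, not the TCG specification: the one-way combining function, the key derivation, and the certificate encoding are all simplifications, and the `cryptography` package's Ed25519 support merely stands in for whatever asymmetric scheme a real component would use.

```python
import hashlib
import hmac
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

UDS = b"\x42" * 32  # unique device secret, provisioned at manufacture (illustrative)
FIRMWARE = b"first mutable code of the component"  # stand-in for firmware 210

# Measure the first mutable code and combine it with the UDS one-way to form
# the CDI; a changed firmware image yields a different CDI and thus a
# different device-identity key, which is the essential DICE property.
measurement = hashlib.sha256(FIRMWARE).digest()
cdi = hmac.new(UDS, measurement, hashlib.sha256).digest()

# Derive the asymmetric device-identity key pair from the CDI.
device_id_key = Ed25519PrivateKey.from_private_bytes(cdi)

# A "certificate" here is just a signed statement over the measurement;
# a real implementation would emit an X.509 certificate chain.
component_cert = {
    "measurement": measurement.hex(),
    "signature": device_id_key.sign(measurement).hex(),
}

# Block 318: the verifier checks the signature with the component's public key
# (raises InvalidSignature on failure).
device_id_key.public_key().verify(
    bytes.fromhex(component_cert["signature"]), measurement
)

# Block 324: concatenate all component certificates with the current
# utilization and generate the appliance certificate over the concatenation.
component_certs = [component_cert]
utilization = b"cpu=35%,mem=52%"
digest = hashlib.sha256(
    b"".join(c["signature"].encode() for c in component_certs) + utilization
).hexdigest()
appliance_cert = {"digest": digest, "utilization": utilization.decode()}
```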
In some embodiments, in block 328, the edge appliance device 102a may provide the appliance certificate to the remote edge orchestrator device 102b. In some embodiments, in block 330, the edge appliance device 102a may provide the appliance certificate to another edge device 102. For example, the appliance certificate may be provided to a platform active root of trust (e.g., an accelerator) of the other edge device 102. Thus, the system 100 may perform nested, aggregate attestation. After providing the appliance certificate, the method 300 loops back to block 302 to continue performing attestation. Referring now to FIG. 4, in use, the edge orchestrator device 102b may execute a method 400 for attestation and orchestration. It should be appreciated that, in some embodiments, the operations of the method 400 may be performed by one or more components of the environment 220 of the edge orchestrator device 102b as shown in FIG. 2. The method 400 begins in block 402, in which the edge orchestrator device 102b receives a workload scheduling request from a tenant. The request may identify one or more virtual machines, function-as-a-service (FaaS) instances, or other workloads to be executed by an edge appliance device 102a. In block 404, the edge orchestrator device 102b receives a service level agreement (SLA) requirement for the workload request. The SLA requirement may identify one or more processing capacity, latency, storage, or other requirements associated with the workload. In block 406, the edge orchestrator device 102b identifies an edge appliance device 102a to execute the workload. For example, the edge orchestrator device 102b may select the edge appliance device 102a from a pool of available edge devices 102. In some embodiments, the edge orchestrator device 102b may compose the edge appliance device 102a from multiple disaggregated components. For example, the edge orchestrator device 102b may compose the edge appliance device 102a from multiple compute sleds, accelerator sleds, memory sleds, storage sleds, and/or other edge devices 102. In block 408, the edge orchestrator device 102b receives the appliance certificate from the edge appliance device 102a. As described above, the appliance certificate is indicative of the aggregate component certificates of the components 206 of the edge appliance device 102a and the current utilization of the edge appliance device 102a. In block 410, the edge orchestrator device 102b verifies the appliance certificate. The edge orchestrator device 102b may verify the component certificate of each component 206 of the edge appliance device 102a as well as the utilization information of the appliance certificate. In block 412, the edge orchestrator device 102b may compare each component certificate to a corresponding expected certificate. For example, the expected certificate may be associated with an expected identity or an expected firmware 210 version of each particular component 206 of the edge appliance device 102a. In block 414, the edge orchestrator device 102b checks whether the appliance certificate was verified. If not, the method 400 loops back to block 402 to process additional workload requests, and the edge orchestrator device 102b may indicate an error or otherwise indicate that the appliance certificate was not verified. Referring back to block 414, if the appliance certificate was verified successfully, the method 400 advances to block 416, in which the edge orchestrator device 102b compares the SLA requirement against the attested components 206 and utilization of the edge appliance device 102a, as illustrated in the sketch following this paragraph. 
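Blocks 408 through 422 of the method 400 can be sketched in Python as follows. The certificate and SLA structures are hypothetical illustrations (the disclosure prescribes no schema), and verification is reduced to comparing attested measurements against an expected-certificate table, as in block 412.

```python
def verify_appliance_certificate(appliance_cert: dict, expected: dict) -> bool:
    """Blocks 410-414: compare each attested component measurement to its
    expected value; any mismatch fails verification."""
    for name, measurement in appliance_cert["components"].items():
        if expected.get(name) != measurement:
            return False
    return True

def satisfies_sla(appliance_cert: dict, sla: dict) -> bool:
    """Blocks 416-420: check attested features and utilization headroom
    against the SLA requirement."""
    features = set(appliance_cert.get("features", []))
    if not set(sla.get("required_features", [])).issubset(features):
        return False
    # Use the attested current utilization to judge performance headroom.
    return appliance_cert["cpu_utilization"] <= 1.0 - sla["cpu_headroom"]

# Illustrative inputs: digests, feature names, and thresholds are invented.
expected = {"accelerator-0": "9f2c", "compute-0": "71aa"}
cert = {
    "components": {"accelerator-0": "9f2c", "compute-0": "71aa"},
    "features": ["sgx_enclave"],
    "cpu_utilization": 0.35,
}
sla = {"required_features": ["sgx_enclave"], "cpu_headroom": 0.40}

ok = verify_appliance_certificate(cert, expected) and satisfies_sla(cert, sla)
# Block 422: schedule the workload only when ok is True; otherwise report an
# error or suggest a reduced SLA (blocks 414 and 420).
```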
For example, the edge orchestrator device 102b may determine whether the components 206 of the edge appliance device 102a provide the features or particular components requested by the SLA requirement. As another example, the edge orchestrator device 102b may determine, based on the current utilization of the edge appliance device 102a, whether the edge appliance device 102a can satisfy the performance or latency criteria requested by the SLA requirement. In some embodiments, in block 418, the edge orchestrator device 102b may evaluate one or more security features of a trusted execution environment provided by the edge appliance device 102a. For example, the compute engine 120 may provide a trusted execution environment such as an Intel SGX secure enclave. The appliance certificate may be indicative of the cryptographic or other isolation protections provided by the trusted execution environment for code, keys, or other sensitive data. In block 420, the edge orchestrator device 102b determines whether the edge appliance device 102a satisfies the SLA requirement. If not, the method 400 loops back to block 402 to process additional workload requests. In some embodiments, the edge orchestrator device 102b may indicate an error or otherwise indicate that the SLA requirement cannot be satisfied. Additionally or alternatively, in some embodiments, the edge orchestrator device 102b may suggest a reduced SLA based on the capabilities indicated in the appliance certificate. Referring back to block 420, if the SLA requirement can be satisfied, the method 400 branches to block 422, in which the edge orchestrator device 102b schedules the workload to the edge appliance device 102a. The edge appliance device 102a executes the workload using the components 206. For example, the workload may be executed in a trusted execution environment having the protections indicated by the appliance certificate as described above. After scheduling the workload, the method 400 loops back to block 402 to continue processing workload scheduling requests. Referring now to FIG. 5, diagram 500 illustrates an edge architecture that may include the system 100. As shown, the edge architecture includes multiple tiers 502, 504, 506, 508, and each tier includes multiple nodes that may communicate with other nodes of the same tier and/or of other tiers via an edge fabric. As shown, the endpoint devices 104 may be included in a things/endpoint tier 502. The things/endpoint tier 502 may include large numbers of endpoint devices 104 that are heterogeneous, may be mobile, and may be widely distributed geographically. An access/edge tier 504 may include access network components such as wireless towers, access points, base stations, intermediate nodes, gateways, fog nodes, central offices, and other access network or edge components. Components of the access/edge tier 504 may be distributed at a building, small cell, neighborhood, or cell scale, and thus may be positioned relatively close physically to the components of the things/endpoint tier 502. A core network tier 506 may include core network routers, network gateways, servers, and other more centralized compute devices, which may be distributed regionally or nationally. A cloud/Internet tier 508 may include Internet backbone routers, cloud service providers, datacenters, and other cloud resources, and the components of the cloud/Internet tier 508 may be distributed globally. 
As shown, the edge devices 102 (e.g., the edge appliance device 102a and/or the edge orchestrator device 102b) may be included in any of the access/edge tier 504, the core network tier 506, and/or the cloud/Internet tier 508. As shown, the edge architecture is organized according to a logical gradient 510 from global, cloud-based components toward local endpoint devices. Components that are closer to the edge of the network (i.e., closer to the things/endpoint tier 502) may be smaller but more numerous, with fewer processing resources and lower power consumption, as compared to components that are closer to the core of the network (i.e., closer to the cloud/Internet tier 508). However, network communications among components closer to the edge of the network may be faster and/or have lower latency than communications that traverse tiers closer to the network core. The same logical gradient 510 may apply within a particular tier. For example, the access/edge tier 504 may include numerous, widely spread base stations, street cabinets, and other access nodes as well as less numerous but more sophisticated central offices or other aggregation nodes. Accordingly, by including critical caching functionality in the access/edge tier 504 or in other components close to the edge of the network (e.g., logically close to the endpoint devices 104), the system 100 may improve latency and performance as compared with traditional cloud-computing-based FaaS architectures. In addition to the mobile edge computing implementations described above, it should be appreciated that the foregoing systems and methods may be deployed in any environment (e.g., smart factories, smart cities, smart buildings, and the like) in which devices are arranged and interoperate in a manner similar to that described with reference to FIG. 1, although the names of the individual devices may differ from one implementation to the next. For example, in a smart factory, the above systems and methods may improve the accuracy, efficiency, and/or safety with which one or more manufacturing operations are performed, particularly in instances in which the operations are to be performed in real time or near real time (e.g., in which low latency is of high importance). In a smart city, the above systems and methods may improve the accuracy, efficiency, and/or safety in the operation of traffic control systems, environmental monitoring systems, and/or other automated or semi-automated systems. Likewise, in a smart building, the above disclosure may be applied to improve the operation of any system that relies on sensors to collect and act upon the collected information (e.g., threat detection and evacuation management systems, video surveillance systems, elevator control systems, and the like). It should be appreciated that, in some embodiments, the methods 300 and/or 400 may be embodied as various instructions stored on a computer-readable medium, which may be executed by the compute engine 120, the I/O subsystem 122, the accelerator 130, and/or other components of an edge device 102 to cause the edge device 102 to perform the respective method 300 and/or 400. 
The computer-readable medium can be embodied as any type of medium that can be read by the edge device 102, including but not limited to the memory 124 of the edge device 102, the data storage device 126, firmware devices, other memory or data storage devices, portable media readable by peripherals of the edge device 102, and/or other media.

Examples

Illustrative examples of the technology disclosed herein are provided below. Embodiments of these techniques may include any one or more of the examples described below, and any combination thereof.

Example 1 includes an edge device for device certification. The edge device includes: a certification manager to perform a certification process on a component of the edge device to generate a component certificate; and a platform verifier to: (i) generate a device certificate, wherein the device certificate indicates the component certificate and the current utilization of the edge device, and (ii) provide the device certificate to a relying party.

Example 2 includes the subject matter of Example 1, and wherein the edge device includes an accelerator, and wherein the accelerator includes the certification manager and the platform verifier.

Example 3 includes the subject matter of any of Examples 1 and 2, and wherein the platform verifier is further to receive authenticated telemetry from the component, wherein the authenticated telemetry indicates the current utilization of the component; and wherein generating the device certificate includes generating the device certificate based on the current utilization of the component.

Example 4 includes the subject matter of any of Examples 1-3, and wherein the component includes an accelerator, a computing platform, a memory component, a storage component, or a functional block of the edge device.

Example 5 includes the subject matter of any of Examples 1-4, and wherein the component includes a disaggregated resource of the edge device.

Example 6 includes the subject matter of any of Examples 1-5, and wherein the certification manager is further to: (i) identify multiple components of the edge device, wherein the multiple components include the component, and (ii) perform a certification process on each of the multiple components to generate a component certificate for each of the multiple components; and wherein the device certificate indicates the component certificate of each of the multiple components.

Example 7 includes the subject matter of any of Examples 1-6, and wherein performing the certification process includes receiving a component certificate indicating a trusted execution environment provided by the component.

Example 8 includes the subject matter of any of Examples 1-7, and wherein the component certificate indicates security attributes of the trusted execution environment.

Example 9 includes the subject matter of any of Examples 1-8, and wherein performing the certification process includes securely receiving a component certificate indicating the hardware configuration and firmware configuration of the component.

Example 10 includes the subject matter of any of Examples 1-9, and wherein the component certificate includes a hash value indicating the hardware configuration and firmware configuration of the component.

Example 11 includes the subject matter of any of Examples 1-10, and wherein providing the device certificate to the relying party includes providing the device certificate to a remote orchestrator device.
Example 12 includes the subject matter of any of Examples 1-11, and wherein providing the device certificate to the relying party includes providing the device certificate to an active root of trust of the platform.

Example 13 includes a computing device for device orchestration. The computing device includes a workload orchestrator to receive a workload scheduling request, wherein the workload scheduling request indicates a service level agreement requirement associated with a workload; and an aggregate certification manager to receive a device certificate from an edge device, wherein the device certificate indicates an aggregate component certificate and the current utilization of the edge device, and wherein the aggregate component certificate indicates the configuration of each of multiple components of the edge device; wherein the workload orchestrator is further to: (i) determine whether the edge device meets the service level agreement requirement based on the device certificate, and (ii) schedule the workload to the edge device in response to determining that the edge device meets the service level agreement requirement.

Example 14 includes the subject matter of Example 13, and wherein the aggregate certification manager is further to verify the device certificate in response to receiving the device certificate; and wherein scheduling the workload further includes scheduling the workload in response to verification of the device certificate.

Example 15 includes the subject matter of any of Examples 13 and 14, and wherein verifying the device certificate includes comparing the device certificate with an expected certificate, wherein the expected certificate indicates an expected configuration of each of the multiple components of the edge device.

Example 16 includes the subject matter of any of Examples 13-15, and wherein the device certificate indicates a trusted execution environment provided by the edge device.

Example 17 includes the subject matter of any of Examples 13-16, and wherein determining whether the edge device meets the service level agreement requirement includes evaluating security attributes of the trusted execution environment.

Example 18 includes a method for device certification.
The method includes: performing, by an edge device, a certification process on a component of the edge device to generate a component certificate; generating, by the edge device, a device certificate, wherein the device certificate indicates the component certificate and the current utilization of the edge device; and providing, by the edge device, the device certificate to a relying party.

Example 19 includes the subject matter of Example 18, and wherein performing the certification process includes performing the certification process by an accelerator of the edge device; generating the device certificate includes generating the device certificate by the accelerator; and providing the device certificate includes providing the device certificate by the accelerator.

Example 20 includes the subject matter of any of Examples 18 and 19, and further includes receiving, by the edge device, authenticated telemetry from the component, wherein the authenticated telemetry indicates the current utilization of the component; and wherein generating the device certificate includes generating the device certificate based on the current utilization of the component.

Example 21 includes the subject matter of any of Examples 18-20, and wherein the component includes an accelerator, a computing platform, a memory component, a storage component, or a functional block of the edge device.

Example 22 includes the subject matter of any of Examples 18-21, and wherein the component includes a disaggregated resource of the edge device.

Example 23 includes the subject matter of any of Examples 18-22, and further includes identifying, by the edge device, multiple components of the edge device, wherein the multiple components include the component; and performing, by the edge device, a certification process on each of the multiple components to generate a component certificate for each of the multiple components; wherein the device certificate indicates the component certificate of each of the multiple components.

Example 24 includes the subject matter of any of Examples 18-23, and wherein performing the certification process includes receiving a component certificate indicating a trusted execution environment provided by the component.

Example 25 includes the subject matter of any of Examples 18-24, and wherein the component certificate indicates security attributes of the trusted execution environment.

Example 26 includes the subject matter of any of Examples 18-25, and wherein performing the certification process includes securely receiving a component certificate indicating a hardware configuration and a firmware configuration of the component.

Example 27 includes the subject matter of any of Examples 18-26, and wherein the component certificate includes a hash value indicating the hardware configuration and firmware configuration of the component.

Example 28 includes the subject matter of any of Examples 18-27, and wherein providing the device certificate to the relying party includes providing the device certificate to a remote orchestrator device.

Example 29 includes the subject matter of any of Examples 18-28, and wherein providing the device certificate to the relying party includes providing the device certificate to an active root of trust of the platform.
Example 30 includes a method for device orchestration. The method includes: receiving, by a computing device, a workload scheduling request, wherein the workload scheduling request indicates a service level agreement requirement associated with a workload; receiving, by the computing device, a device certificate from an edge device, wherein the device certificate indicates an aggregate component certificate and the current utilization of the edge device, and wherein the aggregate component certificate indicates the configuration of each of multiple components of the edge device; determining, by the computing device, whether the edge device meets the service level agreement requirement based on the device certificate; and scheduling, by the computing device, the workload to the edge device in response to determining that the edge device meets the service level agreement requirement.

Example 31 includes the subject matter of Example 30, and further includes verifying, by the computing device, the device certificate in response to receiving the device certificate; wherein scheduling the workload further includes scheduling the workload in response to verification of the device certificate.

Example 32 includes the subject matter of any of Examples 30 and 31, and wherein verifying the device certificate includes comparing the device certificate with an expected certificate, wherein the expected certificate indicates an expected configuration of each of the multiple components of the edge device.

Example 33 includes the subject matter of any of Examples 30-32, and wherein the device certificate indicates a trusted execution environment provided by the edge device.

Example 34 includes the subject matter of any of Examples 30-33, and wherein determining whether the edge device meets the service level agreement requirement includes evaluating security attributes of the trusted execution environment.

Example 35 includes a computing device, including a processor; and a memory having stored therein a plurality of instructions that, when executed by the processor, cause the computing device to perform the method of any of Examples 18-34.

Example 36 includes one or more non-transitory computer-readable storage media including a plurality of instructions stored thereon that, in response to being executed, cause a computing device to perform the method of any of Examples 18-34.

Example 37 includes a computing device including means for performing the method of any of Examples 18-34.
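As a rough illustration of Examples 1, 6, 10, and 13, the sketch below aggregates per-component configuration hashes and the current utilization into a device certificate. The JSON layout and the use of SHA-256 are assumptions made for illustration; the examples require only that the device certificate indicate each component certificate and the current utilization.

```python
# Illustrative aggregation of component certificates into a device certificate.
# Field names and hashing scheme are assumptions, not prescribed by the examples.
import hashlib
import json

def component_certificate(name: str, hw_config: str, fw_config: str) -> dict:
    """Attest one component: hash its hardware and firmware configuration (cf. Example 10)."""
    digest = hashlib.sha256(f"{hw_config}|{fw_config}".encode()).hexdigest()
    return {"component": name, "config_hash": digest}

def device_certificate(component_certs: list[dict], utilization: float) -> str:
    """Aggregate component certificates plus current utilization (cf. Examples 1 and 3)."""
    return json.dumps({"component_certs": component_certs,
                       "utilization": utilization}, sort_keys=True)

certs = [component_certificate("accelerator", "fpga-rev2", "bitstream-1.3"),
         component_certificate("compute", "xeon", "microcode-0x5003")]
print(device_certificate(certs, utilization=0.4))
```

A relying party such as the orchestrator of Example 13 could then compare each config_hash against an expected certificate (Example 15) before scheduling the workload.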
An organic polymer memory cell is provided having an organic polymer layer and an electrode layer formed over a first conductive (e.g., copper) layer (e.g., bitline). The memory cells are connected to a second conductive layer (e.g., forming a wordline), and more particularly the top of the electrode layer of the memory cells to the second conductive layer. Optionally, a conductivity facilitating layer is formed over the conductive layer. Dielectric material separates the memory cells. The memory cells are self-aligned with the bitlines formed in the first conductive layer and the wordlines formed in the second conductive layer.
1. An organic memory device, comprising:
a first conductive layer serving as a bit line;
an organic polymer layer covering at least a part of the first conductive layer;
a conductivity promoting layer covering the first conductive layer and located under the organic polymer layer, capable of donating and accepting electric charges so as to promote the conductivity of the organic polymer layer;
an electrode layer covering the organic polymer layer;
a second conductive layer covering the electrode layer and serving as a word line; and
a dielectric material surrounding at least the organic polymer layer and the electrode layer;
wherein the conductivity promoting layer includes at least one of copper sulfide, copper oxide, manganese oxide, titanium dioxide, indium oxide, silver sulfide, silver copper sulfide composite, gold sulfide, cerium sulfate, ammonium persulfate, iron oxide, lithium complexes, and palladium hydride.

2. The organic memory device of claim 1, wherein the organic polymer layer has an impedance that is selectively programmable to one of at least two states.

3. The organic memory device of claim 1, wherein the organic polymer layer is a conjugated organic material.

4. The organic memory device of claim 1, wherein the first conductive layer includes at least one of copper, aluminum, chromium, germanium, gold, magnesium, manganese, indium, iron, nickel, palladium, platinum, silver, titanium, zinc, alloys thereof, indium tin oxide, polysilicon, doped amorphous silicon, metal silicide, Invar, brass, stainless steel, and magnesium-silver alloy.

5. The organic memory device of claim 1, wherein the dielectric material comprises at least one of SiO, SiO2, Si3N4, SiN, SiOxNy, SiOxFy, polysilicon, amorphous silicon, TEOS, PSG, and BPSG.

6. A method for forming a memory cell, comprising:
forming, on a first conductive layer, a conductivity promoting layer capable of donating and accepting electric charges to promote the conductivity of an organic polymer layer;
applying the organic polymer layer on the conductivity promoting layer, the first conductive layer serving as a bit line;
applying an electrode layer on the organic polymer layer;
applying a second conductive layer on the electrode layer;
etching the second conductive layer to form a word line, and further etching the organic polymer layer and the electrode layer; and
after applying and etching the second conductive layer, applying a dielectric material to isolate at least the second conductive layer, the organic polymer layer, and the electrode layer, wherein the etching of the second conductive layer facilitates self-alignment of the organic memory cell with the bit line and the word line;
wherein the conductivity promoting layer includes at least one of copper sulfide, copper oxide, manganese oxide, titanium dioxide, indium oxide, silver sulfide, silver copper sulfide composite, gold sulfide, cerium sulfate, ammonium persulfate, iron oxide, lithium complexes, and palladium hydride.

7. The method of claim 6, wherein the organic polymer layer is selected from one or more of the group consisting of: polyacetylene, polyphenylacetylene, polydiphenylacetylene, polyaniline, poly(p-phenylene vinylene), polythiophene, polyporphyrin, porphyrinic macrocycles, thiol-derivatized polyporphyrin, polymetallocene, polyxylylene, polyvinylene, polypyrrole, and polystyrene.

8. The method of claim 6, wherein the electrode layer includes at least one of amorphous carbon, tantalum, TaN, titanium, and TiN.
Organic storage device and method for forming storage unit

Technical field

The present invention relates generally to organic memory devices and, in particular, to the formation of self-aligned memory elements and word lines.

Background technique

The volume, use, and complexity of computers and electronic devices continue to grow. Computers have become more powerful, and new and improved electronic devices are continually being developed (e.g., digital audio players, video players). In addition, the growth and utilization of digital media (e.g., digital audio, video, images, and the like) have further contributed to the development of these devices. Such growth and development have greatly increased the amount of data that computers and electronic devices need to store and maintain.

Generally, data is stored and maintained in one or more types of storage devices. Storage devices include long-term storage media, such as hard disk drives, optical disk drives and related media, digital video disk (DVD) drives, and the like. Long-term storage media can usually store large amounts of data at a lower price, but are slower than other types of storage devices. Storage devices also include memory devices, which usually (but not always) store data for shorter periods of time. Short-term storage media are substantially faster than long-term storage media. Such short-term memory devices include, for example, dynamic random access memory (DRAM), static random access memory (SRAM), double data rate (DDR) memory, fast page mode dynamic random access memory (FPMDRAM), extended data output dynamic random access memory (EDODRAM), synchronous dynamic random access memory (SDRAM), video random access memory (VRAM), flash memory, read only memory (ROM), and the like.

Memory devices can be subdivided into volatile and non-volatile types. Volatile memory devices usually lose their information when power is interrupted and usually require periodic refresh cycles to maintain their information. Volatile memory devices include, for example, random access memory (RAM), DRAM, SRAM, and the like. Non-volatile memory devices retain their information regardless of whether power is maintained. Non-volatile memory devices include (but are not limited to) ROM, programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash EEPROM, and the like. Volatile memory devices generally provide faster operation at lower cost than non-volatile memory devices.

A memory device usually has a memory cell array. The data in each memory cell can be accessed or "read", "written", and "erased". The data of a memory cell is maintained in an "off" or "on" state, which may also be referred to as "0" or "1". A typical memory device accesses a specific number of memory cells at a time (for example, eight memory cells at a time). For volatile memory devices, memory cells must be periodically "refreshed" to maintain their state. Such memory devices are usually made of semiconductor devices that can perform these various functions and can be switched between, and maintained in, two states. A common semiconductor device used in memory devices is the metal oxide semiconductor field effect transistor (MOSFET).

The use of laptop computers and electronic devices has significantly increased the demand for storage devices.
Digital cameras, digital audio players, personal digital assistants, and the like generally seek to use large-capacity memory devices (e.g., flash memory, smart media cards, compact flash cards, and the like). The demand for increased information storage capacity must be matched by storage devices with increased storage capacity (e.g., increased storage capacity per die or chip). For example, a postage-stamp-sized silicon chip can contain tens of millions of transistors, each transistor as small as a few hundred nanometers. However, silicon-based devices are approaching their fundamental physical size limits. Inorganic solid-state devices usually have a complex structure, which leads to higher cost and reduced data storage density. In order to maintain the information stored in volatile semiconductor memories made of inorganic semiconductor materials, current must be supplied continuously, resulting in heating and high power consumption. Non-volatile semiconductor devices have a reduced data rate, relatively high power consumption, and a large degree of complexity.

In addition, as the size of inorganic solid-state devices decreases and the degree of integration increases, sensitivity to alignment tolerances increases, making manufacture more difficult. Forming features at a small minimum size does not mean that the minimum size can be used to make a working circuit. The alignment tolerance must be much smaller than the small minimum size, for example, a quarter of the minimum size.

Shrinking inorganic solid-state devices also raises the problem of dopant diffusion lengths. As dimensions are reduced, the dopant diffusion lengths in silicon cause difficulties in process design. Many accommodations are therefore made to reduce dopant mobility and to reduce time at high temperatures. However, it is unclear whether such accommodations can be continued indefinitely.

When a voltage is applied across a semiconductor junction (in the reverse bias direction), a depletion region is created around the junction. The width of the depletion region depends on the doping level of the semiconductor. If the depletion region extends until it touches another depletion region, punch-through may occur, in which the current can no longer be controlled.

Higher doping levels help reduce the spacing required to avoid punch-through. However, a large voltage change per unit distance implies a strong electric field, and this creates further difficulties. When an electron moves through such a large gradient, it may be accelerated to an energy level significantly higher than the minimum conduction band energy. Such electrons are known as hot electrons and may have enough energy to pass through an insulator, causing irreversible degradation of the semiconductor device.

Scaling and integration also make the isolation of devices on a monolithic semiconductor substrate more challenging. In particular, it becomes more difficult in some cases to isolate the lateral sides of devices from each other. Another difficulty is controlling leakage current. Yet another difficulty is the diffusion of carriers within the substrate; free carriers can diffuse over tens of microns and neutralize stored charge. Therefore, the ability to further reduce the size and increase the density of inorganic memory devices is limited.
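For a rough sense of the field strengths behind the hot-electron concern discussed above, recall that the electric field is simply the voltage change per unit distance. With purely illustrative values (not taken from this disclosure) of 1 volt dropped across 100 nanometers:

$$E = \frac{V}{d} = \frac{1\,\mathrm{V}}{100\,\mathrm{nm}} = \frac{1\,\mathrm{V}}{10^{-7}\,\mathrm{m}} = 10^{7}\,\mathrm{V/m}$$

Fields of this order accelerate carriers strongly over very short distances, which is why shrinking device spacing aggravates the hot-electron problem.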
In addition, it is particularly difficult for inorganic non-volatile memory devices to achieve both smaller size and higher performance at the same time, especially while maintaining low manufacturing costs.

Summary of the invention

The following presents a simplified summary in order to provide a basic understanding of some aspects of the present invention. This summary is not an extensive overview of the invention. It is intended neither to identify key or critical elements of the invention nor to delineate the scope of the invention. Its purpose is to present some concepts in a simplified form as a prelude to the more detailed description that follows.

According to an aspect of the present invention, an organic polymer memory cell includes an organic polymer layer and an electrode layer formed on a first conductive (e.g., copper) layer (e.g., a bit line). The memory cell is connected to a second conductive layer (for example, forming a word line); more specifically, the top of the electrode layer of the memory cell is connected to the second conductive layer. A conductivity promoting layer may optionally be formed on the first conductive layer. The memory cells are separated by a dielectric material. The memory cells are self-aligned with the bit lines formed in the first conductive layer and the word lines formed in the second conductive layer.

The organic polymer layer and the conductivity promoting layer may be collectively referred to as a selectively conductive medium. The conductive properties of this medium (e.g., conductive, non-conductive, semi-conductive) can be changed in a controlled manner by applying various voltages across the medium (e.g., via the electrode layer and the first conductive layer).

The organic polymer layer may include a conjugated organic material, for example, small organic molecules and conjugated polymers. The polymer backbone of the conjugated organic polymer may extend lengthwise between the electrode layer and the first conductive layer (e.g., generally substantially perpendicular to the stack). The conjugated organic molecule may be linear or branched, such that the backbone remains conjugated. Such conjugated molecules are characterized by having overlapping π orbitals and being able to assume two or more resonance structures. The conjugated nature of the conjugated organic material gives the selectively conductive medium its controllable conductive properties. Such conjugated organic materials have the ability to donate and accept charges (holes and/or electrons). Generally speaking, conjugated organic molecules have at least two relatively stable oxidation-reduction states. These two relatively stable states allow the conjugated organic polymer to donate and accept charges and to interact electrically with the conductivity promoting compound.

The conductivity promoting layer likewise has the ability to donate and accept charges (holes and/or electrons) and contributes to the controllable conductive properties of the selectively conductive medium. Generally speaking, the conductivity promoting layer has at least two relatively stable oxidation-reduction states. These two relatively stable states allow the conductivity promoting layer to donate and accept electric charges and to interact electrically with the organic polymer layer.
The particular conductivity promoting layer can be selected so that its two relatively stable states match the two relatively stable states of the conjugated organic molecules of the organic polymer layer.

The conductivity promoting layer serves to promote charge transfer among the electrode layer, the first conductive layer, and the second conductive layer (e.g., the word line). In addition, the conductivity promoting layer facilitates the injection of charge carriers (for example, electrons or holes) into the organic polymer layer and increases the concentration of charge carriers in the polymer layer, thereby causing a change in the conductivity of the organic polymer layer. In addition, the conductivity promoting layer can also store opposite charges to balance the total charge of the memory cell.

When the organic polymer layer is formed, the conductivity promoting layer can in some instances act as a catalyst. In this connection, the backbone of the conjugated organic molecule may initially form adjacent to the conductivity promoting layer and grow or assemble outward, substantially perpendicular to the surface of the conductivity promoting layer. As a result, the backbones of the conjugated organic molecules may be self-aligned in a direction across the stack.

The memory cell may have two states, a conductive (low impedance or "on") state or a non-conductive (high impedance or "off") state. The memory cell may also have/maintain more than two states, unlike traditional memory devices that are limited to only two states (e.g., off or on). The memory cell can use varying degrees of conductivity to define additional states. For example, the memory cell may have a very highly conductive state (very low impedance), a highly conductive state (low impedance), a conductive state (medium impedance), and a non-conductive state (high impedance), so that multiple bits of data can be stored in a single memory cell, for example, 2 or more bits or 4 or more bits of data (for example, 4 states can provide 2 bits of information, 8 states can provide 3 bits of information, and so on).

During operation of a typical device, if the polymer layer is an n-type conductor, electrons flow from the electrode layer through the selectively conductive medium to the first conductive layer (bit line) according to the voltage applied to the electrode by the word line. Alternatively, if the organic polymer layer is a p-type conductor, holes flow from the electrode layer to the first conductive layer (bit line); or, if the medium has both n-type and p-type conductors with appropriate energy bands, both electrons and holes flow in the polymer layer. Therefore, current can flow from the electrode layer to the first conductive layer via the selectively conductive medium.

The above and related objects are achieved by certain illustrative aspects of the invention described in conjunction with the following description and drawings. These aspects are merely illustrative of a few of the various ways in which the principles of the invention may be employed, and the present invention is intended to cover all such aspects and their equivalents.
Other objects, advantages, and novel features of the present invention will become apparent from the following detailed description when considered in conjunction with the accompanying drawings.

BRIEF DESCRIPTION

FIG. 1 is a partial cross-sectional view illustrating a wafer having a memory cell formed thereon according to an aspect of the present invention;
FIG. 2 is an array of memory cells composed of cells formed according to an aspect of the present invention;
FIG. 3 is a partial wafer cross-sectional view illustrating a conductive layer having a conductivity promoting layer according to an aspect of the present invention;
FIG. 4 is a cross-sectional view illustrating the portion of the wafer in FIG. 3 having an organic polymer layer formed on the conductivity promoting layer;
FIG. 5 is a cross-sectional view illustrating the portion of the wafer in FIG. 4 having an electrode layer formed on the organic polymer layer;
FIG. 6 is a cross-sectional view illustrating the portion of the wafer in FIG. 5 having a patterned photoresist layer formed on the electrode layer;
FIG. 7 is a cross-sectional view illustrating the portion of the wafer in FIG. 6 having apertures formed in the patterned photoresist layer;
FIG. 8 is a cross-sectional view illustrating the portion of the wafer in FIG. 7 having the apertures etched into the electrode layer;
FIG. 9 is a cross-sectional view illustrating the portion of the wafer in FIG. 8 having the apertures etched into the organic polymer layer;
FIG. 10 is a cross-sectional view illustrating the portion of the wafer in FIG. 9 having the apertures etched into the conductivity promoting layer;
FIG. 11 is a cross-sectional view illustrating the portion of the wafer in FIG. 10 having the apertures etched into the first conductive layer;
FIG. 12 is another partial cross-sectional view of the wafer of FIG. 11;
FIG. 13 is a cross-sectional view illustrating the portion of the wafer of FIG. 12 with the remaining photoresist layer removed;
FIG. 14 is a cross-sectional view illustrating the portion of the wafer of FIG. 13 with dielectric material coated on the stack and filled into the apertures;
FIG. 15 is a cross-sectional view illustrating the portion of the wafer of FIG. 14 having a second conductive layer formed on the dielectric material and the electrode layer;
FIG. 16 is another partial cross-sectional view of the wafer of FIG. 15 with a patterned photoresist layer formed on the second conductive layer;
FIG. 17 is a cross-sectional view illustrating the portion of the wafer of FIG. 16 having apertures formed in the second conductive layer in which word lines are formed;
FIG. 18 is a cross-sectional view illustrating the portion of the wafer of FIG. 17 having the apertures formed in the electrode layer;
FIG. 19 is a cross-sectional view illustrating the portion of the wafer of FIG. 18 having the apertures formed in the organic polymer layer;
FIG. 20 is a cross-sectional view illustrating the portion of the wafer of FIG. 19 with the remaining photoresist layer removed and dielectric material coated on the stack and filled into the apertures;
FIG. 21 is a flowchart of a method of forming a memory cell according to an aspect of the present invention;
FIG. 22 is a flowchart further illustrating the method of FIG. 21;
FIG. 23 is a flowchart further illustrating the method of FIGS. 21 and 22;
FIG. 24 is a diagram illustrating the effect of an intrinsic electric field on the interface between the conductivity promoting layer and the polymer layer according to one or more aspects of the present invention;
FIG. 25 is a graph illustrating the charge carrier distribution of an exemplary memory cell according to one or more aspects of the present invention;
FIG. 26 is another graph illustrating the charge carrier distribution of an exemplary memory cell according to one or more aspects of the present invention;
FIG. 27 is another graph illustrating the charge carrier distribution of an exemplary memory cell according to one or more aspects of the present invention;
FIG. 28 is another graph illustrating the charge carrier distribution of an exemplary memory cell according to one or more aspects of the present invention;
FIG. 29 is a graph illustrating the charge carrier concentration at an interface of an exemplary memory cell according to one or more aspects of the present invention;
FIG. 30 is another graph illustrating the charge carrier concentration at an interface of an exemplary memory cell according to one or more aspects of the present invention.

Detailed description

The present invention will now be described with reference to the drawings, in which like elements are referred to by like reference numerals throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be evident to those skilled in the art, however, that one or more aspects of the present invention may be practiced with a lesser degree of these specific details. In other instances, known structures and devices may be shown in block diagram form in order to facilitate describing one or more aspects of the present invention.

FIG. 1 is a cross-sectional view of a portion of a wafer 100 on which one or more organic polymer memory structures or cells 156 are formed according to one or more aspects of the present invention. The organic polymer memory cell 156 includes an organic polymer layer 116 and an electrode layer 120 formed on a first conductive (e.g., copper) layer 108 (e.g., a bit line). The memory cell 156 is connected to a second conductive layer 136 (in which, for example, a word line 148 is formed); more specifically, the top of the electrode layer 120 of the memory cell 156 is connected to the second conductive layer 136. In the illustrated example, a conductivity promoting layer 112 is formed on the first conductive layer 108. The memory cells 156 are separated by a dielectric material 152.

The organic polymer layer 116 and the conductivity promoting layer 112 may be collectively referred to as a selectively conductive medium. The conductive properties of this medium (e.g., conductive, non-conductive, semi-conductive) can be changed in a controlled manner by applying various voltages across the medium (e.g., via the electrode layer 120 and the first conductive layer 108 (e.g., bit line)).

The organic polymer layer 116 may be comprised of a conjugated organic material, for example, small organic molecules and conjugated polymers. The polymer backbone of the conjugated organic polymer may extend lengthwise between the electrode layer 120 and the first conductive layer 108 (e.g., generally substantially perpendicular to the stack). The conjugated organic molecule may be linear or branched, such that the backbone maintains its conjugated nature.
Such conjugated molecules are characterized by having overlapping π orbitals and being able to assume two or more resonance structures. The conjugated nature of the conjugated organic material gives the selectively conductive medium its controllable conductive properties. Such conjugated organic materials have the ability to donate and accept charges (holes and/or electrons). Generally speaking, conjugated organic molecules have at least two relatively stable oxidation-reduction states. These two relatively stable states allow the conjugated organic polymer to donate and accept charges and to interact electrically with the conductivity promoting compound.

The conductivity promoting layer 112 likewise has the ability to donate and accept charges (e.g., holes and/or electrons) and contributes to the controllable conductive properties of the selectively conductive medium. Generally speaking, the conductivity promoting layer has at least two relatively stable oxidation-reduction states. These two relatively stable states allow the conductivity promoting layer 112 to donate and accept electric charges and to interact electrically with the organic polymer layer 116. The particular conductivity promoting layer 112 can be selected so that its two relatively stable states match the two relatively stable states of the conjugated organic molecules of the organic polymer layer 116.

The conductivity promoting layer 112 facilitates charge transfer among the electrode layer 120, the first conductive layer 108, and the second conductive layer 136 (e.g., word line 148). In addition, the conductivity promoting layer 112 facilitates the injection of charge carriers (e.g., electrons or holes) into the organic polymer layer 116 and increases the concentration of charge carriers in the polymer layer, thereby causing a change in the conductivity of the organic polymer layer 116. In addition, the conductivity promoting layer 112 can also store opposite charges to balance the total charge of the memory cell 156.

When the organic polymer layer 116 is formed, the conductivity promoting layer 112 may in some instances act as a catalyst. In this connection, the backbone of the conjugated organic molecule may initially form adjacent to the conductivity promoting layer 112 and grow or assemble outward, substantially perpendicular to the surface of the conductivity promoting layer. As a result, the backbones of the conjugated organic molecules may be self-aligned in a direction across the stack.

The memory cell 156 may have two states, a conductive (low impedance or "on") state or a non-conductive (high impedance or "off") state. The memory cell 156 may also have/maintain more than two states, unlike conventional memory devices that are limited to only two states (for example, off or on). The memory cell 156 can use varying degrees of conductivity to define additional states.
For example, the memory cell 156 may have a very highly conductive state (very low impedance), a highly conductive state (low impedance), a conductive state (medium impedance), and a non-conductive state (high impedance), so that multiple bits of information can be stored in a single memory cell, for example, 2 or more bits or 4 or more bits of information (for example, 4 states can provide 2 bits of information, 8 states can provide 3 bits of information, and so on).

During operation of a typical device, if the organic polymer layer 116 is an n-type conductor, electrons flow from the electrode layer 120 through the selectively conductive medium to the first conductive layer 108 according to the voltage applied to the electrode by the word line 148. Alternatively, if the organic polymer layer 116 is a p-type conductor, holes flow from the electrode layer 120 to the first conductive layer 108; or, if the medium has both n-type and p-type conductors with appropriate energy bands, both electrons and holes flow in the organic polymer layer 116. Therefore, current can flow from the electrode layer 120 to the first conductive layer 108 via the selectively conductive medium.

Switching the memory cell 156 to a particular state is referred to as programming or writing. Programming is accomplished by applying a particular voltage (e.g., 9 volts, 2 volts, 1 volt, and so on) across the selectively conductive medium. The particular voltage, also referred to as a threshold voltage, varies according to the respective desired state and is generally substantially higher than the voltages applied during normal operation. Thus, there is typically a different threshold voltage for each desired state (for example, "off", "on", and so on). The threshold voltage value depends on many factors, including the materials constituting the memory cell 156 and the thicknesses of the various layers.

Generally speaking, the presence of an external stimulus, such as an applied electric field that exceeds a threshold value ("on" state), permits an applied voltage to write, read, or erase information in the memory cell 156; whereas the absence of such an external stimulus exceeding the threshold value ("off" state) prevents an applied voltage from writing or erasing information in the memory cell 156.

To read information from the memory cell 156, a voltage or electric field (for example, 2 volts, 1 volt, 0.5 volts) is applied. An impedance measurement is then performed, which determines which operating state the one or more memory cells are in (e.g., high impedance, very low impedance, low impedance, medium impedance, and the like). As mentioned above, the impedance corresponds to, for example, "on" (e.g., 1) or "off" (e.g., 0) for a two-state device, or to "00", "01", "10", or "11" for a four-state device. It should be understood that other numbers of states can likewise be mapped to other bit patterns. To erase information written to the memory cell 156, a negative voltage exceeding a threshold value, or a voltage of polarity opposite to that of the write signal, may be applied.

According to an aspect of the present invention, the memory cell 156 may be self-aligned with the bit line 132 formed in the first conductive layer 108 and the word line 148 formed in the second conductive layer 136.
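To illustrate how the read operation described above can map a measured impedance to stored bits, the following is a minimal Python sketch. The threshold values and the particular four-state encoding are invented for illustration; the disclosure specifies only that distinct impedance states correspond to distinct bit patterns.

```python
# Illustrative mapping from a measured impedance to stored bits for a
# hypothetical four-state cell; thresholds are assumptions, not from the patent.
import math

# (upper impedance bound in ohms, bit pattern), ordered from lowest impedance
STATES = [(1e3, "00"),       # very low impedance ("very highly conductive")
          (1e5, "01"),       # low impedance
          (1e7, "10"),       # medium impedance
          (math.inf, "11")]  # high impedance ("off")

def read_cell(measured_ohms: float) -> str:
    """Map a measured impedance to the stored bit pattern."""
    for upper_bound, bits in STATES:
        if measured_ohms <= upper_bound:
            return bits
    raise ValueError("unreachable")

# In general, 2**n distinguishable states store n bits:
# 4 states -> 2 bits, 8 states -> 3 bits, matching the text above.
print(read_cell(5e4))  # prints "01"
```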
The array is usually formed on a silicon-based wafer, and includes a plurality of columns 202 called bit lines and a plurality of rows 204 called word lines. The intersection of the bit line and the word line constitutes the address of a specific memory cell. Data can be stored in the memory cell (eg, 0 or 1). For example, the state (for example, 0 or 1) of the memory cell displayed at 210 is a function of the third row and eighth column of the array 200. For example, in a dynamic random access memory (DRAM), the memory cell includes a transistor-capacitor pair. In order to write into the memory cell, the charge is transferred to the appropriate column (for example, via CAS206) to activate the individual transistors in the column, and the state of each capacitor should be transferred to the appropriate row (for example, via RAS208) ). When reading the state of the cell, a sense amplifier (sense amplifier) measures the charge level of the capacitor. If it is higher than 50%, it can be regarded as 1 when reading; otherwise it is regarded as 0. It should be understood that although the array 200 shown in FIG. 2 contains 64 memory cells (eg, 8 rows x 8 columns), the present invention can be applied to any number of memory cells and is not limited to any particular configuration, configuration, and / or any Number of storage units.3 to 20 are partial cross-sectional views illustrating one or more memory cells formed thereon. These figures illustrate the formation of one or more memory cells on a wafer according to one or more aspects of the present invention. Those skilled in the art will understand and realize that one or more memory cells according to aspects of the present invention may be manufactured using various methods that deviate from the processes described herein. The method of deviation is still within the scope of the present invention.Part of the wafer 100 illustrated in FIG. 3 has a substrate 104, a first conductive layer 108, and a conductivity promoting layer 112 formed on the first conductive layer 108. The first conductive layer 108 may serve as a bit line, and may include, for example, copper and any other suitable conductive materials such as aluminum, chromium, germanium, gold, magnesium, manganese, indium, iron, nickel, palladium, platinum, silver, titanium , Zinc, its alloys, indium tin oxide, polycrystalline silicon, doped amorphous silicon, metal silicide, and the like. Examples of alloys that can be used as conductive materials includeInvar,brass, stainless steel, magnesium-silver alloy, and various other alloys. The thickness of the first conductive layer 108 varies depending on the implementation or planned use of the manufactured memory device. 
However, some exemplary thickness ranges include about 0.01 microns or more and about 10 microns or less; about 0.05 microns or more and about 5 microns or less; and/or about 0.1 microns or more and about 1 micron or less.

The conductivity promoting layer 112 may include, for example, any one or more of copper sulfide (Cu2-xSy, CuS), copper oxide (CuO, Cu2O), manganese oxide (MnO2), titanium dioxide (TiO2), indium oxide (In3O4), silver sulfide (Ag2-xS2, AgS), silver copper sulfide composite (AgyCu2-xS2), gold sulfide (Au2S, AuS), cerium sulfate (Ce(SO4)2), ammonium persulfate ((NH4)2S2O8), iron oxide (Fe3O4), lithium complexes (LixTiS2, LixTiSe2, LixNbSe3, LixNb3Se3), and palladium hydride (HxPd) (where x and y are selected to produce the desired properties), and the like, and generally has the ability to donate and accept charges (holes and/or electrons). The conductivity promoting layer 112 may be formed using any suitable technique, including, for example, growth, deposition, spin coating, and/or sputtering techniques. The conductivity promoting layer 112 may be applied at any suitable thickness. It should be understood, however, that the first conductive layer 108 is generally thicker than the conductivity promoting layer 112. In one aspect, the thickness of the first conductive layer 108 is from about 50 to about 250 times the thickness of the conductivity promoting layer 112. In another aspect, the thickness of the first conductive layer 108 is from about 100 to about 500 times the thickness of the conductivity promoting layer 112. However, it should be understood that other suitable ratios may be employed in accordance with aspects of the present invention.

FIG. 4 illustrates the portion of the wafer 100 having an organic polymer layer 116 formed on the conductivity promoting layer 112 and the first conductive layer 108. The organic polymer layer 116 may be applied on the underlying layers using any suitable technique, for example, by spin coating, in which a quantity of polymer material is placed at the center of the wafer and the wafer is then spun rapidly to distribute the material evenly over the wafer surface. It should be understood that the organic polymer layer 116 may include, for example, any one or more of polyacetylene (cis or trans); polyphenylacetylene (cis or trans); polydiphenylacetylene; polyaniline; poly(p-phenylene vinylene); polythiophene; polyporphyrins; porphyrinic macrocycles; thiol-derivatized polyporphyrins; polymetallocenes, for example, polyferrocenes; polyphthalocyanines; polyvinylenes; polypyrroles; polystiroles; polydiphenylacetylene (DPA); silicon with about 1.5% copper (in the I and II valence states) and about 28% oxygen; and the like. In addition, the properties of the polymer can be modified by doping with suitable dopants (e.g., salts). The appropriate thickness of the organic polymer layer 116 depends on the actual implementation and/or use of the memory device being manufactured; an example of a suitable thickness range is between about 300 angstroms and 5,000 angstroms.

FIG. 5 illustrates the portion of the wafer 100 having an electrode layer 120 formed on the organic polymer layer 116, the optional conductivity promoting layer 112, the first conductive layer 108, and the substrate 104. The electrode layer 120 may include, for example, any one or more of amorphous carbon, tantalum, tantalum nitride (TaN), titanium, and titanium nitride (TiN), and may be formed by any suitable manufacturing technique.
One technique for forming the electrode layer 120 is spin coating, which includes depositing a mixture of the material forming the electrode layer 120 and then rapidly spinning the wafer 100 to distribute the material evenly over the wafer 100. Alternatively, or in addition, the electrode layer 120 may be formed using sputtering, growth, and/or deposition techniques, including, for example, physical vapor deposition (PVD), chemical vapor deposition (CVD), low pressure chemical vapor deposition (LPCVD), plasma enhanced chemical vapor deposition (PECVD), high density chemical vapor deposition (HDCVD), rapid thermal chemical vapor deposition (RTCVD), metal organic chemical vapor deposition (MOCVD), and pulsed laser deposition (PLD). It should be understood that the electrode layer 120 may have any appropriate thickness depending on the actual implementation and/or use of the memory device being manufactured. A suitable thickness of the electrode layer 120 includes, for example, a range between 100 angstroms and 1,500 angstroms. It should be further understood that the organic polymer layer 116 is generally thicker than the electrode layer 120. In one aspect, the thickness of the organic polymer layer 116 is about 10 to 500 times the thickness of the electrode layer 120. In another aspect, the thickness of the organic polymer layer 116 is about 25 to 250 times the thickness of the electrode layer 120. However, it should be understood that other suitable ratios may be employed in accordance with aspects of the present invention.

FIG. 6 illustrates the portion of the wafer 100 having a photoresist layer 124 formed on the electrode layer 120, the organic polymer layer 116, the conductivity promoting layer 112, the first conductive layer 108, and the substrate 104. The photoresist layer 124 may be formed on the underlying layers at an appropriate thickness using, for example, growth, deposition, spin coating, and/or sputtering, and serves as a mask for etching the underlying layers through patterns or openings developed in the photoresist layer 124. By way of example, a deep ultraviolet, chemically amplified photoresist material based on poly-p-hydroxystyrene partially substituted with tert-butoxycarbonyloxy groups can be used. Photoresist materials are commercially available from a number of sources, including Shipley, Kodak, Hoechst-Celanese, Brewer, and IBM. The photoresist layer 124 may be a positive or negative photoresist material, and, depending on the type of photoresist material utilized, the exposed or unexposed portions of the photoresist material may subsequently be removed or developed.

In the example described here, the photoresist layer 124 is exposed to form one or more patterns 128 within the photoresist layer 124. The patterned photoresist layer 124 may be formed using, for example, electromagnetic radiation having a relatively short wavelength (e.g., shorter than 200 nm). It should be appreciated that the photoresist layer 124 can be selectively exposed to the radiation; that is, selected portions of the photoresist layer 124 can be exposed to the radiation to form the pattern 128.

FIG. 7 illustrates the portion of the wafer 100 after the selectively exposed photoresist layer 124 has been developed (for example, the exposed or unexposed portions of the photoresist layer 124 are removed through interaction with a suitable developer). Portions of the photoresist layer 124 are removed to form openings or apertures 128.
The choice of developer depends on the specific chemical composition of the photoresist layer 124. For example, an aqueous alkaline solution can be used to remove portions of the photoresist layer 124. Alternatively, one or more dilute aqueous acid solutions, hydroxide solutions, water, or organic solvent solutions may be used to selectively remove the exposed portions of the photoresist layer 124.

FIG. 8 illustrates the portion of the wafer 100 in which the photoresist layer 124 is used as an etch mask to etch the electrode layer 120, forming one or more apertures 128 in the stack. The electrode layer 120 etched in this way can form the top electrodes of the final memory cells.

In FIG. 9, the apertures 128 are etched further into the organic polymer layer 116. The organic polymer layer 116 may be dry etched using, for example, O2/N2 + CO and/or O2/N2 etchant compositions.

FIG. 10 illustrates the portion of the wafer 100 in which the etch continues through the conductivity promoting layer 112, using the photoresist layer 124 as an etch mask, to extend the one or more apertures 128. In FIG. 11, the apertures 128 are etched further into the first conductive layer 108 to form bit lines 132.

Referring generally to FIG. 12, a second cross-sectional view of the wafer 100 is illustrated, taken along line 130-130 at the processing stage of FIG. 11. The bit lines 132 in FIG. 12 are continuous rows.

In FIG. 13, the remaining portion of the photoresist layer 124 is removed, for example, by O2 plasma ashing or a chemical stripping solution.

Returning to the orientation of the wafer 100 shown in FIG. 11, FIG. 14 illustrates a dielectric or insulating material 134 deposited over the electrode layer 120, the organic polymer layer 116, the conductivity promoting layer 112, the first conductive layer 108, and the substrate 104. The dielectric material 134 fills the apertures 128 formed in the electrode layer 120, the organic polymer layer 116, the conductivity promoting layer 112, and the first conductive layer 108, and is formed to a height sufficient for the word lines (not shown) to be located above it. The dielectric material 134 can be formed to a height of, for example, about 2 microns or less. The dielectric material 134 includes, for example, silicon oxide (SiO), silicon dioxide (SiO2), silicon nitride (Si3N4, SiN), silicon oxynitride (SiOxNy), fluorinated silicon oxide (SiOxFy), polysilicon, amorphous silicon, TEOS, phosphosilicate glass (PSG), borophosphosilicate glass (BPSG), any suitable spin-on glass, polyimide, or any other suitable dielectric material.

It should be understood that the dielectric material 134 may be applied in multiple stages. For example, the dielectric material 134 may initially be deposited in the apertures at a low deposition rate using a conformal dielectric.
In one example, additional dielectric material 134 is then applied by a more rapid deposition process, for example, spin coating, sputtering, thermal oxidation or nitridation of single crystal silicon and polysilicon, silicide formation by direct reaction with a deposited metal, chemical vapor deposition (CVD), physical vapor deposition (PVD), low pressure chemical vapor deposition (LPCVD), plasma enhanced chemical vapor deposition (PECVD), high density chemical vapor deposition (HDCVD), rapid thermal chemical vapor deposition (RTCVD), metal organic chemical vapor deposition (MOCVD), and/or pulsed laser deposition (PLD).

Referring to FIG. 15, a second conductive layer 136 is deposited over the electrode layer 120, the organic polymer layer 116, the conductivity promoting layer 112, the first conductive layer 108, and the substrate 104. In particular, the second conductive layer 136 contacts the electrode layer 120. In one example, the second conductive layer 136 includes deposited aluminum.

FIG. 16 illustrates patterning the second conductive layer 136 to form word lines (not shown). A photoresist layer 140 and selective etching, as described above, may be used for the patterning. Portions of the photoresist layer 140 are removed to form openings or apertures 144. The apertures 144 define the word lines 148. In addition to forming the word lines 148 by selective etching of the second conductive layer 136 (FIG. 17), the etching can selectively remove portions of the electrode layer 120 (FIG. 18) and the organic polymer layer 116 (FIG. 19) to further isolate the memory cells 156. The openings or apertures 144 may then be filled with the dielectric material 152 as described above (FIG. 20). In this manner, the memory cells 156 may be self-aligned with the bit lines 132 formed in the first conductive layer 108 and the word lines 148 formed in the second conductive layer 136.

The memory cells 156 can be used in any device requiring memory. For example, the memory devices can be applied in computers, appliances, industrial equipment, handheld devices, telecommunications equipment, medical equipment, research and development equipment, transportation vehicles, radar/satellite devices, and the like. Handheld devices, and particularly handheld electronic devices, achieve improvements in portability due to the small size and light weight of such memory devices. Examples of handheld devices include mobile phones and other two-way communication devices, personal digital assistants, handheld electronic organizers, pagers, notebook computers, remote controls, recorders (video and audio), radios, small televisions and web viewers, cameras, and the like.

In view of the foregoing structural and functional features, methodologies in accordance with one or more aspects of the present invention will be better appreciated with reference to the flowcharts of FIGS. 21, 22, and 23. For simplicity of explanation, the methodologies are depicted as a series of functional blocks. It should be understood, however, that the present invention is not limited by the order of the blocks, as in accordance with the present invention the functions described in the blocks may occur in different orders and/or concurrently. Moreover, not all functions described in the blocks may be required in a given process in accordance with one or more aspects of the present invention.
It should be understood that the functions associated with the blocks may be implemented by software, hardware, a combination thereof, or any other suitable means (e.g., device, system, process, component). It should also be understood that the blocks are intended only to illustrate specific aspects of the invention in a simplified manner, and that fewer and/or more blocks may be used to illustrate aspects of the invention. Turning now to FIGS. 21, 22, and 23, a method 2100 of manufacturing a memory cell according to one or more aspects of the present invention is described. In block 2104, a conductivity promoting layer is formed on the first conductive layer (e.g., the first conductive layer may serve as a bit line). The conductivity promoting layer can be formed using any suitable method, including, for example, growth, deposition, spin coating, and/or sputtering techniques. The conductivity promoting layer may be applied at any suitable thickness depending on the implementation and use of the manufactured memory device. It should be understood, however, that the conductive layer is generally thicker than the conductivity promoting layer. According to one aspect of the invention, the thickness of the conductive layer is approximately 50 to 250 times the thickness of the conductivity promoting layer. According to another aspect of the invention, the thickness of the conductive layer is approximately 100 to 500 times the thickness of the conductivity promoting layer. It should be understood, however, that other suitable ratios can also be utilized in accordance with aspects of the present invention. The conductivity promoting layer may include, for example, any one or more of copper sulfide (Cu2-xSy, CuS), copper oxide (CuO, Cu2O), manganese oxide (MnO2), titanium dioxide (TiO2), indium oxide (In3O4), silver sulfide (Ag2-xS2, AgS), silver-copper sulfide composite (AgyCu2-xS2), gold sulfide (Au2S, AuS), cerium sulfate (Ce(SO4)2), ammonium persulfate ((NH4)2S2O8), iron oxide (Fe3O4), lithium complexes (LixTiS2, LixTiSe2, LixNbSe3, LixNb3Se3), and palladium hydride (HxPd), where x and y are selected to produce the desired properties; such materials generally donate and accept charges (holes and/or electrons). The conductive layer may include, for example, copper and any other suitable conductive material, such as aluminum, chromium, germanium, gold, magnesium, manganese, indium, iron, nickel, palladium, platinum, silver, titanium, zinc, and alloys thereof, as well as indium tin oxide, polysilicon, doped amorphous silicon, metal silicides, and the like. Examples of alloys that can be used as conductive materials include Hastelloy, Kovar, Invar, Monel, Inconel, brass, stainless steel, magnesium-silver alloy, and various other alloys. Some exemplary thickness ranges for the conductive layer include about 0.01 microns or more and about 10 microns or less; about 0.05 microns or more and about 5 microns or less; and/or about 0.1 microns or more and about 1 micron or less. In block 2108, an organic polymer layer is formed on the conductivity promoting layer and the first conductive layer. The organic polymer layer can be applied over the underlying layers using any suitable technique, for example, by spin coating: a quantity of polymer material is placed at the center of the wafer, and the wafer is then spun rapidly to distribute the material evenly across its surface.
It should be understood that this organic polymer layer may include, for example, any one or more of polyacetylene (cis or trans); polyphenylacetylene (cis or trans); polydiphenylacetylene; polyaniline; poly(p-phenylene vinylene); polythiophene; polyporphyrin; porphyrinic macrocycles; thiol-derivatized polyporphyrin; polymetallocenes, for example, polyferrocene; polyxylylene; polyethylene; polypyrrole; polydiphenylacetylene (DPA) containing silicon, about 1.5% copper (in the I and II valence states), and about 28% oxygen; and the like. In addition, the properties of the polymer can be modified by doping with appropriate dopants (e.g., salts). The appropriate thickness of the organic polymer layer depends on the actual implementation and/or use of the manufactured memory device; an example of a suitable thickness is a range between about 300 angstroms and 5,000 angstroms. In block 2112, an electrode layer is formed on the organic polymer layer, the conductivity promoting layer, and the first conductive layer. This electrode layer may include, for example, any one or more of amorphous carbon, tantalum, tantalum nitride (TaN), titanium, and titanium nitride (TiN), and may be formed by any suitable manufacturing technique. One technique for forming the electrode layer is spin coating, which includes depositing a mixture of the material forming the electrode layer and then rapidly spinning the wafer to distribute the material evenly over the wafer. Alternatively, or in addition, sputtering, growth, and/or deposition techniques can be used to form the electrode layer, including, for example, physical vapor deposition (PVD), chemical vapor deposition (CVD), low pressure chemical vapor deposition (LPCVD), plasma enhanced chemical vapor deposition (PECVD), high density chemical vapor deposition (HDCVD), rapid thermal chemical vapor deposition (RTCVD), metal organic chemical vapor deposition (MOCVD), and pulsed laser deposition (PLD). It should be understood that this electrode layer may have any suitable thickness depending on the actual implementation and/or use of the manufactured memory device. Suitable thicknesses of the electrode layer include, for example, a range between 100 angstroms and 1,500 angstroms. It should further be understood that the organic polymer layer is generally thicker than the electrode layer. According to one aspect of the invention, the thickness of the organic polymer layer is about 10 to 500 times greater than the thickness of the electrode layer. According to another aspect of the invention, the thickness of the organic polymer layer is about 25 to 250 times greater than that of the electrode layer. It should be understood, however, that other suitable ratios may be utilized in accordance with aspects of the present invention. Next, in block 2116, a photoresist layer is formed on the electrode layer, the organic polymer layer, the conductivity promoting layer, and the first conductive layer. This photoresist layer can be formed to an appropriate thickness on the underlying layers by, for example, growth, deposition, spin coating, and/or sputtering, and serves as a mask for etching the underlying layers once patterns or openings are formed in the developed photoresist film. By way of example, a deep ultraviolet, chemically amplified photoresist material based on poly-p-hydroxystyrene partially substituted with tert-butoxycarbonyloxy groups can be used.
Photoresist materials are commercially available from many sources, including Shipley Corporation, Kodak Company, Hoechst Celanese Corporation, Brewer, and IBM Corporation. The photoresist layer may be a positive or negative photoresist material, and the exposed or unexposed portions of the photoresist may subsequently be removed or developed depending on the type of photoresist utilized. At block 2120, the photoresist layer is exposed to form one or more patterns within the photoresist layer. The photoresist layer can be patterned using, for example, electromagnetic radiation having a relatively short wavelength (e.g., shorter than 200 nm). It will be understood that the photoresist layer can be selectively exposed to radiation; that is, a portion of the photoresist layer can be selectively exposed to radiation to form a pattern. After exposing the photoresist layer, at block 2124, the selectively exposed photoresist layer is developed, for example, by removing exposed or unexposed portions of the photoresist layer through interaction with a suitable developer. Part of the photoresist layer is thereby removed to form openings or holes. The choice of developer depends on the specific chemical composition of the photoresist layer. For example, an aqueous alkaline solution can be used to remove part of the photoresist layer. Alternatively, one or more diluted aqueous acid solutions, hydroxide solutions, water, or organic solvent solutions can be used to selectively remove the exposed portions of the photoresist layer. At block 2128, the photoresist layer is used as an etching mask and the electrode layer is etched to form holes in the stack. At block 2132, the holes are further etched into the organic polymer layer. The organic polymer layer can be dry etched using, for example, O2/N2 + CO and/or O2/N2 etchant compositions. At block 2136, the etch continues through the conductivity promoting layer. At block 2140, the first conductive layer is etched and the remaining portion of the photoresist is removed. At block 2144, a dielectric or insulating material is applied over the electrode layer, the organic polymer layer, the conductivity promoting layer, and the first conductive layer. The dielectric material fills the holes formed in the electrode layer and the organic polymer layer, and is formed to a height sufficient for the word lines formed thereafter. This dielectric material may, for example, be formed to a height of 2 microns or less. It will be appreciated that this dielectric material can be applied in multiple stages. For example, the dielectric material may first be deposited in the holes at a low deposition rate using a conformal dielectric substance. The remaining dielectric material can then be applied by a rapid deposition process, such as spin coating, sputtering, thermal oxidation and nitridation of monocrystalline and polycrystalline silicon, silicide formation methods that react directly with deposited metal, chemical vapor deposition (CVD), physical vapor deposition (PVD), low pressure chemical vapor deposition (LPCVD), plasma enhanced chemical vapor deposition (PECVD), high density chemical vapor deposition (HDCVD), rapid thermal chemical vapor deposition (RTCVD), metal organic chemical vapor deposition (MOCVD), and/or pulsed laser deposition (PLD).
Dielectric materials can include, for example, silicon oxide (SiO), silicon dioxide (SiO2), silicon nitride (Si3N4, SiN), silicon oxynitride (SiOxNy), fluorinated silicon oxide (SiOxFy), polysilicon, amorphous silicon, TEOS, phosphosilicate glass (PSG), borophosphosilicate glass (BPSG), any suitable spin-on glass, polyimide, or any other suitable dielectric material. At block 2148, a second conductive layer is formed (e.g., to form a word line). At block 2152, the second conductive layer is patterned using photoresist. At block 2156, the photoresist is developed. At block 2160, the second conductive layer is etched. At block 2164, the electrode layer is etched. Next, at block 2168, the organic polymer layer is etched and the remaining portion of the photoresist is removed. At block 2172, a dielectric material is applied. It will be appreciated that the conductivity promoting layer (e.g., Cu2-xSy, where x and y are selected to produce the desired properties) plays an important role in polymer memory cells: it significantly improves the conductivity of the organic polymer layer. This behavior is attributable, at least in part, to the following mechanisms: generation of charge carriers by Cu2-xSy, formation of a charge depletion layer, distribution of the charge carriers, and redistribution of the charge carriers after the electric field is reversed, which causes loss of the stored data. The following discussion describes and explains the concentration and behavior of the charge carriers. The following examples use a conductive polymer with Cu2-xSy (where x and y are selected to produce the desired properties) as the conductivity promoting material. Regarding the generation of charge carriers, the copper in Cu2-xSy is in a non-stoichiometric oxidation state (1.8 ≤ x ≤ 2.0). Its ability to take electrons from the conductive polymer is quite strong, according to the following reaction:

Cu2-xSy + polymer → Cu(I)S- + polymer+ (1)

As a result, an intrinsic electric field is generated due to charge accumulation at the interface between the Cu2-xSy and the polymer. This situation is shown in FIG. 24, which graphically illustrates the effect of the intrinsic electric field at the interface between the Cu2-xSy and the polymer. When an external field is applied, the oxidized polymer (polymer+) provides the charge carriers. The conductivity of the polymer depends on the carrier concentration and mobility:

σ = qpμ (2)

where q is the charge of the carrier, p is the carrier concentration, and μ is the carrier mobility. Regarding the charge depletion layer, applying principles similar to those used for semiconductors, the potential energy function can be expressed as:

V(x) = qNp(dp·x - x^2/2)/ε (3)

where Np is the average concentration of charge carriers, ε is the dielectric constant of the polymer, and dp is the width of the charge depletion region, which can be obtained from:

dp = [2ε(Vb ± V)/(qNp)]^(1/2) (4)

where V is the applied external voltage; the sign is "-" for a forward voltage and "+" for a reverse voltage. The voltage function of equation (3) can be approximated to simplify the derivation. Regarding the charge carrier distribution, as in p-type doped semiconductors, two typical transport processes can occur in an electric field. The current flux can be expressed as:

J = -qD(dp/dx) + qμpE (5)

where D is the diffusion constant of the charge carriers and E is the electric field at x.
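To make these relations concrete, the following short Python sketch evaluates equations (2) through (5) numerically. It is illustrative only: apart from the intrinsic voltage Vb, which the description quotes as 0.02 V for the exemplary cell, the permittivity, mobility, carrier concentration, and diffusion constant are assumed demonstration values, not figures taken from this description.

```python
import math

# Illustrative sketch of equations (2)-(5); all parameter values below except
# Vb are assumptions chosen for demonstration, not values from the patent.
q   = 1.602e-19            # carrier charge [C]
eps = 3.0 * 8.854e-14      # polymer permittivity [F/cm], assuming eps_r ~ 3
Np  = 1e18                 # assumed average carrier concentration [1/cm^3]
mu  = 1e-3                 # assumed carrier mobility [cm^2/(V*s)]
Vb  = 0.02                 # intrinsic voltage [V], as quoted for the example cell

# Equation (2): conductivity sigma = q * p * mu
sigma = q * Np * mu
print(f"sigma = {sigma:.3e} S/cm")

# Equation (4): depletion width dp = [2*eps*(Vb +/- V) / (q*Np)]^(1/2),
# with "-" for a forward voltage and "+" for a reverse voltage.
def depletion_width(V, forward=True):
    Veff = (Vb - V) if forward else (Vb + V)
    return math.sqrt(2.0 * eps * max(Veff, 0.0) / (q * Np))   # [cm]

print(f"dp at V = 0: {depletion_width(0.0):.3e} cm")

# Equation (5): drift-diffusion flux J = -q*D*(dp/dx) + q*mu*p*E
def flux(p, dp_dx, E, D=2.6e-5):   # D assumed via the Einstein relation at ~300 K
    return -q * D * dp_dx + q * mu * p * E
```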
If there is no current, the distribution of the carriers is:

p(x) = p(0)·exp([V(0) - V(x)]/Vt) (6)

where p(0) is the concentration and V(0) is the voltage at the respective interface, and Vt = kT/q. When the forward voltage is extremely large and the current flow J > 0, a voltage distribution in the memory cell can be assumed and the analytical equations can be evaluated to obtain the steady-state current. Overall, under a forward voltage, the charge distribution p(x) is an increasing function of x. When a reverse voltage is applied, V(x) > V(0), and the charge concentration is a decreasing function of x. The final characteristic, the retention time, is related to the fact that the forward voltage generates more charge carriers, which accumulate at the end of the cell away from the passive (Cu2-xSy) layer. Once the voltage is removed, however, the charge carrier concentration resets through two processes: the charge carriers diffuse back toward the Cu2-xSy layer, and the charge carriers recombine at the interface. Fick's law can be used to describe the first process, the diffusion of the charge carriers toward the Cu2-xSy layer. The recombination process of the charge carriers can be described by:

Cu(I)S- + polymer+ → Cu(II)S + polymer (7)

The retention time is the time required for the charge carriers to redistribute to their original state. The reaction rate is typically faster than the diffusion rate; therefore, the retention time is in general determined by the diffusion process. The exemplary memory cell described here is discussed according to the above equations 1 to 9 and the description of FIGS. 25 to 32. The parameters of this exemplary memory cell are: intrinsic voltage Vb = 0.02 V; equilibrium constant Keq = 2.17 × 10^-4; concentration of Cu2-xSy and polymer at the interface [polymer]0 = [Cu2-xSy]0 = 1083/cubic centimeter; polymer thickness d = 5 × 10^-5 cm (0.5 micron); and CuS thickness dCuS = 5 × 10^-7 cm (0.005 micron). Six typical examples are calculated to illustrate the electrical operation of the organic memory cell according to an aspect of the present invention. FIG. 25 illustrates a graph 2500 of an exemplary memory cell charge carrier distribution 2502 as a function of the distance from the Cu2-xSy/organic polymer interface according to an aspect of the present invention. The charge carrier concentration 2502 is shown as a decreasing function of the distance (x) from the interface. This graph 2500 assumes an external voltage V = 0 and a current flow J = 0. The charge carrier concentration 2502 is calculated from equation 6 under the assumption of a constant electric field; however, the points shown are independent of the assumed constant electric field. Turning now to FIG. 26, which illustrates a graph 2600 of the charge carrier distribution 2602 of an exemplary organic memory cell according to an aspect of the present invention. The parameters of this graph 2600 are set as follows: forward voltage = 0.12 V and current flow J = 0. The Cu2-xSy end has a higher voltage than the other (organic polymer) end. This drives the charge carriers away from the Cu2-xSy layer and makes the charge carrier concentration an increasing function of x. Even the lowest concentration, p(0), is not negligible in this example (for example, the value shown in FIG. 26 is 3.32 × 10^19/cubic centimeter).
This may explain why, when a forward voltage is applied, the polymer can be an excellent electrical conductor. As before, the graph is drawn using equation 6 under a constant electric field assumption, and the points shown are independent of that assumption. FIG. 27 illustrates another graph 2700 of an exemplary memory cell charge carrier distribution 2702 as a function of the distance from the Cu2-xSy/organic polymer interface according to an aspect of the invention. In this graph, the parameters are set to reverse voltage = 0.28 V and current flow J = 0. Under a reverse voltage, the charge carriers are concentrated at the Cu2-xSy/polymer interface, and the concentration decreases rapidly away from the interface, which can explain why the memory cell becomes non-conductive when a high reverse voltage is applied. Here too, equation 6 is used to draw the graph under a constant electric field assumption, and the points shown are independent of that assumption. Referring now to FIG. 28, another graph 2800 of the charge carrier distribution 2802 of an exemplary memory cell is illustrated as a function of distance according to an aspect of the invention. In the graph 2800, the parameters are set as follows: forward voltage = 0.52 V and current flow J > 0 (pj = 10^18/cubic centimeter). When the current flow J > 0, the charge carrier concentration is still an increasing function of x, because the forward voltage drives the charge carriers away from the Cu2-xSy interface. The important point is that the lowest concentration p(x) occurs at the interface. FIG. 29 illustrates another graph 2900 of the charge carrier interface concentration 2902 of an exemplary memory cell as a function of the forward voltage V. In this graph, the parameter is set to J > 0 (pj = 10^18/cubic centimeter) and a constant electric field model is assumed. This model assumes that the electric field in the memory cell is constant, so that the voltage V(x) is a linear function. The model can be used when the diffusion constant of the polymer is extremely small and the resistance is constant. According to this model, the charge carrier concentration at the interface is derived as a function of voltage. It should be noted that when the forward voltage is sufficiently large and the interface current is controlled by the charge carriers rather than by charge injection, p0(V) tends to a constant. In this way, p(0) can be re-expressed; the resulting equation 10 shows that the limit of p(0) is an increasing function of the thickness ratio between the Cu2-xSy layer and the polymer layer. FIG. 30 illustrates another graph 3000, according to an aspect of the present invention, of the charge carrier interface concentration 3002 of an exemplary memory cell as a function of the forward voltage V. In this graph 3000, p(0) is shown as a function of the forward voltage and the current J (which may or may not be greater than zero) under a staircase potential energy function model. This model assumes that a step function can be used to describe the voltage function V(x). It can be used when the diffusion constant of the polymer is extremely large, so that the resistance in the memory cell is extremely small. According to this model, the charge carrier concentration at the interface is derived as a function of voltage. It should be noted that in FIG. 30, when the forward voltage is sufficiently large, p0(V) approaches zero.
When the charge carriers at the interface control the current flow, this value is a function of voltage. This zero-limit behavior is caused by the interface boundary limit set by reaction (1): when charge carriers are transferred quickly from the interface to the other end, a supply limit is reached. This limit p(0) can therefore also be re-expressed; as before, p(0) is an increasing function of the thickness ratio between the Cu2-xSy layer and the polymer layer. To summarize the above discussion, it is important to note that the migration of the charge carriers determines the measured current when the limiting flow occurs within the polymer. Under the assumption of a constant electric field, since the lowest concentration in the memory cell is at the interface, pj = p(0) can be used as the function describing the charge carrier concentration when the polymer determines the limiting current. This condition results in a constant p(x), which means that the contribution of diffusion to the flux in equation 5 is zero. Under the assumption of a step potential function, another function can be used to describe the charge carrier concentration p(x): the initial charge carrier concentration p(0) is substantially smaller than in other regions, and therefore J is still determined by p(0). Another point to note concerns the boundary conditions. Unlike in semiconductors, the boundary condition here applies only to the concentration at the interface, not elsewhere. This boundary condition limits the total number of charge carriers generated in the memory cell. The above equations (equations 1 to 7) and FIGS. 27 to 30 describe polymer memory cells and their modeled behaviors. This model can be used to interpret measurement data and can be applied to materials other than Cu2-xSy. In addition, the model can be used to consider how to improve retention and response time, as well as the design of other devices such as transistors. Furthermore, the model can be applied to develop threshold voltages for the various set conductivity levels (e.g., set states), read conductivity levels, and erase conductivity levels, thereby enabling the write (program), read, and erase operations of memory devices. What has been described above includes one or more aspects of the present invention. Although it is not possible to describe every conceivable combination of components or methods, those skilled in the art will recognize that many further combinations and permutations of the present invention are possible. Accordingly, all such modifications, improvements, and variations are intended to fall within the scope of the appended claims. In addition, although a particular feature of the invention may have been disclosed with respect to only one of several examples, such a feature may be combined with one or more other features as may be desired, and may be applied to any given or particular example. Furthermore, the term "includes" as used in the detailed description and the claims is intended to be inclusive in a manner similar to the term "comprising". |
Self-aligning fabrication methods for forming memory access devices comprising a doped chalcogenide material. The methods may be used for forming three-dimensionally stacked cross point memory arrays. The method includes forming an insulating material over a first conductive electrode, patterning the insulating material to form vias that expose portions of the first conductive electrode, forming a memory access device within the vias of the insulating material and forming a memory element over the memory access device, wherein data stored in the memory element is accessible via the memory access device. The memory access device is formed of a doped chalcogenide material and formed using a self-aligned fabrication method. |
CLAIMS What is claimed as new and desired to be protected by Letters Patent of the United States is: 1. A method of forming a memory device comprising: forming an insulating material over a first conductive electrode; patterning the insulating material to form vias that expose portions of the first conductive electrode; forming a memory access device within the vias of the insulating material; and forming a memory element over the memory access device, wherein data stored in the memory element is accessible via the memory access device, wherein the memory access device is formed of a doped chalcogenide material, and the memory access device is formed using a self-aligned fabrication method. 2. The method of claim 1, wherein the doped chalcogenide material comprises one of the group consisting of a Cu-doped combination of Se and/or Te alloyed with one or more of Sb, In and Ge. 3. The method of claim 1, wherein the doped chalcogenide material comprises one of the group consisting of a Ag-doped combination of Se and/or Te alloyed with one or more of Sb, In and Ge. 4. The method of claim 1, wherein the self-aligned fabrication method for forming the memory access device further comprises depositing the doped chalcogenide material using electrochemical deposition. 5. The method of claim 1, wherein the self-aligned fabrication method for forming the memory access device further comprises depositing the doped chalcogenide material using vapor phase deposition. 6. The method of claim 4, wherein during the electrochemical deposition process, the doped chalcogenide material is only formed on the exposed portions of the first conductive electrode. 7. The method of claim 1, wherein vias formed in the insulating material have a width of 40nm or less. 8. The method of claim 1, wherein the self-aligned fabrication method for forming the memory access device occurs at temperatures at or below 400 °C. 9. The method of claim 4, wherein forming a memory access device further comprises planarizing the electrochemically deposited doped chalcogenide material to a top surface of the insulating material. 10. The method of claim 1, wherein the first conductive electrode is a word line. 11. The method of claim 1, further comprising forming a second conductive electrode over the memory element. 12. The method of claim 11, wherein the second conductive electrode is a bit line. 13. The method of claim 11, wherein the memory device is a cross point memory. 14. The method of claim 13, further comprising forming a plurality of repeated levels of individual memory devices, each repeated level comprising the first conductive electrode, the insulating material, the memory access device, the memory element and the second conductive electrode, wherein the cross point memory device comprises multiple levels of memory elements and memory access devices such that it is a three-dimensionally stacked memory device, and wherein each memory access device is a select device for a corresponding memory element. 15. A method of forming a memory device comprising: forming an insulating material over a first conductive electrode; patterning the insulating material to form vias that expose portions of the first conductive electrode; forming a memory access device within the vias of the insulating material using a self-aligned fabrication method; and forming a memory element over the memory access device, wherein data stored in the memory element is accessible via the memory access device. 16.
The method of claim 15, wherein the self-aligned fabrication method further comprises: depositing a chalcogenide material; depositing a dopant material on the chalcogenide material; and causing the chalcogenide material to become doped with the dopant material. 17. The method of claim 16, wherein the chalcogenide material is deposited using vapor phase deposition. 18. The method of claim 16, wherein the chalcogenide material is deposited using electrochemical deposition. 19. The method of claim 16, wherein the dopant material is selectively deposited on the chalcogenide material using one of electrochemical deposition or physical vapor deposition of the dopant material. 20. The method of claim 16, wherein the chalcogenide material is a combination of Se and/or Te alloyed with one or more of Sb, In and Ge. 21. The method of claim 16, wherein the dopant material is one of Cu or Ag. 22. The method of claim 18, wherein during the electrochemical deposition process, the chalcogenide material is only formed on the exposed portions of the conductive electrode. 23. The method of claim 16, further comprising planarizing the dopant material and portions of the doped chalcogenide material extending above the vias in the insulating material. 24. A method of forming a memory device comprising: forming an insulating material over a first conductive electrode; patterning the insulating material to form vias that expose portions of the first conductive electrode; forming a memory access device within the vias of the insulating material using a self-aligned fabrication method; and forming a memory element over the memory access device, wherein data stored in the memory element is accessible via the memory access device, wherein the self-aligned fabrication method further comprises: depositing a chalcogenide material; infusing the chalcogenide material with Ge; depositing a dopant material on the Ge-infused chalcogenide material; and causing the Ge-infused chalcogenide material to become doped with the dopant material. 25. The method of claim 24, wherein the chalcogenide material is deposited using vapor phase deposition. 26. The method of claim 24, wherein the chalcogenide material is deposited using electrochemical deposition. 27. The method of claim 24, wherein the chalcogenide material is infused with Ge using gas-cluster ion beam modification. 28. The method of claim 24, wherein the dopant material is selectively deposited on the Ge-infused chalcogenide material using one of electrochemical deposition or physical vapor deposition of the dopant material. 29. The method of claim 24, wherein the chalcogenide material is a combination of Se and/or Te alloyed with one or more of Sb and In. 30. The method of claim 24, wherein the dopant material is one of Cu or Ag. 31. The method of claim 24, wherein during the electrochemical deposition process, the chalcogenide material is only formed on the exposed portions of the conductive electrode. 32. The method of claim 24, further comprising planarizing the dopant material and portions of the doped chalcogenide material extending above the vias in the insulating material. |
METHODS OF SELF-ALIGNED GROWTH OF CHALCOGENIDE MEMORY ACCESS DEVICE FIELD OF THE INVENTION [0001] Disclosed embodiments relate generally to memory devices and more particularly to methods of forming self-aligned, chalcogenide memory access devices for use in memory devices. BACKGROUND [0001] A non-volatile memory device is capable of retaining stored information even when power to the memory device is turned off. Traditionally, non-volatile memory devices occupied large amounts of space and consumed large quantities of power. As a result, non-volatile memory devices have been mainly used in systems where limited power drain is tolerable and battery-life is not an issue. [0002] One type of non-volatile memory device includes resistive memory cells as the memory elements therein. Resistive memory elements are those where resistance states can be programmably changed to represent two or more digital values (e.g., 1, 0). Resistive memory elements store data when a physical property of the memory elements is structurally or chemically changed in response to applied programming voltages, which in turn changes cell resistance. Examples of variable resistance memory devices include memory devices that include memory elements formed using, for example, variable resistance polymers, perovskite materials, doped amorphous silicon, phase-changing glasses, and doped chalcogenide glass, among others. Memory access devices, such as diodes, are used to access the data stored in these memory elements. FIG. 1 illustrates a general structure of a cross point type memory device. Memory cells are positioned between access lines 21, 22, for example word lines, and data/sense lines 11, 12, for example bit lines. Each memory cell typically includes a memory access device 31 electrically coupled to a memory element 41.[0003] As in any type of memory, it is a goal in the industry to have as dense a memory array as possible; therefore, it is desirable to increase the number of memory cells in an array of a given chip area. In pursuing this goal, some memory arrays have been designed in multiple planes in three dimensions, stacking planes of memory cells above one another. However, formation of these three-dimensional structures can be very complicated and time consuming. One of the limiting factors in forming such three-dimensional memory structures is the formation of the memory access devices. Traditional methods may require several expensive and extra processing steps and may also cause damage to previously formed materials during formation of subsequent materials. [0004] Therefore, improved fabrication methods for forming memory access devices are desired. BRIEF DESCRIPTION OF THE DRAWINGS [0002] FIG. 1 illustrates a general structure of a cross point type memory device. [0003] FIG. 2A illustrates a cross-sectional view of a cross point memory device including a memory access device according to disclosed embodiments. [0004] FIG. 2B illustrates a top view of the cross point memory device of FIG. 2A. [0005] FIG. 3 A illustrates an alternative configuration of a cross-sectional view of a cross point memory device including a memory access device according to disclosed embodiments. [0006] FIG. 3B illustrates a top view of the cross point memory device of FIG. 3 A. [0007] FIGS. 4A-4D each illustrates a cross-sectional view of an intermediate step in the fabrication of a memory device in accordance with disclosed embodiments. [0008] FIGS. 
5A and 5B are scanning electron microscope photographs showing example memory access devices formed by a disclosed embodiment.[0009] FIGS. 6A and 6B each illustrates a cross-sectional view of an intermediate step in the fabrication of a memory device in accordance with disclosed embodiments. [0010] FIGS. 7A and 7B each illustrates a cross-sectional view of an intermediate step in the fabrication of a memory device in accordance with disclosed embodiments. [0011] FIG. 8 illustrates a processor system that includes a memory device having memory access devices according to a disclosed embodiment. DETAILED DESCRIPTION [0012] In the following detailed description, reference is made to the accompanying drawings which form a part hereof, and in which are shown by way of illustration specific embodiments that may be practiced. It should be understood that like reference numbers represent like elements throughout the drawings. These example embodiments are described in sufficient detail to enable those skilled in the art to practice them. It is to be understood that other embodiments may be utilized, and that structural, material, and electrical changes may be made, without departing from the scope of the invention, only some of which are discussed in detail below. [0013] According to disclosed embodiments, memory access devices for accessing memory elements of a memory cell are formed using self-aligning fabrication methods. Self-aligning fabrication techniques require fewer processing steps, and are thus more cost-effective, than many traditional methods, for example by reducing the number of masking steps required for fabrication. Self-aligned fabrication methods may also minimize the required contact area of the memory access device because they may provide superior fill capabilities. [0014] Moreover, the self-aligning methods of the disclosed embodiments allow easy three-dimensional stacking of multiple levels of memory arrays. One way in which this is possible is because the self-aligning fabrication methods are implemented at low temperatures (e.g., at or below 400 °C). Low temperature formation facilitates three-dimensional stacking of multiple memory levels because it limits damage to previously formed levels. [0015] Additionally, according to the disclosed embodiments, the memory access devices are formed of Cu- or Ag-doped chalcogenide materials. Chalcogenide materials (doped with, e.g., nitride) are known in the art for use as a phase-change material for forming memory elements. However, it is also known that Cu- or Ag-doped chalcogenides, which act as electrolytes rather than as a phase-change material, are particularly suitable for use as memory access devices. In a Cu- or Ag-doped chalcogenide material, the metal "dopant" ions are mobile within the chalcogenide material. These "mobile" ions are what allows current to flow through the chalcogenide material when it is utilized as a memory access device. [0016] The use of Cu- or Ag-doped chalcogenide materials also provides the desired benefits of high current density, e.g., greater than 10^6 A/cm2, and low threshold ON voltage (i.e., the minimum voltage required to "turn on" or actuate the device), e.g., less than 1 V. The behavior can be made to represent a diode-like select device. These aspects of a memory access device are important for appropriate operation of a high-density memory device.
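As a rough sense of scale for these figures, the snippet below estimates the current available through a single sub-40nm contact at the quoted current density. This is an illustrative back-of-the-envelope sketch; the square contact geometry and the exact density value are assumptions, not details from the description.

```python
# Illustrative estimate: current through one memory-access-device contact at a
# current density of 10^6 A/cm^2 (the text quotes "greater than 10^6 A/cm^2").
J = 1.0e6                    # current density [A/cm^2]
side_cm = 40e-7              # assumed 40 nm square contact side, in cm
area = side_cm ** 2          # contact area [cm^2], here 1.6e-11 cm^2
current = J * area           # available current [A]
print(f"area = {area:.2e} cm^2 -> current = {current * 1e6:.1f} uA")  # ~16 uA
```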
[0017] The memory access device 20 of the disclosed embodiments may be formed of any Cu- or Ag-doped chalcogenide material, including, for example, a Cu- or Ag-doped combination of Se and/or Te alloyed with one or more of Sb, In, Sn, Ga, As, Al, Bi, S, O and Ge. Specific examples of appropriate chalcogenide materials (e.g., chalcogenide alloys), which are then doped with one of copper or silver, for use in the memory access devices of the disclosed embodiments include alloys of In-Se, Sb-Te, As-Te, Al-Te, Ge-Te, Ge-S, Te-Ge-As, In-Sb-Te, Te-Sn-Se, Ge-Se-Ga, Bi-Se-Sb, Ga-Se-Te, Sn-Sb-Te, Te-Ge-Sb-S, Te-Ge-Sn-O, Sb-Te-Bi-Se, Ge-Sb-Se-Te, and Ge-Sn-Sb-Te. [0018] FIGS. 2A and 2B illustrate an example of a cross point memory device 100 including memory access devices 20 formed in accordance with the disclosed embodiments. FIG. 2A illustrates a cross-sectional view of the cross point memory device 100 and FIG. 2B illustrates a top-down view of the cross point memory device 100. A memory access device 20, an electrode 150 and a discrete memory element 140 are stacked at the intersection of the access lines 110, for example word lines, and the data/sense lines 120, for example bit lines, of the cross point memory device 100. Each discrete memory element 140 is accessed via the corresponding memory access device 20. Access lines 110 and data/sense lines 120 are formed of a conductive material, such as, for example, aluminum, tungsten, tantalum or platinum, or alloys of the same. Suitable materials for electrode 150 include, for example, TiN, TaN, Ta, TiAlN and TaSiN. Memory element 140 may be formed of an appropriate variable resistance material including, for example, variable resistance polymers, perovskite materials, doped amorphous silicon, phase-changing glasses, and doped chalcogenide glass, among others. An insulating material 130, such as an oxide, fills the other areas of the memory device. [0019] FIGS. 3A and 3B illustrate cross-sectional and top-down views, respectively, of an alternative arrangement of a cross point memory device 200. In FIGS. 3A and 3B, like elements are indicated by the same reference numerals from FIGS. 2A and 2B and are not described in detail. As can be seen in FIG. 3A, memory element 240 is formed as a continuous layer instead of being formed as discrete elements, as in memory element 140 (FIG. 2A). This configuration further reduces the complexity of manufacturing as well as alignment problems between the memory element 140 and corresponding electrodes 150/memory access devices 20. [0020] Except for the formation of the memory access device 20, which is formed in accordance with the disclosed embodiments, the other elements of the cross point memory devices 100/200 (e.g., word lines, bit lines, electrodes, etc.) are formed using methods known in the art. An example method is now described; however, any known fabrication methods may be used for the other elements of cross point memory devices 100/200. Access line 110 may be formed over any suitable substrate. The conductive material forming access lines 110 may be deposited with any suitable methodology, including, for example, atomic layer deposition (ALD) methods or physical vapor deposition (PVD) methods, such as sputtering and evaporation, thermal deposition, chemical vapor deposition (CVD) methods, plasma-enhanced CVD (PECVD) methods, and photo-organic deposition (PODM).
Then the material may be patterned to form access lines 110 using photolithographic processing and one or more etches, or by any other suitable patterning technique. Insulating material 130 is next formed over access lines 110. The insulating material 130 may be deposited and patterned by any of the methods discussed with respect to the access lines 110 or other suitable techniques to form vias at locations corresponding to locations where access lines 110 and data/sense lines 120 will cross. Memory access devices 20 are then formed in the vias in accordance with the disclosed embodiments. [0021] In the fabrication of memory device 100 (FIGS. 2A/2B), after formation of memory access devices 20, an additional insulating material 130 may be formed over the memory access devices 20. This insulating material 130 is patterned to form vias at locations corresponding to the memory access devices 20, and electrodes 150 and memory elements 140 are deposited within the vias. Alternatively, material for forming electrodes 150 and memory elements 140 may be deposited above the memory access devices 20 and patterned to align with memory access devices 20, followed by deposition of additional insulating material 130 in vias formed by the patterning. After formation of the electrodes 150 and memory elements 140, the data/sense lines 120 are deposited and patterned by any of the methods discussed with respect to the access lines 110 or using other suitable techniques. [0022] In the fabrication of memory device 200 (FIGS. 3A/3B), after formation of memory access devices 20, an insulating material 130 may be formed over the memory access devices 20. This insulating material 130 is patterned to form vias at locations corresponding to the memory access devices 20 and electrodes 150 are deposited within the vias. Alternatively, a material for forming electrodes 150 may be deposited above the memory access devices 20 and patterned to align with memory access devices 20, followed by deposition of additional insulating material 130 in vias formed by the patterning. After formation of the electrodes 150, a memory element 240 is deposited with any suitable methodology. Then, the data/sense lines 120 are deposited and patterned by any of the methods discussed with respect to the access lines 110 or using other suitable techniques.[0023] Alternatively, access lines 110 may be formed by first forming a blanket bottom electrode and then, after formation of the memory access devices 20 (as described below), a cap layer is formed over the memory access device and the blanket bottom electrode is patterned to form the access lines 110. [0024] It should be noted that while only a single-level cross point memory structure is illustrated in FIGS. 2A/2B and 3A/3B, multiple levels may be formed one over the other, i.e., stacked to form a three-dimensional memory array, thereby increasing memory density. [0025] The memory access device 20 of the disclosed embodiments may be formed by one of several self-aligning fabrication techniques, described below. [0026] Referring to FIGS. 4A-4D, one method by which the memory access devices 20 of the disclosed embodiments may be formed is described. As seen in FIG. 4A, word line 110 and insulating material 130 are formed.
This may be done, for example, by any suitable deposition methodology, including, for example, atomic layer deposition (ALD) methods or physical vapor deposition (PVD) methods, such as sputtering and evaporation, thermal deposition, chemical vapor deposition (CVD) methods, plasma-enhanced CVD (PECVD) methods, and photo-organic deposition (PODM). As seen in FIG. 4B, insulating material 130 is patterned to form vias 131 for memory access devices 20. This may be done, for example, by using photolithographic processing and one or more etches, or by any other suitable patterning technique. The vias 131 in insulating material 130 are formed at a sub-40nm scale. Next, as seen in FIG. 4C, a Cu- or Ag-doped chalcogenide material is deposited by electrochemical deposition. Suitable materials for deposition by this process include any Cu- or Ag-doped combination of Se and/or Te alloyed with one or more of Sb, In, Sn, Ga, As, Al, Bi, S, O and Ge, as previously discussed. The exposed portions 22 of word line 110 provide a source for reduction/deposition for the electrochemical deposition process. The deposited Cu- or Ag-doped chalcogenide material thereby forms memory access devices 20 with the "mushroom" cap 25 overrun of the deposition process shown in FIG. 4C. After the electrochemical deposition process, the "mushroom" caps 25 are planarized, using for example a chemical mechanical planarization process, resulting in the structure shown in FIG. 4D. After planarizing, memory device 100/200 is completed by forming electrodes 150, memory elements 140/240 and bit lines 120 in accordance with known methods, as discussed above with respect to FIGS. 2A/2B and 3A/3B. [0027] FIG. 5A illustrates a perspective (scanning electron microscope) view of an array of memory access devices 20 formed in accordance with this embodiment. FIG. 5B illustrates a cross-sectional view of a portion of the array shown in FIG. 5A. As can be seen in FIG. 5A, the memory access devices 20 (seen as "mushroom" caps 25 in FIG. 5A) are very reliably formed only in the desired row and column positions for forming a three-dimensionally stacked memory array. As can be seen in FIG. 5B, the contact fill (within the vias 131 in insulating material 130) is void-free, demonstrates long-range fill, and the feature dimensions are at a sub-40nm scale. [0028] Using electrochemical deposition as a fabrication technique is inherently self-aligning because deposition occurs only on the exposed portions 22 of word lines 110. Further, using electrochemical deposition provides a bottom-up fill process because the exposed portions 22 of word line 110 are the only source for reduction during the electrochemical deposition process (e.g., deposition does not occur on the insulating material 130 located at the sides of opening 131). This results in a void-free contact fill of the high aspect ratio opening and thus a void-free memory access device 20. This process can be scaled to the desired sub-40nm feature dimensions because only ions in solution are required to enter the contact vias to grow the deposited material, as opposed to physical deposition techniques, which require the material being deposited to fill the vias directly. [0029] Referring to FIGS. 4A, 4B, 6A, and 6B, another method by which the memory access devices 20 of the disclosed embodiments may be formed is described. Word line 110 and insulating material 130 are formed (FIG. 4A) and vias 131 are formed in insulating material 130 (FIG.
4B), as previously discussed. Then, as seen in FIG. 6A, a vapor phase deposition method is used to deposit a chalcogenide material 19 in vias 131 (FIG. 4B). Suitable materials for deposition by this process include any combination of Se and/or Te alloyed with one or more of Sb, In, Sn, Ga, As, Al, Bi, S, O and Ge, as previously discussed. After deposition of the chalcogenide material 19, a dopant material 23 is deposited over chalcogenide material 19, as seen in FIG. 6B. This may be done, for example, by electrochemical deposition or by vapor phase deposition of the dopant material 23. Dopant material 23 may be, for example, copper or silver. The chalcogenide material 19 is then doped with the dopant material 23 using, for example, an ultraviolet (UV) photodoping step. In UV photodoping, diffusion of metal atoms is photon-induced by directing electromagnetic radiation (e.g., UV light) at the metal (e.g., dopant material 23), resulting in diffusion of metal atoms from the metal into the chalcogenide material 19. Other suitable methods of doping the chalcogenide material 19 with ions from dopant material 23 may be used. The chalcogenide material 19 is thus doped with ions from dopant material 23, resulting in a Cu- or Ag-doped chalcogenide material that forms memory access device 20. The dopant material 23 and the excess Cu- or Ag-doped chalcogenide material 20 are planarized to the level of the top surface of insulating material 130, resulting in the structure illustrated in FIG. 4D. This may be done, for example, using chemical mechanical planarization (CMP), such as CuCMP in the case of a copper dopant material 23. After planarizing, memory device 100/200 is completed by forming electrodes 150, memory elements 140/240 and bit lines 120 in accordance with known methods, as discussed above with respect to FIGS. 2A/2B and 3A/3B. [0030] Referring to FIGS. 4A, 4B, 6B, 7A and 7B, another method by which the memory access devices 20 of the disclosed embodiments may be formed is disclosed. Word line 110 and insulating material 130 are formed (FIG. 4A) and vias 131 are formed in insulating material 130 (FIG. 4B), as previously discussed. Then, as seen in FIG. 7A, a chalcogenide material 19 is deposited in vias 131 (FIG. 4B) using an electrochemical deposition method. The deposition occurs as discussed above with respect to FIG. 4C. As described above, using an electrochemical deposition technique is inherently self-aligning because deposition occurs only on the exposed portions 22 of word lines 110 (FIG. 4B). In this embodiment, suitable materials for deposition include any combination of Se and/or Te alloyed with one or more of Sb, In, Sn, Ga, As, Al, Bi, S, and O, as previously discussed. Then, as shown in FIG. 7B, gas-cluster ion beam (GCIB) modification is used to infuse the chalcogenide material 19 with Ge. In GCIB modification, a gas-cluster ion beam including Ge is accelerated onto the surface of the chalcogenide material 19 to infuse the Ge into the surface of the chalcogenide material 19. After infusion of Ge into the chalcogenide material 19, the Ge-infused chalcogenide material 19 is doped with a dopant material 23. This may be accomplished as previously described with respect to FIG. 6B. Then, the dopant material 23 and the excess Ge-infused, Cu- or Ag-doped chalcogenide material 20 are planarized to the level of the top surface of insulating material 130, resulting in the structure illustrated in FIG. 4D.
This may be done, for example, using chemical mechanical planarization (CMP), such as CuCMP in the case of a copper dopant material 23. After planarizing, memory device 100/200 is completed by forming electrodes 150, memory elements 140/240 and bit lines 120 in accordance with known methods, as discussed above with respect to FIGS. 2A/2B and 3A/3B. [0031] As an alternative to each of the above-described methods, a thicker insulating material 130 may be initially formed. In this instance, the electrochemical or vapor phase deposition of the Cu- or Ag-doped chalcogenide material 20 would not entirely fill the vias 131. Then, electrode 150 (and, in the instance of memory device 100, memory element 140) may also be formed within via 131, allowing that entire portion of memory device 100/200 to be self-aligned. [0032] Memory access devices formed in accordance with any of the previously disclosed embodiments may be formed at low temperatures, such as at or below 400 °C. By contrast, the manufacturing of conventional memory access devices, such as silicon-based junction diodes, requires much higher processing temperatures. Low temperature formation allows for three-dimensional stacking of multiple memory levels without destruction of previously formed levels. Additionally, because the memory access devices are formed in a self-aligned manner, the methods are very cost-effective. Additionally, the use of Cu- or Ag-doped chalcogenide materials allows the memory access devices to have a high current density, e.g., greater than 10^6 A/cm2, while maintaining a low threshold ON voltage, e.g., less than 1 V. [0033] The cross point memory array 100/200 (FIGS. 2A/2B and 3A/3B) may also be fabricated as part of an integrated circuit. The corresponding integrated circuits may be utilized in a typical processor system. For example, FIG. 8 illustrates a simplified processor system 500 which includes a memory device 100/200 including the self-aligned Cu- or Ag-doped chalcogenide memory access devices 20, in accordance with any of the above described embodiments. A processor system, such as a computer system, generally comprises a central processing unit (CPU) 510, such as a microprocessor, a digital signal processor, or other programmable digital logic device, which communicates with an input/output (I/O) device 520 over a bus 590. The memory device 100/200 communicates with the CPU 510 over bus 590, typically through a memory controller. In the case of a computer system, the processor system 500 may include peripheral devices such as removable media devices 550 (e.g., a CD-ROM drive or DVD drive) which communicate with CPU 510 over the bus 590. If desired, the memory device 100/200 may be combined with the processor, for example CPU 510, as a single integrated circuit. [0034] The above description and drawings should only be considered illustrative of example embodiments that achieve the features and advantages described herein. Modifications and substitutions to specific process conditions and structures can be made. Accordingly, the claimed invention is not to be considered as being limited by the foregoing description and drawings, but is only limited by the scope of the appended claims. |
A short-channel metal oxide semiconductor varactor may include a source region of a first polarity having a source via contact. The varactor may further include a drain region of the first polarity having a drain via contact. The varactor may further include a channel region of the first polarity between the source region and the drain region. The channel region may include a gate. The varactor may further include at least one self-aligned contact (SAC) on the gate and between the source via contact and the drain via contact. |
CLAIMSWHAT IS CLAIMED IS:1. A short-channel metal oxide semiconductor varactor, comprising:a source region of a first polarity, including a source via contact;a drain region of the first polarity, including a drain via contact;a channel region of the first polarity between the source region and the drain region, the channel region including a gate; andat least one self-aligned contact (SAC) on the gate and between the source via contact and the drain via contact.2. The short-channel metal oxide semiconductor varactor of claim 1, in which the at least one SAC comprises multiple self-aligned contacts (MSACs).3. The short-channel metal oxide semiconductor varactor of claim 1, in which a shape of the at least one SAC is cylindrical or square shaped.4. The short-channel metal oxide semiconductor varactor of claim 1, in which the at least one SAC extends an entire length between the source via contact and the drain via contact.5. The short-channel metal oxide semiconductor varactor of claim 1, in which the at least one SAC comprises copper (Cu), tungsten (W), nickel (Ni), aluminum (Al), gold (Au), silver (Ag), titanium (Ti), or graphene.6. The short-channel metal oxide semiconductor varactor of claim 1, in which a width of the at least one SAC is greater than a length of the gate.7. The short-channel metal oxide semiconductor varactor of claim 1, integrated into a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a hand-held personal communication systems (PCS) unit, a portable data unit, and/or a fixed location data unit.8. A radio frequency (RF) front end module, comprising:a filter, comprising a die, a substrate supporting the die, a molding compound surrounding the die, a short-channel metal oxide semiconductor (MOS) varactor, comprising a source region of a first polarity including a source via contact, a drain region of the first polarity including a drain via contact, a channel region of the first polarity between the source region and the drain region, the channel region including a gate, and at least one self-aligned contact (SAC) on the gate and between the source via contact and the drain via contact; andan antenna coupled to an output of the filter.9. The RF front end module of claim 8, in which the at least one SAC comprises multiple self-aligned contacts (MSACs).10. The RF front end module of claim 8, in which a shape of the at least one SAC is cylindrical or square shaped.11. The RF front end module of claim 8, in which the at least one SAC extends an entire length between the source via contact and the drain via contact.12. The RF front end module of claim 8, in which the at least one SAC comprises copper (Cu), tungsten (W), nickel (Ni), aluminum (Al), gold (Au), silver (Ag), titanium (Ti), or graphene.13. The RF front end module of claim 8, integrated into a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a hand-held personal communication systems (PCS) unit, a portable data unit, and/or a fixed location data unit.14. 
A method of fabricating a short-channel metal oxide semiconductor varactor, comprising:coupling a source via contact to a source region of a first polarity;coupling a drain via contact to a drain region of the first polarity; andfabricating at least one self-aligned contact (SAC) on a gate on a channel region of the first polarity, the at least one SAC being disposed between the source via contact and the drain via contact.15. The method of claim 14, in which the short-channel metal oxide semiconductor varactor is integrated into a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a hand-held personal communication systems (PCS) unit, a portable data unit, and/or a fixed location data unit.16. A short-channel metal oxide semiconductor varactor, comprising:a source region of a first polarity, including a source via contact;a drain region of the first polarity, including a drain via contact;a channel region of the first polarity between the source region and the drain region, the channel region including a gate; andmeans for contacting the gate, between the source via contact and the drain via contact.17. The short-channel metal oxide semiconductor varactor of claim 16, in which the means for contacting extends an entire length between the source via contact and the drain via contact.18. The short-channel metal oxide semiconductor varactor of claim 16, in which a width of the means for contacting is greater than a length of the gate.19. The short-channel metal oxide semiconductor varactor of claim 16, in which a length of the gate is less than 50 nanometers.20. The short-channel metal oxide semiconductor varactor of claim 16, integrated into a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a hand-held personal communication systems (PCS) unit, a portable data unit, and/or a fixed location data unit. |
SELF-ALIGNED CONTACT (SAC) ON GATE FOR IMPROVING METAL OXIDE SEMICONDUCTOR (MOS) VARACTOR QUALITY FACTORCROSS-REFERENCE TO RELATED APPLICATION[0001] The present application claims the benefit of U.S. Provisional Patent Application No. 62/522,002, filed on June 19, 2017, and titled "SELF-ALIGNED CONTACT (SAC) ON GATE FOR IMPROVING METAL OXIDE SEMICONDUCTOR (MOS) VARACTOR QUALITY FACTOR," the disclosure of which is expressly incorporated by reference herein in its entirety.BACKGROUNDField[0002] Aspects of the present disclosure relate to semiconductor devices, and more particularly to self-aligned gate contacts for improving metal oxide semiconductor (MOS) varactor quality (Q)-factor.Background[0003] Mobile radio frequency (RF) chip designs (e.g., mobile RF transceivers) are complicated by the use of passive devices, which directly affect analog/RF performance considerations, including mismatch, noise, and other performance considerations. Passive devices may involve high performance inductor and capacitor components. For example, an RF module (e.g., an RF front end (RFFE) module) may include inductors (L) and capacitors (C) arranged to form filters and other RF devices. Arrangements of these passive devices may be selected to improve device performance, while suppressing unwanted noise (e.g., artificial harmonics) to support advanced RF applications.[0004] The design of mobile RF transceivers may include the use of a voltage-controlled capacitance and/or a tunable capacitor (e.g., a varactor) for advanced RF applications. For example, tunable capacitors may provide RF and impedance matching in RF circuits of advanced RF applications. In these advanced RF technologies, short gate length (Lg) or short-channel MOS varactors having a high quality (Q)-factor are desired. Unfortunately, short-channel/gate length MOS varactors may exhibit an undesirable quality factor due to increased gate resistance.SUMMARY[0005] A short-channel metal oxide semiconductor varactor may include a source region of a first polarity having a source via contact. The varactor may further include a drain region of the first polarity having a drain via contact. The varactor may further include a channel region of the first polarity between the source region and the drain region. The channel region may include a gate. The varactor may further include at least one self-aligned contact (SAC) on the gate and between the source via contact and the drain via contact.[0006] A radio frequency (RF) front end module may include a filter having a die. A substrate may support the die. A molding compound may surround the die. The RF front end module may further include a short-channel metal oxide semiconductor (MOS) varactor including a source region of a first polarity having a source via contact. The varactor may further include a drain region of the first polarity having a drain via contact. The varactor may further include a channel region of the first polarity between the source region and the drain region. The channel region may include a gate. The varactor may further include at least one self-aligned contact (SAC) on the gate and between the source via contact and the drain via contact. An antenna may be coupled to an output of the filter.[0007] A method of fabricating a short-channel metal oxide semiconductor varactor may include coupling a source via contact to a source region of a first polarity. The method may further include coupling a drain via contact to a drain region of the first polarity. 
The method may further include fabricating at least one self-aligned contact (SAC) on a gate on a channel region of the first polarity. The at least one SAC may be disposed between the source via contact and the drain via contact.[0008] A short-channel metal oxide semiconductor varactor may include a source region of a first polarity having a source via contact. The varactor may further include a drain region of the first polarity having a drain via contact. The varactor may further include a channel region of the first polarity between the source region and the drain region. The channel region may include a gate. The varactor may further include means for contacting the gate. The gate contacting means may be between the source via contact and the drain via contact.[0009] Additional features and advantages of the disclosure will be described below. It should be appreciated by those skilled in the art that this disclosure may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the teachings of the disclosure as set forth in the appended claims. The novel features, which are believed to be characteristic of the disclosure, both as to its organization and method of operation, together with further objects and advantages, will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present disclosure.BRIEF DESCRIPTION OF THE DRAWINGS[0010] For a more complete understanding of the present disclosure, reference is now made to the following description taken in conjunction with the accompanying drawings.[0011] FIGURE 1 is a schematic diagram of a radio frequency (RF) front end (RFFE) module employing passive devices.[0012] FIGURE 2 is a schematic diagram of a radio frequency (RF) front end (RFFE) module employing passive devices for a chipset.[0013] FIGURE 3 illustrates a cross-sectional view of a metal oxide semiconductor field-effect transistor (MOSFET) device.[0014] FIGURE 4 illustrates a fin field-effect transistor (FinFET).[0015] FIGURE 5 illustrates a conventional metal oxide semiconductor (MOS) varactor. 
[0016] FIGURE 6A illustrates a cross-sectional view of a short-channel metal oxide semiconductor (MOS) varactor, according to aspects of the present disclosure.[0017] FIGURE 6B illustrates a top view of a short-channel metal oxide semiconductor (MOS) varactor, according to aspects of the present disclosure.[0018] FIGURES 7A-7C illustrate top views of various exemplary configurations of short-channel metal oxide semiconductor (MOS) varactors, according to aspects of the present disclosure.[0019] FIGURE 8 illustrates a top view of a short-channel metal oxide semiconductor (MOS) varactor and cross-sectional views of the MOS varactor at various cross-sections, according to aspects of the present disclosure.[0020] FIGURE 9 illustrates a method for fabricating a short-channel metal oxide semiconductor (MOS) varactor, according to aspects of the present disclosure.[0021] FIGURE 10 is a block diagram showing an exemplary wireless communication system in which an aspect of the disclosure may be advantageously employed.[0022] FIGURE 11 is a block diagram illustrating a design workstation used for circuit, layout, and logic design of a fin-based structure according to one configuration.DETAILED DESCRIPTION[0023] The detailed description set forth below, in connection with the appended drawings, is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the various concepts. It will be apparent to those skilled in the art, however, that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring such concepts.[0024] As described herein, the use of the term "and/or" is intended to represent an "inclusive OR", and the use of the term "or" is intended to represent an "exclusive OR". As described herein, the term "exemplary" used throughout this description means "serving as an example, instance, or illustration," and should not necessarily be construed as preferred or advantageous over other exemplary configurations. As described herein, the term "coupled" used throughout this description means "connected, whether directly or indirectly through intervening connections (e.g., a switch), electrical, mechanical, or otherwise," and is not necessarily limited to physical connections. Additionally, the connections can be such that the objects are permanently connected or releasably connected. The connections can be through switches. As described herein, the term "proximate" used throughout this description means "adjacent, very near, next to, or close to." As described herein, the term "on" used throughout this description means "directly on" in some configurations, and "indirectly on" in other configurations.[0025] Mobile radio frequency (RF) chip designs (e.g., mobile RF transceivers) are complicated by the use of passive devices, which directly affect analog/RF performance considerations, including mismatch, noise, and other performance considerations. Passive devices may involve high performance inductor and capacitor components. For example, an RF module (e.g., an RF front end (RFFE) module) may include inductors (L) and capacitors (C) arranged to form diplexers, triplexers, multiplexers, low pass filters, balun filters, and/or notch filters to prevent high order harmonics. 
Arrangements of these passive devices may be selected to improve device performance, while suppressing unwanted noise (e.g., artificial harmonics) to support advanced RF applications, such as carrier aggregation.[0026] Capacitors are passive elements used in integrated circuits for storing an electrical charge. Capacitors are often made using plates or structures that are conductive with an insulating material between the plates. The amount of storage, or capacitance, for a given capacitor is contingent upon the materials used to make the plates and the insulator, the area of the plates, and the spacing between the plates. The insulating material is often a dielectric material.[0027] The design of mobile RF transceivers may include the use of a voltage-controlled capacitance and/or a tunable capacitor (e.g., a varactor), for example, to provide a voltage-controlled oscillator. Varactors may also be known as variable capacitance diodes. For example, tunable capacitors may provide RF and impedance matching in RF circuits. A complementary metal oxide semiconductor (CMOS) tunable capacitor may be tuned by varying a bias across a dielectric in the capacitor, which enables variation of the capacitance.[0028] In advanced RF circuits, a MOS varactor may provide a tunable capacitor. This MOS varactor is an example of an electrical device used to store energy (e.g., charge) in an electrical field between closely spaced capacitor plates according to a capacitance value. This capacitance value provides a measure of the amount of charge stored by the capacitor at a certain voltage. In addition to their charge storing capability, capacitors are also useful as electronic filters because they enable differentiation between high frequency and low frequency signals. In a conventional varactor, a plate width modulates to vary an electric field formed between the capacitor plates. This varactor provides an electrically controllable capacitance that can be used in tuned circuits.[0029] While the use of varactors is advantageous in many applications (e.g., due to small size and reduced cost), varactors generally exhibit a lower quality (Q)-factor and non-linearity because varactors are asymmetric devices. That is, the quality factor of a varactor is a significant parameter. The quality factor may be defined as the imaginary part of a varactor impedance divided by the real part of the varactor impedance. Thus, the quality factor of the varactor is improved by reducing the real part of the varactor impedance (e.g., its parasitic resistance).[0030] The real part of the varactor impedance is controlled by two factors: (1) a gate resistance, due to the contact-to-gate resistance; and (2) a channel resistance of the varactor channel between diffusion regions. The channel resistance may be reduced by reducing the channel length (e.g., from 150 nanometers (nm) to 80 nm). In current process nodes (e.g., 28 nm channel length), however, further reduction in the channel length actually decreases the quality factor relative to, for example, the 80 nm channel length. That is, reducing the channel resistance is not possible by further reduction of the channel length, because this reduction increases the gate resistance, which becomes the primary component of the parasitic resistance.[0031] Parasitic resistance of a MOS varactor, therefore, is generally controlled by an effective gate resistance (Rgate). 
For example, the quality factor of a device with a short gate length (e.g., Lg = 28 nm) may be lower than that of a longer gate device (e.g., Lg = 80 nm). This discrepancy may be caused by an increased gate resistance due to a reduced channel length, which may equal the gate length. In advanced RF technologies, short-channel (or short gate length (Lg)) MOS varactors having a high quality factor are desired; however, these are not easily obtained due to increased gate resistance. This gate resistance may be affected by gate contacts from interconnect layers used to connect back-end-of-line interconnects to the gate of an RF integrated circuit.[0032] Interconnect layers are often used to connect different devices together on an integrated circuit. Semiconductor processes for fabricating integrated circuits are often divided into three parts: a front-end-of-line (FEOL), a middle-of-line (MOL) and a back-end-of-line (BEOL). Front-end-of-line processes include wafer preparation, isolation, well formation, gate patterning, spacers, and dopant implantation. A middle-of-line process includes gate and terminal contact formation. Back-end-of-line processes include forming interconnects and dielectric layers for coupling to the FEOL devices. The gate and terminal contact formation of the middle-of-line process, however, may have a detrimental effect on the gate resistance due to the interconnect layers used to connect different devices.[0033] The interconnect layers may include front-end-of-line interconnect layers, middle-of-line interconnect layers, and back-end-of-line interconnect layers. As described herein, the middle-of-line interconnect layers may refer to the conductive interconnects for connecting a first back-end-of-line interconnect layer (e.g., metal 1 (M1)) to the oxide diffusion (OD) layer of an integrated circuit, as well as for connecting M1 to the active devices of the integrated circuit. The middle-of-line interconnect layers for connecting M1 to the source/drain layer of an integrated circuit may be referred to as contact to active (CA) trench contacts. The middle-of-line interconnect layer for connecting M1 to the gates of an integrated circuit may be referred to as contact to open (CB) contacts.[0034] According to aspects of the present disclosure, conventional gate contacts are replaced with a long self-aligned contact (LSAC) or multiple self-aligned contacts (MSACs) on an active gate to form a short-channel metal oxide semiconductor varactor. These self-aligned contacts may reduce an effective gate resistance by more than a factor of sixteen (e.g., 16 times). This improvement results when, for example, a long self-aligned contact equals two self-aligned contacts (e.g., 1 LSAC = 2 SACs). This reduction in the effective gate resistance may significantly improve a MOS varactor quality factor (e.g., by more than 15.6 times).[0035] FIGURE 1 is a schematic diagram of a radio frequency (RF) front end (RFFE) module 100 that may include varactors. The RF front end module 100 includes power amplifiers 102, duplexer/filters 104, and a radio frequency (RF) switch module 106. The power amplifiers 102 amplify signals to a certain power level for transmission. The duplexer/filters 104 filter the input/output signals according to a variety of different parameters, including frequency, insertion loss, rejection, or other like parameters. 
In addition, the RF switch module 106 may select certain portions of the input signals to pass on to the rest of the RF front end module 100.[0036] The radio frequency (RF) front end module 100 also includes tuner circuitry 112 (e.g., first tuner circuitry 112A and second tuner circuitry 112B), a diplexer 200, a capacitor 116, an inductor 118, a ground terminal 115, and an antenna 114. The tuner circuitry 112 (e.g., the first tuner circuitry 112A and the second tuner circuitry 112B) includes components such as a tuner, a portable data entry terminal (PDET), and a housekeeping analog-to-digital converter (HKADC). The tuner circuitry 112 may perform impedance tuning (e.g., a voltage standing wave ratio (VSWR) optimization) for the antenna 114. The RF front end module 100 also includes a passive combiner 108 coupled to a wireless transceiver (WTR) 120. The passive combiner 108 combines the detected power from the first tuner circuitry 112A and the second tuner circuitry 112B. The wireless transceiver 120 processes the information from the passive combiner 108 and provides this information to a modem 130 (e.g., a mobile station modem (MSM)). The modem 130 provides a digital signal to an application processor (AP) 140.[0037] As shown in FIGURE 1, the diplexer 200 is between the tuner component of the tuner circuitry 112 and the capacitor 116, the inductor 118, and the antenna 114. The diplexer 200 may be placed between the antenna 114 and the tuner circuitry 112 to provide high system performance from the RF front end module 100 to a chipset including the wireless transceiver 120, the modem 130, and the application processor 140. The diplexer 200 also performs frequency domain multiplexing on both high band frequencies and low band frequencies. After the diplexer 200 performs its frequency multiplexing functions on the input signals, the output of the diplexer 200 is fed to an optional LC (inductor/capacitor) network including the capacitor 116 and the inductor 118. The LC network may provide extra impedance matching components for the antenna 114, when desired. Then a signal with the particular frequency is transmitted or received by the antenna 114. Although a single capacitor and inductor are shown, multiple components are also contemplated.[0038] FIGURE 2 is a schematic diagram of a wireless local area network (WLAN) (e.g., WiFi) module 270 including an RF front end (RFFE) module 250 for a chipset 260 to provide, for example, carrier aggregation. The WiFi module 270 includes the first diplexer 200-1 communicably coupling an antenna 292 to a wireless local area network module (e.g., WLAN module 272). The RF front end module 250 includes the second diplexer 200-2 communicably coupling an antenna 294 to the wireless transceiver (WTR) 220 through a duplexer 280. The wireless transceiver 220 and the WLAN module 272 of the WiFi module 270 are coupled to a modem (MSM, e.g., baseband modem) 230 that is powered by a power supply 252 through a power management integrated circuit (PMIC) 256. The chipset 260 also includes capacitors 262 and 264, as well as an inductor(s) 266 to provide signal integrity. The PMIC 256, the modem 230, the wireless transceiver 220, and the WLAN module 272 each include capacitors (e.g., 258, 232, 222, and 274) and operate according to a clock 254.[0039] The geometry and arrangement of the various inductor and capacitor components in the chipset 260 may reduce the electromagnetic coupling between the components. 
Capacitors are passive elements used in integrated circuits for storing an electrical charge. The design of RF front end module 100 may include the use of a voltage-controlled capacitance and/or a tunable capacitor (e.g., a varactor), for example.[0040] FIGURE 3 illustrates a cross-sectional view of a metal oxide semiconductor field-effect transistor (MOSFET) device 300. The MOSFET device 300 may have four input terminals. The four inputs are a source 302, a gate 304, a drain 306, and a substrate 308. The source 302 and the drain 306 may be fabricated as the wells 202 and 204 in the substrate 308, or may be fabricated as areas above the substrate 308, or as part of other layers on the substrate 308. Such other structures may be a fin or other structure that protrudes from a surface of the substrate 308. Further, the substrate 308 may be the substrate of the die, but the substrate 308 may also be one or more of the layers (e.g., 210-214) that are coupled to the substrate 308. [0041] The MOSFET device 300 is a unipolar device, as electrical current is produced by only one type of charge carrier (e.g., either electrons or holes) depending on the type of MOSFET. The MOSFET device 300 operates by controlling the amount of charge carriers in the channel 310 between the source 302 and the drain 306. A voltage Vsource 312 is applied to the source 302, a voltage Vgate 314 is applied to the gate 304, and a voltage Vdrain 316 is applied to the drain 306. A separate voltage Vsubstrate 318 may also be applied to the substrate 308, although the voltage Vsubstrate 318 may be coupled to one of the voltage Vsource 312, the voltage Vgate 314 or the voltage Vdrain 316.[0042] To control the charge carriers in the channel 310, the voltage Vgate 314 creates an electric field in the channel 310 when the gate 304 accumulates charges. The opposite charge to that accumulating on the gate 304 begins to accumulate in the channel 310. The gate insulator 320 insulates the charges accumulating on the gate 304 from the source 302, the drain 306, and the channel 310. The gate 304 and the channel 310, with the gate insulator 320 in between, create a capacitor, and as the voltage Vgate 314 increases, the charge carriers on the gate 304, acting as one plate of this capacitor, begin to accumulate. This accumulation of charges on the gate 304 attracts the opposite charge carriers into the channel 310. Eventually, enough charge carriers are accumulated in the channel 310 to provide an electrically conductive path between the source 302 and the drain 306. This condition may be referred to as opening the channel of the FET.[0043] By changing the voltage Vsource 312 and the voltage Vdrain 316, and their relationship to the voltage Vgate 314, the amount of voltage applied to the gate 304 that opens the channel 310 may vary. For example, the voltage Vsource 312 is usually of a higher potential than that of the voltage Vdrain 316. Making the voltage differential between the voltage Vsource 312 and the voltage Vdrain 316 larger will change the amount of the voltage Vgate 314 used to open the channel 310. Further, a larger voltage differential will change the amount of electromotive force moving charge carriers through the channel 310, creating a larger current through the channel 310.[0044] The gate insulator 320 material may be silicon oxide, or may be a dielectric or other material with a different dielectric constant (k) than silicon oxide. 
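The role of the dielectric constant noted in [0044] can be made concrete with the standard parallel-plate approximation for the capacitor formed by the gate 304, the gate insulator 320, and the channel 310. The relations below are textbook first-order expressions supplied for illustration; they are not formulas stated in the disclosure:

$$C_{gate} = \frac{k\,\varepsilon_0\,A}{t_{ox}}, \qquad Q_{gate} = C_{gate}\,V_{gate}$$

where $k$ is the relative dielectric constant of the gate insulator 320, $\varepsilon_0$ is the vacuum permittivity, $A$ is the gate area, and $t_{ox}$ is the insulator thickness. A higher $k$ or a thinner insulator raises $C_{gate}$, so less gate voltage is needed to accumulate the charge that opens the channel 310, consistent with the qualitative description above.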
Further, the gate insulator 320 may be a combination of materials or different layers of materials. For example, the gate insulator 320 may be Aluminum Oxide, Hafnium Oxide, Hafnium Oxide Nitride, Zirconium Oxide, or laminates and/or alloys of these materials. Other materials for the gate insulator 320 may be used without departing from the scope of the present disclosure.[0045] By changing the material for the gate insulator 320, and the thickness of the gate insulator 320 (e.g., the distance between the gate 304 and the channel 310), the amount of charge on the gate 304 to open the channel 310 may vary. A symbol 322 showing the terminals of the MOSFET device 300 is also illustrated. For N-channel MOSFETs (using electrons as charge carriers in the channel 310), an arrow is applied to the substrate 308 terminal in the symbol 322 pointing away from the gate 304 terminal. For P-channel MOSFETs (using holes as charge carriers in the channel 310), an arrow is applied to the substrate 308 terminal in the symbol 322 pointing toward the gate 304 terminal.[0046] The gate 304 may also be made of different materials. In some designs, the gate 304 is made from polycrystalline silicon, also referred to as polysilicon or poly, which is a conductive form of silicon. Although referred to as "poly" or "polysilicon" herein, metals, alloys, or other electrically conductive materials are contemplated as appropriate materials for the gate 304.[0047] In some MOSFET designs, a high-k value material may be desired in the gate insulator 320, and in such designs, other conductive materials may be employed. For example, and not by way of limitation, a "high-k metal gate" design may employ a metal, such as copper, for the gate 304 terminal. Although referred to as "metal," polycrystalline materials, alloys, or other electrically conductive materials are contemplated as appropriate materials for the gate 304 as described in the present disclosure.[0048] To interconnect to the MOSFET device 300, or to interconnect to other devices in the die (e.g., semiconductor substrate), interconnect traces or layers are used. These interconnect traces may be in one or more of the layers (e.g., 210-214), or may be in other layers of the substrate 308 (or a die). These interconnects may affect a gate resistance, as described herein. [0049] FIGURE 4 illustrates a fin-structured FET (FinFET) 400 that operates in a similar fashion to the MOSFET device 300 described with respect to FIGURE 3. A fin 410 in the FinFET 400, however, is grown or otherwise coupled to the substrate 308. The substrate 308 may be a semiconductor substrate or other like supporting layer, for example, comprised of an oxide layer, a nitride layer, a metal oxide layer or a silicon layer. The fin 410 includes the source 302 and the drain 306. A gate 304 is disposed on the fin 410 and on the substrate 308 through a gate insulator 320. A FinFET transistor is a 3D fin-based metal oxide semiconductor field-effect transistor (MOSFET). As a result, the physical size of the FinFET 400 may be smaller than the MOSFET device 300 structure shown in FIGURE 3. This reduction in physical size allows for more devices per unit area on the die.[0050] FIGURE 5 illustrates a conventional metal oxide semiconductor (MOS) varactor 500. In advanced RF circuits, the MOS varactor 500 may provide a tunable capacitor. The MOS varactor 500 may include a source region 512, a drain region 514, and a channel 504 formed between the source region 512 and the drain region 514 in a substrate 502. 
A gate 510 is on the channel 504, and a gate dielectric (not shown) may be between the gate 510 and the channel 504. Middle-of-line trench contacts 520 to the source region 512 and the drain region 514 are also shown. In addition, the MOS varactor 500 includes a dielectric layer 506.[0051] This MOS varactor 500 is an example of an electrical device used to store energy (e.g., charge) in an electrical field between closely spaced capacitor plates (e.g., the gate 510 and the channel 504) according to a capacitance value. This capacitance value provides a measure of the amount of charge stored by the capacitor at a certain voltage. In the MOS varactor 500, a plate width (e.g., the channel 504) modulates (e.g., according to the source region 512 and the drain region 514) to vary an electric field formed between the capacitor plates (e.g., the gate 510 and the channel 504).[0052] The MOS varactor 500 is desirable because it provides an electrically controllable capacitance that can be used in RF circuits. While the use of varactors is advantageous in many applications (e.g., due to small size and reduced cost), varactors generally exhibit a lower quality (Q)-factor and non-linearity because varactors are asymmetric devices. One of the significant parameters for a MOS varactor is its quality factor. The quality factor may be defined as: quality factor = (imaginary part of varactor impedance) / (real part of varactor impedance) (1)[0053] As illustrated by equation (1), the quality factor of the conventional MOS varactor 500 may be improved by reducing the real part of the varactor impedance (e.g., its parasitic resistance). The real part of the varactor impedance is controlled by two factors: (1) a gate resistance, due to the contact-to-gate resistance; and (2) a channel resistance of the varactor channel between diffusion regions. The channel resistance may be reduced by reducing the channel length (e.g., from 150 nanometers (nm) to 80 nm). In current process nodes (e.g., 28 nm channel length), however, further reduction in the channel length actually decreases the quality factor relative to, for example, the 80 nm channel length. That is, the gate resistance becomes the primary component of the parasitic resistance because further reduction of the channel length increases the gate resistance.[0054] Parasitic resistance of a MOS varactor, therefore, is generally controlled by an effective gate resistance (Rgate). For example, the quality factor of a device with a short gate length (e.g., Lg = 28 nm) may be lower than that of a longer gate device (e.g., Lg = 80 nm). This discrepancy may be caused by an increased gate resistance due to a reduced channel length, which may equal the gate length. In advanced RF technologies, short-channel (or short gate length (Lg)) MOS varactors having a high quality factor are desired; however, these are not easily obtained due to increased gate resistance. This gate resistance may be affected by gate contacts from interconnect layers used to connect back-end-of-line interconnects to the gate of an RF integrated circuit.[0055] According to aspects of the present disclosure, conventional gate contacts are replaced with a long self-aligned contact (LSAC) or multiple self-aligned contacts (MSACs) on an active gate of a MOS varactor. These self-aligned contacts (SACs) may reduce an effective gate resistance by a factor of sixteen (e.g., by more than 16 times). 
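Equation (1) can be expanded by modeling the varactor as a series-RC one-port, a common first-order approximation assumed here for illustration rather than taken from the disclosure:

$$Z = \left(R_{gate} + R_{ch}\right) + \frac{1}{j\omega C}, \qquad Q = \frac{\left|\operatorname{Im}(Z)\right|}{\operatorname{Re}(Z)} = \frac{1}{\omega C \left(R_{gate} + R_{ch}\right)}$$

At a fixed frequency and capacitance, the quality factor is inversely proportional to the total series resistance. Once the gate resistance dominates, cutting it by a factor of sixteen raises the quality factor by nearly the same factor; the residual channel resistance would account for a slightly smaller net improvement, such as the 15.6 times figure cited above.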
For example, when a long self-aligned contact equals two self-aligned contacts (e.g., 1 LSAC = 2 SACs), the gate resistance is reduced by a factor of sixteen. This reduction in the effective gate resistance may significantly improve the MOS varactor quality factor (e.g., by more than 15.6 times).[0056] FIGURE 6A illustrates a cross-sectional view of a short-channel metal oxide semiconductor (MOS) varactor 600, according to aspects of the present disclosure. The MOS varactor 600 may include a source region 612, a drain region 614, and a channel 604 between the source region 612 and the drain region 614. In addition, a gate 610 is on the channel 604. In this example, a length (Lg) of the gate 610 may be less than 50 nm (e.g., 14 nm and/or 28 nm). Additionally, the source region 612 and the drain region 614 may each be doped with a first polarity (e.g., N++), and the channel 604 may also be doped with the first polarity (N+). The source region 612, the drain region 614, and the channel 604 may be formed in a substrate 602. A dielectric 606 (e.g., molding compound) may be deposited on the substrate 602.[0057] According to aspects of the present disclosure, a self-aligned gate contact 640 (e.g., at least one self-aligned contact (SAC)) may be formed through the dielectric 606 to couple to the gate 610. The self-aligned gate contact 640 may be a long self-aligned contact (LSAC) or multiple self-aligned contacts (MSACs). A shape of the self-aligned gate contact 640 may be cylindrical or square shaped, having a width greater than the length Lg of the gate 610. For example, the self-aligned gate contact 640 may be a self-aligned gate contact via between source and drain contacts in an active area.[0058] According to aspects of the present disclosure, the self-aligned contact may be composed of copper (Cu), tungsten (W), nickel (Ni), aluminum (Al), gold (Au), silver (Ag), titanium (Ti), and/or graphene.[0059] According to an aspect, the short-channel MOS varactor 600 may be formed according to a configuration similar to the FinFET 400 of FIGURE 4. Alternatively, the short-channel MOS varactor 600 may be formed as a planar-based device according to a configuration similar to the MOSFET device 300 of FIGURE 3, or as a gate all around (GAA) MOS varactor (not shown).[0060] FIGURE 6B illustrates a top view of the short-channel metal oxide semiconductor (MOS) varactor 600, according to aspects of the present disclosure. The gate 610 may extend over and contact multiple semiconductor fins 630. In addition, a trench source contact 620 and a trench drain contact 622 are also shown. In this arrangement, the self-aligned gate contact 640 is located between a source contact 624 (e.g., a source contact via) of the trench source contact 620 and a drain contact 626 (e.g., a drain contact via) of the trench drain contact 622. [0061] For example, a length of the self-aligned gate contact 640 may extend an entire length between the source contact 624 and the drain contact 626. The self-aligned gate contact 640 is between a first end gate contact 642 and a second end gate contact 644, which may be self-aligned contacts. In this example, the self-aligned gate contact 640 may be referred to as a long self-aligned contact. In addition, a width of the self-aligned gate contact 640 may be greater than a gate length Lg of the gate 610. 
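The benefit of contact count and placement along the gate finger can be sketched with the standard distributed-RC gate model. The per-configuration formulas are textbook first-order results, and the R_SHEET, W, and LG values are hypothetical; none of these numbers come from the disclosure:

```python
# Illustrative first-order model of effective gate resistance versus contact
# placement. Formulas are the standard distributed-RC approximations; the
# sheet resistance and dimensions below are assumed values, not disclosure data.

R_SHEET = 20.0   # gate sheet resistance, ohms/square (assumed)
W = 1.0e-6       # gate finger width, m (assumed)
LG = 28e-9       # gate length, m (a 28 nm node, per the examples above)

def finger_resistance(r_sheet: float, width: float, lg: float) -> float:
    """End-to-end gate finger resistance: R = R_sheet * (W / Lg)."""
    return r_sheet * width / lg

def effective_gate_resistance(r_finger: float, n_contacts: int) -> float:
    """Effective AC gate resistance seen by the distributed channel capacitance.

    First-order results for a uniform RC line:
      * one contact at one end of the finger -> R / 3
      * n >= 2 evenly spaced contacts        -> R / (12 * (n - 1)**2)
    """
    if n_contacts == 1:
        return r_finger / 3.0
    return r_finger / (12.0 * (n_contacts - 1) ** 2)

r = finger_resistance(R_SHEET, W, LG)
for n in (1, 2, 3, 5):
    print(f"{n} contact(s): R_eff = {effective_gate_resistance(r, n):.1f} ohms")
```

In this model the effective resistance falls roughly quadratically with the number of aligned contacts, which suggests why a long self-aligned contact spanning the gate, behaving like the many-contact limit, can reduce the effective gate resistance so sharply.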
This arrangement of the self-aligned gate contact 640 improves a quality factor of the MOS varactor 600 by reducing the contact-to-gate resistance due to the larger contact area of the self-aligned gate.[0062] FIGURES 7A-7C illustrate top views of various exemplary configurations of the short-channel metal oxide semiconductor (MOS) varactor 600, according to aspects of the present disclosure.[0063] FIGURE 7A illustrates a MOS varactor 700 with a second end gate contact 644. FIGURE 7B illustrates a MOS varactor 710 with a first end gate contact 642 and a second end gate contact 644. These gate contacts may be self-aligned and on opposite ends of the gate 610. FIGURE 7C illustrates a MOS varactor 720 with a similar configuration to the MOS varactor 600 shown in FIGURE 6B. In this arrangement, the self-aligned gate contact 640 is replaced with multiple self-aligned contacts (MSACs), including a first self-aligned gate contact 650 and a second self-aligned gate contact 652, between the source contact 624 and the drain contact 626. For example, the self-aligned gate contact 640 between the source contact 624 and the drain contact 626 may be a long self-aligned contact, as shown in FIGURE 6B, or multiple self-aligned contacts (e.g., 650, 652), as shown in FIGURE 7C. According to an aspect of the present disclosure, the long self-aligned contact may be equivalent to greater than four non-aligned contacts.[0064] FIGURE 8 illustrates a top view of the short-channel metal oxide semiconductor (MOS) varactor 600 of FIGURE 6B, and cross-sectional views of the MOS varactor 600 at various cross-sections, according to aspects of the present disclosure. As shown, the self-aligned gate contact 640 may be formed along the gate 610 at any location along an entire length between the source contact 624 and the drain contact 626. For example, many SACs (e.g., as many as the gate width allows) are placed on MOS varactor devices for reducing the gate resistance and increasing the quality factor. This is especially advantageous for RF circuits for 5G technology, such as millimeter wave applications (e.g., extremely high frequency (EHF) spectrum from 30 to 300 GHz).[0065] According to aspects of the present disclosure, self-aligned contacts (SACs) reduce an effective gate resistance by a factor of sixteen (e.g., more than 16 times). For example, a long self-aligned contact may be equal to two self-aligned contacts (e.g., 1 LSAC = 2 SACs). This reduction in the effective gate resistance may substantially improve the MOS varactor quality factor (e.g., by more than 15.6 times). Additionally, no other parts of the varactor or other radio frequency (RF) active devices are impacted.[0066] FIGURE 9 is a process flow diagram illustrating a method 900 of fabricating a short-channel metal oxide semiconductor (MOS) varactor, according to aspects of the present disclosure. Fabricating the MOS varactor includes forming a source region of a first polarity, a drain region of the first polarity, and a channel region of the first polarity. The method 900 begins at block 902, in which a source via contact is coupled to the source region. At block 904, a drain via contact is coupled to the drain region. For example, as shown in FIGURE 6A, the source and drain may be N doped, and the source and drain contacts may be on opposite sides of each other. In addition, an N doped channel region may be formed between the source region and the drain region. A gate of the MOS varactor may be on the channel region. 
At block 906, at least one self-aligned contact (SAC) is fabricated on the gate between the source via and the drain via. For example, as shown in FIGURES 6B and 7A-7C, there may be 1, 2, 4, or more than 4 self-aligned contacts.[0067] According to an aspect of the present disclosure, a short-channel MOS varactor is described. In one configuration, the short-channel MOS varactor includes means for contacting. The contacting means may be one of the described self-aligned contacts, long self-aligned contacts, or multiple self-aligned contacts. In another aspect, the aforementioned means may be any module or any apparatus or material configured to perform the functions recited by the aforementioned means.[0068] FIGURE 10 is a block diagram showing an exemplary wireless communication system 1000 in which an aspect of the disclosure may be advantageously employed. For purposes of illustration, FIGURE 10 shows three remote units 1020, 1030, and 1050 and two base stations 1040. It will be recognized that wireless communication systems may have many more remote units and base stations. Remote units 1020, 1030, and 1050 include IC devices 1025A, 1025C, and 1025B that include the disclosed varactor. It will be recognized that other devices may also include the disclosed varactor, such as the base stations, switching devices, and network equipment. FIGURE 10 shows forward link signals 1080 from the base stations 1040 to the remote units 1020, 1030, and 1050 and reverse link signals 1090 from the remote units 1020, 1030, and 1050 to the base stations 1040.[0069] In FIGURE 10, remote unit 1020 is shown as a mobile telephone, remote unit 1030 is shown as a portable computer, and remote unit 1050 is shown as a fixed location remote unit in a wireless local loop system. For example, the remote units may be a mobile phone, a hand-held personal communication systems (PCS) unit, a portable data unit such as a personal data assistant, a GPS enabled device, a navigation device, a set top box, a music player, a video player, an entertainment unit, a fixed location data unit such as meter reading equipment, or other devices that store or retrieve data or computer instructions, or combinations thereof. Although FIGURE 10 illustrates remote units according to the aspects of the disclosure, the disclosure is not limited to these exemplary illustrated units. Aspects of the disclosure may be suitably employed in many devices, which include the disclosed varactor.[0070] FIGURE 11 is a block diagram illustrating a design workstation used for circuit, layout, and logic design of an IC structure, such as the varactor disclosed above. A design workstation 1100 includes a hard disk 1101 containing operating system software, support files, and design software such as Cadence or OrCAD. The design workstation 1100 also includes a display 1102 to facilitate design of a circuit 1110 or a varactor structure 1112 including a CMOS transistor. A storage medium 1104 is provided for tangibly storing the design of the circuit 1110 or the varactor structure 1112. The design of the circuit 1110 or the varactor structure 1112 may be stored on the storage medium 1104 in a file format such as GDSII or GERBER. The storage medium 1104 may be a CD-ROM, DVD, hard disk, flash memory, or other appropriate device. 
Furthermore, the design workstation 1100 includes a drive apparatus 1103 for accepting input from or writing output to the storage medium 1104.[0071] Data recorded on the storage medium 1104 may specify logic circuit configurations, pattern data for photolithography masks, or mask pattern data for serial write tools such as electron beam lithography. The data may further include logic verification data such as timing diagrams or net circuits associated with logic simulations. Providing data on the storage medium 1104 facilitates the design of the circuit 1110 or the varactor structure 1112 by decreasing the number of processes for designing semiconductor wafers.[0072] For a firmware and/or software implementation, the methodologies may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. A machine-readable medium tangibly embodying instructions may be used in implementing the methodologies described herein. For example, software codes may be stored in a memory and executed by a processor unit. Memory may be implemented within the processor unit or external to the processor unit. As used herein, the term "memory" refers to types of long term, short term, volatile, nonvolatile, or other memory and is not to be limited to a particular type of memory or number of memories, or type of media upon which memory is stored.[0073] If implemented in firmware and/or software, the functions may be stored as one or more instructions or code on a computer-readable medium. Examples include computer-readable media encoded with a data structure and computer-readable media encoded with a computer program. Computer-readable media includes physical computer storage media. A storage medium may be an available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer; disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.[0074] In addition to storage on a computer-readable medium, instructions and/or data may be provided as signals on transmission media included in a communication apparatus. For example, a communication apparatus may include a transceiver having signals indicative of instructions and data. The instructions and data are configured to cause one or more processors to implement the functions outlined in the claims.[0075] Although the present disclosure and its advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the technology of the disclosure as defined by the appended claims. For example, relational terms, such as "above" and "below" are used with respect to a substrate or electronic device. Of course, if the substrate or electronic device is inverted, above becomes below, and vice versa. 
Additionally, if oriented sideways, above and below may refer to sides of a substrate or electronic device. Moreover, the scope of the present application is not intended to be limited to the particular configurations of the process, machine, manufacture, and composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding configurations described herein may be utilized according to the present disclosure. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.[0076] Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.[0077] The various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).[0078] The steps of a method or algorithm described in connection with the disclosure may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM, flash memory, ROM, EPROM, EEPROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. 
In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.[0079] In one or more exemplary designs, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store specified program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.[0080] The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. |
Exchanging data between a SIM device (180) and an application executed in a trusted platform (110, 120, 140), wherein the data to be exchanged is secured from unauthorized access. In one embodiment, the exchanging of data includes exchanging an encryption key via a trusted path within a computer system (100), and exchanging data encrypted with the encryption key, via an untrusted path within the computer system. |
Claims 1) A method comprising: exchanging data between a SIM device and an application executed in a trusted platform, wherein the data to be exchanged is secured from unauthorized access. 2) The method of claim 1, wherein the exchanging of data includes: exchanging an encryption key via a trusted path within a computer system; and exchanging data encrypted with the encryption key, via an untrusted path within the computer system. 3) The method of claim 2, wherein the exchanging the encryption key includes the application transmitting the encryption key to a protected section of memory within the computer system; and a SIM device accessing the encryption key from the protected section of memory. 4) The method of claim 2, wherein the exchanging the encryption key includes the application accessing the encryption key from the SIM device, the application accessing the encryption key via a trusted port of a chipset. 5) The method of claim 2, wherein the exchanging the encryption key includes exchanging multiple encryption keys, and the exchanging data includes exchanging separate units of data, with each unit of data separately encrypted with an encryption key selected from the multiple encryption keys. 6) The method of claim 2, wherein the exchanging data includes a host controller transmitting data from the SIM device to an unprotected section of memory. 7) The method of claim 6, wherein the exchanging data includes a driver transmitting data from the unprotected section of memory to the application. 8) The method of claim 7, wherein the host controller is a Universal Serial Bus (USB) host controller and the driver is a USB driver. 9) The method of claim 6, wherein the exchanging the encryption key includes the SIM device reading the encryption key from the protected section of memory via a trusted port of a chipset. 10) The method of claim 6, further including: the application decrypting the encrypted data using the encryption key. 11) The method of claim 7, further including, prior to exchanging the encryption key, the application authenticating the SIM device. 12) The method of claim 6, further including: exchanging a new encryption key based on a predetermined event selected from a group comprising of, each new transaction, passage of a predetermined period of time, and exchange of a predetermined amount of data. 13) A system comprising: a processor; a memory having a protected section and an unprotected section; a SIM device; and a chipset to exchange data between the SIM device and an application executed in a trusted platform, wherein the data to be exchanged is secured from unauthorized access. 14) The system of claim 13, wherein the exchange of data is to include an exchange of an encryption key via a trusted path within a computer system, and an exchange of data encrypted with the encryption key, via an untrusted path within the computer system. 15) The system of claim 14, wherein the exchange of the encryption key includes the application to transmit the encryption key to the protected section of memory, and the SIM device to access the encryption key from the protected section of memory. 16) The system of claim 13, wherein the exchange of the encryption key includes the application to access the encryption key from the SIM device, the application to access the encryption key via a trusted port of a chipset. 
17) The system of claim 13, wherein the exchange of the encryption key includes an exchange of multiple encryption keys, and the exchange of data includes an exchange of separate units of data, with each unit of data separately encrypted with an encryption key selected from the multiple encryption keys. 18) The system of claim 12, wherein the system further includes a host controller to transmit data from the SIM device to an unprotected section of memory. 19) The system of claim 16, wherein the system further includes a driver to transmit data from the unprotected section of memory to the application. 20) The system of claim 17, wherein the host controller is a Universal Serial Bus (USB) host controller and the driver is a USB driver. 21) The system of claim 14, wherein the SIM device is to read the encryption key from the protected section of memory via a trusted port of the chipset. 22) The system of claim 14, wherein the application is to decrypt the encrypted data using the encryption key. 23) The system of claim 17, wherein the application is to authenticate the SIM device prior to the exchange of the encryption key. 24) The system of claim 14, wherein a new encryption key is to be exchanged based on a predetermined event selected from a group comprising of, each new transaction, passage of a predetermined period of time, and exchange of a predetermined amount of data. |
Method and System To Provide A Trusted Channel Within A Computer System For A SIM Device Field of Invention [0001] The field of invention relates generally to trusted computer platforms; and, more specifically, to a method and apparatus to provide a trusted channel within a computer system for a SIM device. Background [0002] Trusted operating systems (OS) and platforms are a relatively new concept. In first-generation platforms, a trusted environment is created where applications can run trusted and tamper-free. The security is created through changes in the processor, chipset, and software to create an environment that cannot be seen by other applications (memory regions are protected) and cannot be tampered with (code execution flow cannot be altered). As a result, the computer system cannot be illegally accessed by anyone or compromised by viruses. [0003] In today's computing age, Subscriber Identity Modules (SIM), sometimes referred to as smart cards, are becoming more prevalent. A SIM is a credit-card-sized card that is typically used in Global System for Mobile communications (GSM) phones to store telephone account information and provide Authentication, Authorization and Accounting (AAA). The SIM cards also allow a user to use a borrowed or rented GSM phone as if it were their own. SIM cards can also be programmed to display custom menus on the phone's readout. In some cases, the SIM cards include a built-in microprocessor and memory that may be used for identification or financial transactions. When inserted into a reader, the SIM is accessible to transfer data to and from the SIM. SIM cards may also be inserted into a reader attached to a computer system. [0004] When using a SIM card in a computer system, there is a need to securely access information from the SIM card in order to prevent accesses to the SIM from unauthorized software applications. Such accesses may be intended to learn certain SIM secrets or to break GSM authentication mechanisms and steal the services provided. Figures [0005] One or more embodiments are illustrated by way of example, and not limitation, in the Figures of the accompanying drawings, in which: [0006] Figure 1 illustrates a computer system capable of providing a trusted platform to protect selected applications and data from unauthorized access, according to one embodiment; and [0007] Figure 2 is a flow diagram describing a process of providing a trusted channel within a computer system for a SIM device, according to one embodiment. Detailed Description [0008] A method and system to provide a trusted channel within a computer system for a SIM device is described. In one embodiment, data is exchanged between an application being executed in a trusted platform and a SIM device, wherein the data exchanged is protected from unauthorized access. In one embodiment, an encryption key is exchanged via a trusted channel within a computer system. Data encrypted with the encryption key is exchanged via an untrusted channel within the computer system. [0009] In the following description, numerous specific details are set forth. However, it is understood that embodiments may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure the understanding of this description.
[0010] Reference throughout this specification to "one embodiment" or "an embodiment" indicates that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In addition, as described herein, a trusted platform, and components, units, or subunits thereof, are interchangeably referenced as protected or secured. Trusted Platform [0011] Fig. 1 illustrates a computer system, according to one embodiment, capable of providing a trusted platform to protect selected applications and data from unauthorized access. System 100 of the illustrated embodiment includes a processor 110, a chipset 120 connected to processor 110 via processor bus 130, a memory 140, and a SIM device 180 to access data on a SIM card 182. In alternative embodiments, additional processors and units may be included. [0012] Processor 110 may have various elements, which may include, but are not limited to, embedded key 116, page table (PT) registers 114 and cache memory (cache) 112. All or part of cache 112 may include, or be convertible to, private memory (PM) 160. Private memory is a memory with sufficient protections to prevent access to it by any unauthorized device (e.g., any device other than the associated processor 110) while activated as a private memory. Key 116 may be an embedded key to be used for encryption, decryption, and/or validation of various blocks of data and/or code. Alternatively, the key 116 may be provided on an alternative unit within system 100. PT registers 114 may be a table in the form of registers to identify which memory pages are to be accessible only by trusted code and which memory pages are not to be so protected. In one embodiment, the memory 140 may include system memory for system 100, and in one embodiment may be implemented as volatile memory commonly referred to as random access memory (RAM). In one embodiment, the memory 140 may contain a protected memory table 142, which defines which memory blocks (where a memory block is a range of contiguously addressable memory locations) in memory 140 are to be inaccessible to direct memory access (DMA) transfers. Since all accesses to memory 140 go through chipset 120, chipset 120 may check protected memory table 142 before permitting any DMA transfer to take place. In a particular operation, the memory blocks protected from DMA transfers by protected memory table 142 may be the same memory blocks restricted to protected processing by PT registers 114 in processor 110. The protected memory table 142 may alternatively be stored in a memory device of an alternative unit within system 100. In one embodiment, memory 140 also includes trusted software (S/W) monitor 144, which may monitor and control the overall trusted operating environment once the trusted operating environment has been established. In one embodiment, the trusted S/W monitor 144 may be located in memory blocks that are protected from DMA transfers by the protected memory table 142. [0016] Chipset 120 may be a logic circuit to provide an interface between processor 110, memory 140, SIM device 180 and other devices not shown.
In one embodiment, chipset 120 is implemented as one or more individual integrated circuits, but in other embodiments, chipset 120 may be implemented as a portion of a larger integrated circuit. Chipset 120 may include memory controller 122 to control accesses to memory 140. In addition, in one embodiment, the chipset 120 may have a SIM reader of the SIM device integrated on the chipset 120. In one embodiment, protected registers 126 are writable only by commands that may only be initiated by trusted microcode in processor 110. Trusted microcode is microcode whose execution may only be initiated by authorized instruction(s) and/or by hardware that is not controllable by unauthorized devices. In one embodiment, protected registers 126 hold data that identifies the locations of, and/or controls access to, protected memory table 142 and trusted S/W monitor 144. In one embodiment, protected registers 126 include a register to enable or disable the use of protected memory table 142 so that the DMA protections may be activated before entering a trusted operating environment and deactivated after leaving the trusted operating environment. Trusted Channel with SIM Device [0018] Fig. 2 is a flow diagram describing a process of providing a trusted channel within a computer system for a SIM device, according to one embodiment. As described herein, reference to a SIM device includes other types of related smart cards. The processes described in the flow diagram of Fig. 2 are described with reference to the system of Fig. 1, described above. In one embodiment, in process 202, an application 150 being executed in a trusted environment of the system 100 determines that information is to be accessed from a SIM device 180 of the system 100. The application 150 being executed in a trusted environment can be located in a protected memory, such as protected memory 160 of cache 112, or a protected section of memory 140. In one embodiment, the SIM device 180 includes a mechanism to ascertain that the accesses are coming from the application in a trusted environment that is running on the same platform that the SIM device is physically attached to, and not from some remotely executing application. [0020] In process 204, the application and the SIM device perform a mutual authentication to determine that the SIM device is the correct device from which the application is to receive data, or that the application is the correct application to which the SIM device is to send the data. The mutual authentication may be conducted via a variety of processes known in the art. In process 206, following the completion of the mutual authentication, in one embodiment, the application 150 transmits an encryption key to a protected section of memory 140, via a trusted channel with the memory device and corresponding PT entries held in the CPU. In one embodiment, the protected section of memory to store the encryption key is identifiable via the protected memory table 142. The encryption key provided by the application 150 to the protected section of memory 140 is generated by the application 150, and is applicable to one of several available encryption processes, such as the Data Encryption Standard (DES) or the Advanced Encryption Standard (AES). In one embodiment, the encryption key is generated via utilization of the key 116 of processor 110.
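Processes 206 and 208 amount to writing a key into a DMA-protected block that only a trusted path may read back. The toy Python sketch below models that gating; ProtectedMemory, the block number 0x40, and the via_trusted_port flag are illustrative stand-ins for memory 140, table 142, and the chipset's trusted port, not the patented mechanism itself:

    import os

    class ProtectedMemory:
        """Toy model of memory 140: a protected-memory table (cf. table 142)
        marks blocks that only a trusted path may read. Names are illustrative."""
        def __init__(self):
            self._blocks: dict[int, bytes] = {}
            self._protected: set[int] = set()

        def write_protected(self, block: int, data: bytes) -> None:
            # Mark the block protected and store the data in it.
            self._protected.add(block)
            self._blocks[block] = data

        def read(self, block: int, via_trusted_port: bool) -> bytes:
            # An untrusted access (e.g., a DMA transfer) to a protected block is refused.
            if block in self._protected and not via_trusted_port:
                raise PermissionError("protected block: trusted port required")
            return self._blocks[block]

    # Process 206: the application generates an AES key and stores it in a
    # protected block; process 208: the SIM device reads it via the trusted port.
    mem = ProtectedMemory()
    app_key = os.urandom(32)          # 256-bit AES key (DES would be legacy)
    mem.write_protected(block=0x40, data=app_key)
    sim_key = mem.read(block=0x40, via_trusted_port=True)
    assert sim_key == app_key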
[0023] In process 208, the SIM device 180 accesses the encryption key from the protected section of memory 140. In one embodiment, the SIM device accesses the encryption key via a trusted port 112 of the chipset 120, which is mapped to the protected section of memory 140. In one embodiment, the trusted port may support one of several platform bus protocols, including USB. In an alternative embodiment, the encryption key is provided by the SIM device, wherein the application accesses the encryption key from the SIM device via the trusted port of the chipset. In process 210, the SIM device 180 uses the encryption key to encrypt data to be sent to the application 150. In process 212, the encrypted packets are transferred from the SIM device 180 by a host controller 128 (e.g., a USB host controller) of the chipset to a regular area of memory (i.e., an unprotected section of memory 148), for example, an area of memory that is used to store data packets, such as USB data packets. In one embodiment, the encrypted packets are transmitted to the memory by the host controller via a regular port 120 of the chipset (i.e., an unprotected port), which maps to an unprotected section of memory 148. In one embodiment, the encrypted packets from the SIM device include a Message Authentication Code (MAC) to provide a level of integrity protection. [0026] In process 214, a driver (e.g., an unprotected USB driver) accesses the encrypted packets from the unprotected section of memory 148 and provides the encrypted packets to the application 150 being executed in the trusted environment. In process 216, the application 150 decrypts the encrypted packets to access the data from the SIM device, which has been securely transferred to the application via an untrusted path within the system 100. [0027] In one embodiment, new encryption keys may be exchanged based on predetermined events. For example, a new encryption key may be exchanged following one of, or a combination of, each new transaction (as defined based on implementation choice), the passage of a predetermined period of time, or the exchange of a predetermined amount of data. [0028] In another alternative embodiment, multiple encryption keys are exchanged between the application 150 and the SIM device 180, to be used for encrypted data exchanges between the SIM device 180 and the application 150. For example, a SIM device may include multiple data pipes (e.g., bulk-in, bulk-out, and default control pipes). For each of the data pipes of the SIM device, a separate encryption key may be used to protect the data exchanges. Alternatively, the separate data pipes may all use the same encryption key. In an alternative embodiment, the data packets may be transmitted from the SIM device to the application without the use of encryption. For example, the host controller 128 transmits the data from the SIM device to the protected section of memory 140 via the trusted port 112 of the chipset 120. A trusted driver would then access the data from the protected section of memory 140 and provide the data to the application 150 via a trusted path, without having the SIM data encrypted. The processes described above can be stored in the memory of a computer system as a set of instructions to be executed. In addition, the instructions to perform the processes described above could alternatively be stored on other forms of machine-readable media, including magnetic and optical disks.
For example, the processes described could be stored on machine-readable media, such as magnetic disks or optical disks, which are accessible via a disk drive (or computer-readable medium drive). Further, the instructions can be downloaded into a computing device over a data network in the form of a compiled and linked version. Alternatively, the logic to perform the processes as discussed above could be implemented in additional computer and/or machine-readable media, such as discrete hardware components, large-scale integrated circuits (LSIs), application-specific integrated circuits (ASICs), firmware such as electrically erasable programmable read-only memories (EEPROMs), and electrical, optical, acoustical and other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. In particular, as described herein, the SIM device is inclusive of smart card devices, including USB Chip/Smart Card Interface Devices (CCID). Furthermore, the architecture of the system as described herein is independent of any particular key exchange protocols that are used. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. |
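As a concrete illustration of processes 210 through 216 (encryption on the SIM side, transit over the untrusted path, and decryption plus integrity checking by the application), the following Python sketch assumes the third-party pyca/cryptography package and AES-256-GCM, whose authentication tag plays the role of the MAC noted above; it is a minimal sketch under those assumptions, not the patented implementation:

    # pip install cryptography   (assumed third-party dependency)
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    shared_key = os.urandom(32)   # stand-in for the key exchanged via the trusted path
    nonce = os.urandom(12)        # GCM nonce; must never repeat for a given key

    # SIM device side (process 210): encrypt the payload.
    packet = AESGCM(shared_key).encrypt(nonce, b"SIM account records", b"sim-to-app")

    # Processes 212-214: packet and nonce cross the untrusted path
    # (regular port, unprotected memory 148, ordinary USB driver) unchanged.

    # Application side (process 216): decrypt; a tampered packet raises InvalidTag.
    plaintext = AESGCM(shared_key).decrypt(nonce, packet, b"sim-to-app")
    assert plaintext == b"SIM account records"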
Some novel features pertain to an integrated device that includes a substrate, a first via, and a first bump pad. The first via traverses the substrate. The first via has a first via dimension. The first bump pad is on a surface of the substrate. The first bump pad is coupled to the first via. The first bump pad has a first pad dimension that is equal to or less than the first via dimension. In some implementations, the integrated device includes a second via and a second bump pad. The second via traverses the substrate. The second via has a second via dimension. The second bump pad is on the surface of the substrate. The second bump pad is coupled to the second via. The second bump pad has a second pad dimension that is equal to or less than the second via dimension. |
CLAIMS
WHAT IS CLAIMED IS:
1. An integrated device comprising: a substrate; a first via traversing the substrate, wherein the first via has a first via dimension; and a first bump pad on a surface of the substrate, the first bump pad coupled to the first via, wherein the first bump pad has a first pad dimension that is equal to or less than the first via dimension.
2. The integrated device of claim 1, further comprising: a second via traversing the substrate, wherein the second via has a second via dimension; and a second bump pad on the surface of the substrate, the second bump pad coupled to the second via, wherein the second bump pad has a second pad dimension that is equal to or less than the second via dimension.
3. The integrated device of claim 2, wherein a pitch between the first via and the second via is about 80 microns (μm) or less.
4. The integrated device of claim 2, wherein a pitch between the first via and the second via is about 125 microns (μm) or less.
5. The integrated device of claim 1, wherein the first bump pad is configured to couple to an interconnect of a die.
6. The integrated device of claim 1, wherein the first bump pad is a peripheral bump pad that is located near an edge of a die area of the substrate.
7. The integrated device of claim 1, wherein the first bump pad is configured to couple to a first bump from a die.
8. The integrated device of claim 7, wherein the first bump includes a first under bump metallization (UBM) layer, a first interconnect pillar, and a first solder ball.
9. The integrated device of claim 1, wherein the substrate comprises one of at least a dielectric, glass, ceramic, and/or silicon.
10. The integrated device of claim 1, wherein the integrated device is incorporated into at least one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, and/or a laptop computer.
11. A method for fabricating an integrated device, comprising: forming a substrate; forming a first via traversing the substrate, wherein the first via has a first via dimension; and forming a first bump pad on a surface of the substrate such that the first bump pad is coupled to the first via, wherein the first bump pad has a first pad dimension that is equal to or less than the first via dimension.
12. The method of claim 11, further comprising: forming a second via traversing the substrate, wherein the second via has a second via dimension; and forming a second bump pad on the surface of the substrate such that the second bump pad is coupled to the second via, wherein the second bump pad has a second pad dimension that is equal to or less than the second via dimension.
13. The method of claim 12, wherein a pitch between the first via and the second via is about 80 microns (μm) or less.
14. The method of claim 12, wherein a pitch between the first via and the second via is about 125 microns (μm) or less.
15. The method of claim 11, wherein the first bump pad is configured to couple to an interconnect of a die.
16. The method of claim 11, wherein the first bump pad is a peripheral bump pad that is located near an edge of a die area of the substrate.
17. The method of claim 11, wherein the first bump pad is configured to couple to a first bump from a die.
18. The method of claim 17, wherein the first bump includes a first under bump metallization (UBM) layer, a first interconnect pillar, and a first solder ball.
19. The method of claim 11, wherein the substrate comprises one of at least a dielectric, glass, ceramic, and/or silicon.
20. The method of claim 11, wherein the integrated device is incorporated into at least one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, and/or a laptop computer. |
SUBSTRATE COMPRISING IMPROVED VIA PAD PLACEMENT IN BUMP AREA CROSS-REFERENCE TO RELATED APPLICATIONS [0001] The present application claims priority to U.S. Patent Application No. 14/251,518, entitled "Substrate Comprising Improved Via Pad Placement in Bump Area," filed April 11, 2014, which claims priority to and benefit of U.S. Provisional Application No. 61/919,157, entitled "Substrate Comprising Improved Via Pad Placement in Bump Area", filed December 20, 2013, which applications are hereby expressly incorporated by reference herein. BACKGROUND Field [0002] Various features relate to a substrate that includes improved via pad placement in the bump area of the substrate. Background [0003] Current manufacturing techniques limit how close traces, vias, and/or via pads can be to each other. Because of these limitations in manufacturing techniques, dies and substrates have to be designed in a certain way. FIG. 1 illustrates how traces, vias and/or pads are implemented in current package substrates. Specifically, FIG. 1 illustrates a plan view (e.g., top view) of a package substrate 100 that includes a substrate 102, several bump pads (e.g., pads 104, 114), several traces (e.g., traces 106, 116) and several via pads (e.g., via pads 108, 118). A bump pad is an interconnect that is configured to couple to a bump (e.g., copper pillar) from a die. The substrate 102 also includes several vias which are not visible from the plan view because these vias are covered by the via pads. These vias are coupled to the via pads. As further shown in FIG. 1, the bump pads, the via pads and/or traces are arranged in the package substrate 100 along different rows and columns. In some implementations, the package substrate 100 is configured to couple to one or more dies (e.g., flip chip). [0004] Current manufacturing techniques create relatively large via pads (e.g., compared to the traces), which forces vias to be created towards the outer perimeter of a die coupling area of a package substrate. Moreover, current manufacturing techniques limit the pitch between traces, vias, bump pads and/or via pads. Because of these and other limitations in the manufacturing processes, a bump pad (e.g., pad 104) is coupled to a via pad (e.g., via pad 108) through a trace (e.g., trace 106). This design causes several problems. First, it creates an integrated circuit (IC) design that takes up a lot of real estate. Second, it creates performance issues, as the extra interconnect length (e.g., extra trace) can slow the electrical performance of the IC design. Third, adding additional interconnects (e.g., traces) creates a more complex IC design. [0005] FIG. 2 illustrates a profile view (e.g., side view) of the cross-section AA of the package substrate 100 of FIG. 1. As shown in FIG. 2, the first pad 104 (e.g., bump pad), the first trace 106, and the second pad 108 (e.g., via pad) are on a first surface of the substrate 102. The package substrate 100 also includes a first via 208 that traverses the substrate 102. The first pad 104 is coupled to the first trace 106. The first trace 106 is coupled to the second pad 108. The second pad 108 is coupled to the first via 208. FIG. 2 also illustrates that the third pad 114 (e.g., bump pad), the second trace 116, and the fourth pad 118 (e.g., via pad) are on the first surface of the substrate 102. The package substrate 100 also includes a second via 218 that traverses the substrate 102. The third pad 114 is coupled to the second trace 116.
The second trace 116 is coupled to the fourth pad 118. The fourth pad 118 is coupled to the second via 218. [0006] FIG. 3 illustrates how a flip chip may be coupled to a package substrate. As shown in FIG. 3, a flip chip 300 that includes a first bump 302 and a second bump 304 is coupled to the package substrate 100. The first bump 302 may include a first under bump metallization (UBM) layer, a first interconnect pillar (e.g., copper pillar), and a first solder ball. The second bump 304 may include a second under bump metallization (UBM) layer, a second interconnect pillar (e.g., copper pillar), and a second solder ball. The first bump 302 of the flip chip 300 is coupled to the first pad 104. The second bump 304 of the flip chip 300 is coupled to the third pad 114. As shown in FIG. 3, the configuration of the flip chip 300 and the package substrate 100 can create an unnecessarily large package substrate 100 and/or flip chip 300. For example, there is a lot of excess lateral space / real estate between the first bump 302 and the first via 208. [0007] Therefore, there is a need for an improved integrated device that is smaller and/or occupies less real estate. Ideally, such an integrated device will have better performance than current integrated devices. SUMMARY [0008] Various features, apparatus and methods described herein provide a package substrate that includes improved via pad placement in the bump area of the substrate. [0009] A first example provides an integrated device that includes a substrate, a first via, and a first bump pad. The first via traverses the substrate. The first via has a first via dimension. The first bump pad is on a surface of the substrate. The first bump pad is coupled to the first via. The first bump pad has a first pad dimension that is equal to or less than the first via dimension. [0010] According to an aspect, the integrated device includes a second via traversing the substrate, where the second via has a second via dimension. The integrated device also includes a second bump pad on the surface of the substrate, where the second bump pad is coupled to the second via, where the second bump pad has a second pad dimension that is equal to or less than the second via dimension. In some implementations, a pitch between the first via and the second via is about 80 microns (μm) or less. In some implementations, a pitch between the first via and the second via is about 125 microns (μm) or less. [0011] According to one aspect, the first bump pad is configured to couple to an interconnect of a die. [0012] According to an aspect, the first bump pad is a peripheral bump pad that is located near an edge of a die area of the substrate. [0013] According to one aspect, the first bump pad is configured to couple to a first bump from a die. In some implementations, the first bump includes a first under bump metallization (UBM) layer, a first interconnect pillar, and a first solder ball. [0014] According to an aspect, the substrate comprises one of at least a dielectric, glass, ceramic, and/or silicon. [0015] According to one aspect, the integrated device is incorporated into at least one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, and/or a laptop computer. [0016] A second example provides a method for fabricating an integrated device. The method forms a substrate.
The method forms a first via traversing the substrate, wherein the first via has a first via dimension. The method forms a first bump pad on a surface of the substrate such that the first bump pad is coupled to the first via, where the first bump pad has a first pad dimension that is equal to or less than the first via dimension. [0017] According to an aspect, the method further forms a second via traversing the substrate, wherein the second via has a second via dimension. The method forms a second bump pad on the surface of the substrate such that the second bump pad is coupled to the second via, where the second bump pad has a second pad dimension that is equal to or less than the second via dimension. In some implementations, a pitch between the first via and the second via is about 80 microns (μm) or less. In some implementations, a pitch between the first via and the second via is about 125 microns (μm) or less. [0018] According to one aspect, the first bump pad is configured to couple to an interconnect of a die. [0019] According to an aspect, the first bump pad is a peripheral bump pad that is located near an edge of a die area of the substrate. [0020] According to one aspect, the first bump pad is configured to couple to a first bump from a die. In some implementations, the first bump includes a first under bump metallization (UBM) layer, a first interconnect pillar, and a first solder ball. [0021] According to an aspect, the substrate comprises one of at least a dielectric, glass, ceramic, and/or silicon. [0022] According to one aspect, the integrated device is incorporated into at least one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, and/or a laptop computer. DRAWINGS [0023] Various features, nature and advantages may become apparent from the detailed description set forth below when taken in conjunction with the drawings in which like reference characters identify correspondingly throughout. [0024] FIG. 1 illustrates a plan view of a substrate. [0025] FIG. 2 illustrates a profile view of a substrate. [0026] FIG. 3 illustrates a profile view of a substrate and a die. [0027] FIG. 4 illustrates a plan view of a substrate. [0028] FIG. 5 illustrates a profile view of a substrate. [0029] FIG. 6 illustrates a profile view of a substrate and a die. [0030] FIG. 7 illustrates a plan view of a substrate. [0031] FIG. 8 illustrates a plan view of a portion of a substrate with several pitches shown. [0032] FIG. 9 illustrates a profile view of another substrate and a die. [0033] FIG. 10 illustrates a profile view of yet another substrate and a die. [0034] FIG. 11 illustrates a profile view of a die. [0035] FIG. 12 (comprising FIG. 12A, FIG. 12B, and FIG. 12C) illustrates a sequence for providing a substrate and a die. [0036] FIG. 13 illustrates a profile view of another substrate and a die. [0037] FIG. 14 illustrates a flow diagram of a method for providing a substrate. [0038] FIG. 15 illustrates a flow diagram of a modified semi-additive processing (mSAP) patterning process for manufacturing a substrate. [0039] FIG. 16 illustrates a sequence of a mSAP patterning process on a layer of a substrate. [0040] FIG. 17 illustrates a flow diagram of a semi-additive processing (SAP) patterning process for manufacturing a substrate. [0041] FIG. 18 illustrates a sequence of a SAP patterning process on a layer of a substrate. [0042] FIG.
19 illustrates a flow diagram of a conceptual plating process. [0043] FIG. 20 illustrates various electronic devices that may integrate an integrated device, substrate, and/or PCB described herein. DETAILED DESCRIPTION [0044] In the following description, specific details are given to provide a thorough understanding of the various aspects of the disclosure. However, it will be understood by one of ordinary skill in the art that the aspects may be practiced without these specific details. For example, circuits may be shown in block diagrams in order to avoid obscuring the aspects in unnecessary detail. In other instances, well-known circuits, structures and techniques may not be shown in detail in order not to obscure the aspects of the disclosure. Overview [0045] Some novel features pertain to an integrated device (e.g., semiconductor device, die package) that includes a substrate, a first via, and a first bump pad. The first via traverses the substrate. The first via has a first via lateral dimension. The first bump pad is on a surface of the substrate. The first bump pad is coupled to the first via. The first bump pad has a first pad lateral dimension that is equal to or less than the first via lateral dimension. In some implementations, the first bump pad is a peripheral bump pad that is located near an edge of a die area (e.g., flip chip area) of the substrate. In some implementations, the integrated device includes a second via and a second bump pad. The second via traverses the substrate. The second via has a second via lateral dimension. The second bump pad is on the surface of the substrate. The second bump pad is coupled to the second via. The second bump pad has a second pad lateral dimension that is equal to or less than the second via lateral dimension. In some implementations, a pitch between the first via and the second via is about 40 microns (μm) or more. In some implementations, a pitch between the first via and the second via is about 80 microns (μm) or less. In some implementations, a pitch between the first via and the second via is about 125 microns (μm) or less. In some implementations, a pitch between the first bump pad and the second bump pad is about 125 microns (μm) or less. In some implementations, a pitch is defined as a center to center distance between two neighboring interconnects. Examples of pitches are further described in FIG. 8. Exemplary Package Substrate Comprising Via Pad In Bump Area [0046] FIG. 4 illustrates a plan view (e.g., top view) of a package substrate 400 that includes a substrate 402 and several interconnects (e.g., first interconnects 408, 418). An interconnect may include traces, pads and/or vias. The interconnects 408 and 418 are pads that are located on a first surface of the substrate 402. In some implementations, the interconnects 408 and 418 are via pads and bump pads. The interconnects 408 and 418 are coupled to vias (e.g., through substrate vias) in the substrate 402. These vias are not visible from the plan view since the interconnects 408 and 418 are the same size (e.g., same lateral dimension) as the cross-section of the vias. In some implementations, the interconnects 408 and 418 may have a smaller cross-section than the cross-section of the vias. In such instances, these vias may be visible from a plan view. Examples of vias will be further described in FIG. 5. [0047] The interconnects 408 and 418 may be configured to couple to bumps (e.g., interconnect pillars) of a die (which will be further described below in FIG. 6).
In some implementations, the interconnects 408 and 418 are peripheral bump pads that are located near an edge of a die area 420 of the substrate. In some implementations, the die area 420 of the substrate 400 is a bump area of the substrate 400. In some implementations, the bump area of the substrate 400 is an area of the substrate that a die covers or is located above the substrate when a die is coupled to the substrate. In some implementations, the size of the via in the substrate is preserved while reducing the size of the via pads (e.g., reducing the pitch of via pads) that are coupled to vias near an edge and/or periphery of a die area of the substrate. [0048] Different implementations may use different materials for the substrate 402. In some implementations, the substrate 402 is one of at least silicon, glass, ceramic, and/or dielectric. In some implementations, the package substrate 400 is configured to couple to one or more dies (e.g., flip chip). FIG. 4 also illustrates a first bump area and a second bump area. In some implementations, a bump area is a region or portion of a substrate that a bump (e.g., interconnect pillar) from a die will couple to when a die is coupled to the substrate. In some implementations, the first bump area corresponds to the area of the interconnect 408 (e.g., bump pad). In some implementations, the second bump area corresponds to the area of the interconnect 418 (e.g., bump pad). [0049] As further shown in FIG. 4, the interconnects (e.g., pads, traces) are arranged in the package substrate 400 along different rows and columns. Different implementations may use different spacing and/or pitch between interconnects. In some implementations, a pitch between two neighboring / adjacent interconnects is about 125 microns (μm) or less. In some implementations, a pitch between two neighboring / adjacent interconnects is about 80 microns (μm) or less. In some implementations, a pitch between two neighboring / adjacent interconnects is about 40 microns (μm) or more. In some implementations, a pitch is defined as a center to center distance between two adjacent / neighboring interconnects (e.g., traces, vias and/or pads). In some implementations, a pitch is defined as a center to center distance between two adjacent / neighboring traces, vias and/or pads, where the adjacent / neighboring traces, vias and/or pads are in a same column of traces, vias, and/or pads. In some implementations, a pitch is defined as a center to center distance between two adjacent / neighboring traces, vias and/or pads, where the adjacent / neighboring traces, vias and/or pads are in a same row of traces, vias, and/or pads. [0050] Each of the interconnects (e.g., pads, traces) of the package substrate 400 has at least one dimension (e.g., width, length, diameter). In some implementations, a first dimension (e.g., width) of a trace is the same or less than a first dimension (e.g., diameter) of a via. In some implementations, a first dimension (e.g., width) of a pad (e.g., via pad, bump pad) is the same or less than a first dimension (e.g., diameter) of a via. [0051] It should be noted that for the same column of vias, the vias are located in alternating rows (e.g., non-adjacent rows) of vias. Similarly, it should be noted that for the same row of vias, the vias are located in alternating columns (e.g., non-adjacent columns) of vias. For example, for vias in a first column, these vias would be located in a first row, a third row, and/or a fifth row.
In another example, for vias in a first row, these vias would be located in a first column, a third column, and/or a fifth column. However, vias may be located in adjacent rows and/or columns of vias. [0052] It should be noted that for the same column of via pads, the via pads are located in alternating rows (e.g., non-adjacent rows) of via pads. Similarly, it should be noted that for the same row of via pads, the via pads are located in alternating columns (e.g., non-adjacent columns) of via pads. For example, for via pads in a first column, these via pads would be located in a first row, a third row, and/or a fifth row. In another example, for via pads in a first row, these via pads would be located in a first column, a third column, and/or a fifth column. However, via pads may be located in adjacent rows and/or columns of via pads. [0053] As shown in FIG. 4, at least some of the bump pads are directly coupled to a via. As such, at least some of the bump pads bypass a trace when coupled to a via. Moreover, the pads are configured to operate as both a bump pad and a via pad. FIG. 4 illustrates that the first interconnect 408 (e.g., bump pad) is directly coupled to a first via (not visible). Similarly, FIG. 4 illustrates that a second interconnect 418 (e.g., bump pad) is directly coupled to a second via (not visible). When the first interconnect 408 is directly coupled to the first via, the first interconnect bypasses any intermediate traces. Similarly, when the second interconnect 418 is directly coupled to the second via, the second interconnect bypasses any intermediate traces. Eliminating the intermediate trace between the interconnect and the via shortens the electrical path, thereby increasing the performance of the integrated circuit (IC) design, and also reduces the complexity of the IC design. [0054] FIG. 5 illustrates a profile view (e.g., side view) of the cross-section AA of the package substrate 400 of FIG. 4. As shown in FIG. 5, the first interconnect 408 and the second interconnect 418 are on a first surface of the substrate 402. In some implementations, the first interconnect 408 and the second interconnect 418 are bump interconnects (e.g., bump pads) configured to couple to a bump (e.g., interconnect pillar) from a die (e.g., flip chip). FIG. 5 illustrates that the substrate 402 includes a first via 508 and a second via 518. Each of the first via 508 and the second via 518 traverses the substrate 402. The first interconnect 408 is directly coupled to the first via 508. In some implementations, the size (e.g., lateral dimension) of the first interconnect 408 is the same or less than the cross-sectional size (e.g., lateral dimension) of the first via 508. The second interconnect 418 is directly coupled to the second via 518. In some implementations, the size (e.g., lateral dimension) of the second interconnect 418 is the same or less than the cross-sectional size (e.g., lateral dimension) of the second via 518. FIG. 5 also illustrates a first bump area 510 and a second bump area 520. In some implementations, the first bump area 510 corresponds to the size of the first interconnect 408. In some implementations, the second bump area 520 corresponds to the size of the second interconnect 418. [0055] In some implementations, the first via 508 includes a first metal layer and a second metal layer. In some implementations, the first metal layer is a seed metal layer. In some implementations, the first metal layer is an electroless metal layer.
In some implementations, the second via 518 includes a first metal layer and a second metal layer. In some implementations, the first metal layer is a seed metal layer. In some implementations, the first metal layer is an electroless metal layer. Examples of first and second metal layers for the vias are described in FIGS. 12A-12C. [0056] FIG. 6 illustrates how a die may be coupled to a package substrate. As shown in FIG. 6, a die 600 (e.g., flip chip, bare die) that includes a first bump 602 and a second bump 604 is coupled to the package substrate 400. The first bump 602 may include a first under bump metallization (UBM) layer, a first interconnect pillar (e.g., copper pillar), and a first solder ball. The second bump 604 may include a second under bump metallization (UBM) layer, a second interconnect pillar (e.g., copper pillar), and a second solder ball. The first bump 602 of the die 600 is coupled to the first interconnect 408. The second bump 604 of the die 600 is coupled to the second interconnect 418. As shown in FIG. 6, the first bump 602 is coupled to the first interconnect 408 such that the first bump 602 is vertically (e.g., partially, substantially, completely) over the first via 508 of the substrate 402. Similarly, the second bump 604 is coupled to the second interconnect 418 such that the second bump 604 is vertically (e.g., partially, substantially, completely) over the second via 518 of the substrate 402. In some implementations, the first bump 602 is coupled to the first interconnect 408 without short-circuiting an electrical signal that traverses the first interconnect 408. In some implementations, the second bump 604 is coupled to the second interconnect 418 without short-circuiting an electrical signal that traverses the second interconnect 418. [0057] As shown in FIG. 6, the first via 508 is located in the first bump area 510 of the substrate. Similarly, the second via 518 is located in the second bump area 520 of the substrate. In some implementations, a bump area is defined as an area of the substrate (e.g., area or portion of the substrate) that a bump from a die will couple to. The first bump area 510 includes portions of the first interconnect 408 that will couple with the first bump 602 of the die 600. In some implementations, the first via 508 is underneath the first bump area 510. The second bump area 520 includes portions of the second interconnect 418 that will couple with the second bump 604 of the die 600. In some implementations, the second via 518 is underneath the second bump area 520. [0058] Different implementations may have different positions and/or configurations for the vias and/or via pads in a package substrate. [0059] FIG. 7 illustrates a plan view (e.g., top view) of a package substrate 700 that includes a substrate 702 and several interconnects (e.g., interconnects 708, 718). An interconnect may include traces, pads and/or vias. The interconnects 708 and 718 are pads that are located on a first surface of the substrate 702. In some implementations, the interconnects 708 and 718 are via pads and bump pads. The interconnects 708 and 718 are coupled to vias (e.g., through substrate vias) in the substrate 702. These vias are not visible from the plan view since the interconnects 708 and 718 are the same size as the cross-section of the vias. In some implementations, the interconnects 708 and 718 may have a smaller cross-section than the cross-section of the vias.
In such instances, these vias may be visible from a plan view. [0060] The interconnects 708 and 718 may be configured to couple to bumps (e.g., interconnect pillars) of a die (which will be further described below in FIGS. 9 and 10). In some implementations, the interconnects 708 and 718 are peripheral bump pads that are located near an edge of a die area 720 of the substrate. In some implementations, the die area 720 of the substrate 700 is a bump area of the substrate 700. In some implementations, the bump area of the substrate 700 is an area of the substrate that a die covers or is located above the substrate when a die is coupled to the substrate. In some implementations, the size of the via in the substrate is preserved while reducing the size of the via pads (e.g., reducing the pitch of via pads) that are coupled to vias near an edge and/or periphery of a die area of the substrate. [0061] Different implementations may use different materials for the substrate 702. In some implementations, the substrate 702 is one of at least silicon, glass, ceramic, and/or dielectric. In some implementations, the package substrate 700 is configured to couple to one or more dies (e.g., flip chip). FIG. 7 also illustrates a first bump area and a second bump area. In some implementations, a bump area is a region or portion of a substrate that a bump (e.g., interconnect pillar) from a die will couple to when a die is coupled to the substrate. In some implementations, the first bump area corresponds to the area of the interconnect 708 (e.g., bump pad). In some implementations, the second bump area corresponds to the area of the interconnect 718 (e.g., bump pad). [0062] As further shown in FIG. 7, the interconnects (e.g., pads, traces) are arranged in the package substrate 700 along different rows and columns. Different implementations may use different spacing and/or pitch between interconnects. In some implementations, a pitch between two neighboring / adjacent interconnects is about 125 microns (μm) or less. In some implementations, a pitch between two neighboring / adjacent interconnects is about 80 microns (μm) or less. In some implementations, a pitch between two neighboring / adjacent interconnects is about 40 microns (μm) or more. In some implementations, a pitch is defined as a center to center distance between two adjacent / neighboring interconnects (e.g., traces, vias and/or pads). In some implementations, a pitch is defined as a center to center distance between two adjacent / neighboring traces, vias and/or pads, where the adjacent / neighboring traces, vias and/or pads are in a same column of traces, vias, and/or pads. In some implementations, a pitch is defined as a center to center distance between two adjacent / neighboring traces, vias and/or pads, where the adjacent / neighboring traces, vias and/or pads are in a same row of traces, vias, and/or pads. [0063] Each of the interconnects (e.g., pads, traces) of the package substrate 700 has at least one dimension (e.g., width, length, diameter). In some implementations, a first dimension (e.g., width) of a trace is the same or less than a first dimension (e.g., diameter) of a via. In some implementations, a first dimension (e.g., width) of a pad (e.g., via pad, bump pad) is the same or less than a first dimension (e.g., diameter) of a via. [0064] FIG. 8 illustrates how a pitch may be defined in some implementations. FIG.
8 illustrates a substrate that includes a first via pad 801, a second via pad 803, a third via pad 805, a fourth via pad 807, a first bump pad 811, a second bump pad 813, a third bump pad 815, a first interconnect 821, a second interconnect 823, and a third interconnect 825. FIG. 8 also illustrates a first pitch 830 and a second pitch 832. In some implementations, a first pitch (e.g., first pitch 830) is a center to center distance between two adjacent / neighboring interconnects (e.g., vias, traces, pads) on different rows or columns. For example, the first pitch 830 may be a center to center distance between the third via pad 805 and the second bump pad 813 or the second interconnect 823. In some implementations, the first pitch 830 may be about 40 microns (μm) or more. [0065] In some implementations, a second pitch (e.g., second pitch 832) is a center to center distance between two neighboring / adjacent interconnects (e.g., vias, traces, pads) on the same row or column. For example, the second pitch 832 may be a center to center distance between the second via pad 803 and the third via pad 805. In another example, the second pitch may be a center to center distance between the first bump pad 811 and the second bump pad 813. In another example, a second pitch may be a center to center distance between the second via pad 803 and the second bump pad 813 or the second interconnect 823. In another example, a first pitch may be a center to center distance between the second interconnect 823 and the third interconnect 825. In some implementations, the second pitch 832 may be about 80 microns (μm) or less. [0066] Different implementations may have different dimensions for the traces, vias, and/or via pads. For example, in some implementations, a trace may have a width of about 10 microns (μm) to 30 microns (μm). In some implementations, a via may have a width of about 50 microns (μm) to 75 microns (μm). In some implementations, a via pad may have a width of about 75 microns (μm) or less. It should be noted that the above dimensions are merely examples, and the dimensions of the traces, vias, and/or via pads in the present disclosure should not be limited to what is described. [0067] FIG. 9 illustrates a profile view (e.g., side view) of the cross-section BB of the package substrate 700 of FIG. 7 coupled to a die. As shown in FIG. 9, a package 900 includes a substrate 902, a die 904, a solder resist layer 906, and an underfill 908. In some implementations, the die 904 is a flip chip and/or a bare die. [0068] The package substrate 902 includes a first via 910, a second via 914, a third via 918, a first interconnect 920, a second interconnect 924, a third interconnect 928, a first pad 922, and a second pad 926. The solder resist layer 906 is coupled to a first surface of the substrate 902. The first interconnect 920, the second interconnect 924, the third interconnect 928, the first pad 922, and the second pad 926 are on the first surface of the substrate 902. [0069] The first interconnect 920 is coupled to the first via 910. The first interconnect 920 is a pad (e.g., bump pad, via pad). The first interconnect 920 has a first dimension (e.g., width) that is the same or less than a first dimension (e.g., width) of the first via 910. The second interconnect 924 is coupled to the second via 914. The second interconnect 924 is a pad (e.g., bump pad, via pad).
The second interconnect 924 has a first dimension (e.g., width) that is the same or less than a first dimension (e.g., width) of the second via 914. The third interconnect 928 is coupled to the third via 918. The third interconnect 928 is a pad (e.g., bump pad, via pad). The third interconnect 928 has a first dimension (e.g., width) that is the same or less than a first dimension (e.g., width) of the third via 918. The underfill 908 is between the substrate 902 and the die 904. [0070] The die 904 includes a first bump 930, a second bump 932, a third bump 934, a fourth bump 936, and a fifth bump 938. Each of the bumps may include at least an under bump metallization (UBM) layer, an interconnect pillar (e.g., copper pillar), and a solder ball. As shown in FIG. 9, the first bump 930 is coupled to the first interconnect 920 such that the first bump 930 is vertically (e.g., partially, substantially, completely) over the first via 910. The second bump 932 is coupled to the first pad 922. The third bump 934 is coupled to the second interconnect 924 such that the third bump 934 is vertically (e.g., partially, substantially, completely) over the second via 914. The fourth bump 936 is coupled to the second pad 926. The fifth bump 938 is coupled to the third interconnect 928 such that the fifth bump 938 is vertically (e.g., partially, substantially, completely) over the third via 918. In some implementations, the first bump 930 is coupled to the first interconnect 920 without short-circuiting the electrical signal that traverses the first interconnect 920. In some implementations, the second bump 932 is coupled to the first pad 922 without short-circuiting the electrical signal that traverses the first pad 922. In some implementations, the third bump 934 is coupled to the second interconnect 924 without short-circuiting the electrical signal that traverses the second interconnect 924. In some implementations, the fourth bump 936 is coupled to the second pad 926 without short-circuiting the electrical signal that traverses the second pad 926. In some implementations, the fifth bump 938 is coupled to the third interconnect 928 without short-circuiting the electrical signal that traverses the third interconnect 928. [0071] In some implementations, the first via 910 includes a first metal layer and a second metal layer. In some implementations, the first metal layer is a seed metal layer. In some implementations, the first metal layer is an electroless metal layer. In some implementations, the second via 914 includes a first metal layer and a second metal layer. In some implementations, the first metal layer is a seed metal layer. In some implementations, the first metal layer is an electroless metal layer. In some implementations, the third via 918 includes a first metal layer and a second metal layer. In some implementations, the first metal layer is a seed metal layer. In some implementations, the first metal layer is an electroless metal layer. Examples of first and second metal layers for the vias are described in FIGS. 12A-12C. [0072] FIG. 10 illustrates a profile view (e.g., side view) of the cross-section CC of the package substrate 700 of FIG. 7 coupled to a die. As shown in FIG. 10, a package 1000 includes a substrate 1002, a die 1004, a solder resist layer 1006, and an underfill 1008.
In some implementations, the die 1004 is a flip chip and/or a bare die. [0073] The package substrate 1002 includes a first via 1010, a second via 1012, a third via 1014, a fourth via 1016, a fifth via 1018, a first interconnect 1020, a second interconnect 1022, a third interconnect 1024, a fourth interconnect 1026, and a fifth interconnect 1028. The solder resist layer 1006 is coupled to a first surface of the substrate 1002. The first interconnect 1020, the second interconnect 1022, the third interconnect 1024, the fourth interconnect 1026, and the fifth interconnect 1028 are on the first surface of the substrate 1002. [0074] The first interconnect 1020 is coupled to the first via 1010. The first interconnect 1020 is a pad (e.g., bump pad, via pad). The first interconnect 1020 has a first dimension (e.g., width) that is the same or less than a first dimension (e.g., width) of the first via 1010. The second interconnect 1022 is coupled to the second via 1012. The second interconnect 1022 is a pad (e.g., bump pad, via pad). The second interconnect 1022 has a first dimension (e.g., width) that is the same or less than a first dimension (e.g., width) of the second via 1012. The third interconnect 1024 is coupled to the third via 1014. The third interconnect 1024 is a pad (e.g., bump pad, via pad). The third interconnect 1024 has a first dimension (e.g., width) that is the same or less than a first dimension (e.g., width) of the third via 1014. The fourth interconnect 1026 is coupled to the fourth via 1016. The fourth interconnect 1026 is a pad (e.g., bump pad, via pad). The fourth interconnect 1026 has a first dimension (e.g., width) that is the same or less than a first dimension (e.g., width) of the fourth via 1016. The fifth interconnect 1028 is coupled to the fifth via 1018. The fifth interconnect 1028 is a pad (e.g., bump pad, via pad). The fifth interconnect 1028 has a first dimension (e.g., width) that is the same or less than a first dimension (e.g., width) of the fifth via 1018. The underfill 1008 is between the substrate 1002 and the die 1004. [0075] The die 1004 includes a first bump 1030, a second bump 1032, a third bump 1034, a fourth bump 1036, and a fifth bump 1038. Each of the bumps may include at least an under bump metallization (UBM) layer, an interconnect pillar (e.g., copper pillar), and a solder ball. As shown in FIG. 10, the first bump 1030 is coupled to the first interconnect 1020 such that the first bump 1030 is vertically (e.g., partially, substantially, completely) over the first via 1010. The second bump 1032 is coupled to the second interconnect 1022 such that the second bump 1032 is vertically (e.g., partially, substantially, completely) over the second via 1012. The third bump 1034 is coupled to the third interconnect 1024 such that the third bump 1034 is vertically (e.g., partially, substantially, completely) over the third via 1014. The fourth bump 1036 is coupled to the fourth interconnect 1026 such that the fourth bump 1036 is vertically (e.g., partially, substantially, completely) over the fourth via 1016. The fifth bump 1038 is coupled to the fifth interconnect 1028 such that the fifth bump 1038 is vertically (e.g., partially, substantially, completely) over the fifth via 1018. [0076] In some implementations, the first via 1010 includes a first metal layer and a second metal layer. In some implementations, the first metal layer is a seed metal layer. In some implementations, the first metal layer is an electroless metal layer.
Examples of first and second metal layers for the vias are described in FIGS. 12A-12C.
[0077] FIG. 11 conceptually illustrates an example of a die 1100 (which is a form of an integrated device). In some implementations, the die 1100 may correspond to the flip chip 600 of FIG. 6. As shown in FIG. 11, the die 1100 (e.g., integrated device, bare die) includes a substrate 1101, several lower level metal layers and dielectric layers 1102, a first pad 1104, a second pad 1106, a passivation layer 1108, a first insulation layer 1110, a first under bump metallization (UBM) layer 1112, a second under bump metallization (UBM) layer 1114, a first interconnect 1116 (e.g., first pillar interconnect), a second interconnect 1118 (e.g., second pillar interconnect), a first solder ball 1126, and a second solder ball 1128. In some implementations, the first UBM layer 1112, the first interconnect 1116, and the first solder ball 1126 may collectively be referred to as a first bump for the die 1100. In some implementations, the second UBM layer 1114, the second interconnect 1118, and the second solder ball 1128 may collectively be referred to as a second bump for the die 1100.
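Because each bump is described as a stack of a UBM layer, a pillar interconnect, and a solder ball, its nominal standoff can be approximated by summing the layer heights. The Python sketch below is a simplified model under that assumption; the class name, field names, and example thicknesses are invented here, and real bump height additionally depends on reflow collapse and UBM geometry.

from dataclasses import dataclass

@dataclass
class Bump:
    # One die bump per the description of FIG. 11: UBM layer +
    # pillar interconnect (e.g., copper pillar) + solder ball.
    ubm_um: float
    pillar_um: float
    solder_um: float

    def standoff_um(self) -> float:
        # Simplified pre-reflow height: the sum of the three layers.
        return self.ubm_um + self.pillar_um + self.solder_um

if __name__ == "__main__":
    bump = Bump(ubm_um=3.0, pillar_um=40.0, solder_um=25.0)  # hypothetical values
    print(f"approximate bump standoff: {bump.standoff_um()} um")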
[0078] Having provided several exemplary substrates that include vias under a bump area, a sequence for providing / manufacturing a substrate that includes a via under a bump area will now be described below.
Exemplary Sequence for Providing a Substrate That Includes a Via in a Bump Area
[0079] FIG. 12 (which includes FIGS. 12A-12C) illustrates an exemplary sequence for providing / manufacturing / fabricating a substrate that includes a via under a bump area. It should be noted that for the purpose of clarity and simplification, the processes of FIGS. 12A-12C do not necessarily include all the steps and/or stages of manufacturing a substrate. Moreover, in some instances, several steps and/or stages may have been combined into a single step and/or stage in order to simplify the description of the processes. It should also be noted that the shapes of the patterns, pattern features, and components (e.g., composite conductive traces, vias) in FIGS. 12A-12C are merely conceptual illustrations and are not intended to necessarily represent the actual shape and form of the patterns, pattern features, and components. In some implementations, the sequence of FIGS. 12A-12C illustrates a process that can fabricate traces, vias, and/or via pads having dimensions that are described in the present disclosure (e.g., dimensions described in FIG. 8).
[0080] As shown in FIG. 12A, a substrate (e.g., substrate 1202) is provided (at stage 1). In some implementations, providing a substrate may include fabricating (e.g., forming) a substrate or receiving a substrate from a supplier. Different implementations may use different materials for the substrate. In some implementations, the substrate may include one of at least silicon, glass, ceramic and/or dielectric. In some implementations, the substrate may include several layers (e.g., a laminate substrate that includes a core layer and several prepreg layers).
[0081] Next, several cavities are provided (at stage 2) in the substrate. As shown at stage 2, a first cavity 1203, a second cavity 1205, and a third cavity 1207 are provided in the substrate 1202. The first cavity 1203, the second cavity 1205, and the third cavity 1207 traverse the substrate 1202. Different implementations may use different manufacturing processes for providing (e.g., forming, creating) the cavities. In some implementations, the cavities are provided (at stage 2) using a laser etching process.
[0082] The wall surfaces of the cavities are plated (at stage 3) with a metal layer. As shown at stage 3, a first metal layer 1204 is plated on the wall surface of the first cavity 1203, a second metal layer 1206 is plated on the wall surface of the second cavity 1205, and a third metal layer 1208 is plated on the wall surface of the third cavity 1207. In some implementations, the first metal layer 1204, the second metal layer 1206, and the third metal layer 1208 are seed layers (e.g., electroless metal layers). In some implementations, providing (e.g., forming, creating) the metal layers on the walls of the cavities includes using an electroless copper plating process.
[0083] As shown in FIG. 12B, a dry film layer (e.g., dry film 1210) is provided (at stage 4) on a first surface of the substrate (e.g., substrate 1202). Next, several openings are provided (at stage 5) in the dry film layer. As shown in stage 5, a first opening 1213, a second opening 1215, a third opening 1217, a fourth opening 1211, and a fifth opening 1219 are provided in the dry film layer 1210. Different implementations may provide (e.g., form, create) the openings differently. In some implementations, the openings are provided using exposure and development techniques. In some implementations, the openings have a dimension (e.g., width) that is equal to or less than that of the cavities in the substrate.
[0084] Several metal layers are then provided (at stage 6) in the substrate. As shown at stage 6, the first cavity 1203 is filled with metal to form the first via 1232, the second cavity 1205 is filled with metal to form the second via 1234, and the third cavity 1207 is filled with metal to form the third via 1236. In some implementations, the first via 1232, the second via 1234, and the third via 1236 each include a first metal layer and a second metal layer. In some implementations, the first metal layer of the vias (e.g., via 1232) is a seed layer (e.g., metal layer 1204). In some implementations, the second metal layer of the vias (e.g., via 1234) is a copper metal layer that is coupled to the first metal layer.
[0085] In addition, the first opening 1213, the second opening 1215, the third opening 1217, the fourth opening 1211, and the fifth opening 1219 are filled with metal to respectively form a first interconnect 1222, a second interconnect 1224, a third interconnect 1226, a fourth interconnect 1221, and a fifth interconnect 1229. In some implementations, providing (at stage 6) the metal includes using an electrolytic plating process. In some implementations, the interconnects 1221, 1222, 1224, 1226, and 1229 have a size (e.g., lateral dimension) that is the same as or less than the cross-sectional size of a via.
For example, in some implementations, the size (e.g., width) of the interconnect 1222 is the same as or less than the cross-sectional size (e.g., width) of the via 1232.
[0086] As shown in FIG. 12C, the dry film layer (e.g., dry film 1210) is removed (at stage 7). In some implementations, removing (at stage 7) the dry film includes etching away any remaining dry film.
[0087] A solder resist layer (e.g., solder resist 1240) is selectively provided (at stage 8) on the substrate. Different implementations may selectively provide the solder resist layer differently. In some implementations, selectively providing (e.g., forming, creating) the solder resist layer includes providing the solder resist layer, flash etching, and/or back end processing.
[0088] A die is provided and coupled (at stage 9) to the substrate. In some implementations, the die is a flip chip. The die includes several bumps. As shown at stage 9, the die is coupled to the substrate such that a first bump is coupled to a first interconnect, where the first bump is at least partially vertically over the first via. Stage 9 also illustrates an under fill 1260 between the substrate 1202 and the die 1250.
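The nine stages of FIGS. 12A-12C can be summarized as an ordered sequence. The short Python sketch below merely walks through that order as a checklist; the stage wording paraphrases the text above, and nothing in it is a process recipe or tool setting.

STAGES_FIG_12 = (
    (1, "provide a substrate (e.g., silicon, glass, ceramic, or dielectric)"),
    (2, "form cavities that traverse the substrate (e.g., laser etching)"),
    (3, "plate the cavity walls with a seed layer (e.g., electroless copper)"),
    (4, "apply a dry film layer on the first surface"),
    (5, "open the dry film (exposure and development)"),
    (6, "electrolytically plate: cavities become vias, openings become interconnects"),
    (7, "remove the remaining dry film"),
    (8, "selectively provide the solder resist layer"),
    (9, "couple the die so each bump is at least partially over its via; apply under fill"),
)

def print_sequence() -> None:
    # Print the stages in fabrication order.
    for number, action in STAGES_FIG_12:
        print(f"stage {number}: {action}")

if __name__ == "__main__":
    print_sequence()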
Exemplary Package Substrate Comprising Via Pad In Bump Area
[0089] FIG. 13 illustrates a profile view (e.g., side view) of a package substrate. As shown in FIG. 13, a package 1300 includes a substrate 1302, a die 1304, a solder resist layer 1306, and an under fill 1308. In some implementations, the die 1304 is a flip chip.
[0090] The package substrate 1302 includes a first via 1310, a second via 1312, a third via 1314, a fourth via 1316, a fifth via 1318, a first interconnect 1320, a second interconnect 1322, a third interconnect 1324, a fourth interconnect 1326, and a fifth interconnect 1328. The solder resist layer 1306 is coupled to a first surface of the substrate 1302. The first interconnect 1320, the second interconnect 1322, the third interconnect 1324, the fourth interconnect 1326, and the fifth interconnect 1328 are on the first surface of the substrate 1302.
[0091] The first interconnect 1320 is coupled to the first via 1310, the second interconnect 1322 to the second via 1312, the third interconnect 1324 to the third via 1314, the fourth interconnect 1326 to the fourth via 1316, and the fifth interconnect 1328 to the fifth via 1318. Each of the interconnects 1320, 1322, 1324, 1326, and 1328 is a pad (e.g., bump pad, via pad) and has a first dimension (e.g., width) that is the same as or less than a first dimension (e.g., width) of the via to which it is coupled. The under fill 1308 is between the substrate 1302 and the die 1304.
[0092] The die 1304 includes a first bump 1330, a second bump 1332, a third bump 1334, a fourth bump 1336, and a fifth bump 1338. Each of the bumps may include at least an under bump metallization (UBM) layer, an interconnect pillar (e.g., copper pillar), and a solder ball. As shown in FIG. 13, each bump is coupled to the corresponding interconnect such that the bump is vertically (e.g., partially, substantially, completely) over the corresponding via: the first bump 1330 over the first via 1310, the second bump 1332 over the second via 1312, the third bump 1334 over the third via 1314, the fourth bump 1336 over the fourth via 1316, and the fifth bump 1338 over the fifth via 1318.
[0093] In some implementations, the first via 1310 includes a first metal layer 1311 and a second metal layer 1313. In some implementations, the first metal layer 1311 is a seed metal layer. In some implementations, the first metal layer 1311 is an electroless metal layer.
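A via built this way therefore carries two distinct metal layers: a thin first (seed / electroless) layer on the cavity wall and a thicker second (electrolytic) layer that fills the cavity. The Python sketch below models only that ordering constraint; the class name, field names, and thickness values are assumptions made for illustration, not dimensions from this disclosure.

from dataclasses import dataclass

@dataclass
class TwoLayerVia:
    seed_um: float  # first metal layer (seed, e.g., electroless copper)
    fill_um: float  # second metal layer (electrolytic copper fill)

    def layers_consistent(self) -> bool:
        # The seed layer exists and is thin relative to the fill.
        return 0.0 < self.seed_um < self.fill_um

if __name__ == "__main__":
    via = TwoLayerVia(seed_um=0.5, fill_um=25.0)  # hypothetical thicknesses
    print("seed/fill ordering plausible:", via.layers_consistent())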
Exemplary Method for Providing a Substrate That Includes a Via in a Bump Area
[0094] FIG. 14 illustrates an exemplary method for providing / manufacturing / fabricating a substrate that includes a via under a bump area. It should be noted that for the purpose of clarity and simplification, the processes of FIG. 14 do not necessarily include all the steps and/or stages of manufacturing a substrate. Moreover, in some instances, several steps and/or stages may have been combined into a single step and/or stage in order to simplify the description of the processes. It should also be noted that the shapes of the patterns, pattern features, and components (e.g., composite conductive traces, vias) in FIG. 14 are merely conceptual illustrations and are not intended to necessarily represent the actual shape and form of the patterns, pattern features, and components.
[0095] As shown in FIG. 14, a method provides (at 1405) a substrate. In some implementations, providing (at 1405) a substrate may include fabricating (e.g., forming) a substrate or receiving a substrate from a supplier. Different implementations may use different materials for the substrate. In some implementations, the substrate may include one of at least silicon, glass, ceramic and/or dielectric. In some implementations, the substrate may include several layers (e.g., a laminate substrate that includes a core layer and several prepreg layers).
[0096] Next, the method provides (at 1410) at least one cavity in the substrate. For example, the method may provide a first cavity 1203, a second cavity 1205, and a third cavity 1207 in the substrate 1202, as shown in stage 2 of FIG. 12A. The first cavity 1203, the second cavity 1205, and the third cavity 1207 traverse the substrate 1202. Different implementations may use different manufacturing processes for providing (e.g., forming, creating) the cavities. In some implementations, the cavities are provided (at 1410) using a laser etching process.
[0097] The method then provides (at 1415) a first metal layer on the wall of at least one cavity. In some implementations, providing (e.g., forming) the first metal layer includes plating the wall surfaces of the cavities with a metal layer. In some implementations, the first metal layer is an electroless seed metal layer. Stage 3 of FIG. 12A illustrates an example of providing the first metal layer. As shown in stage 3 of FIG. 12A, a first metal layer 1204 is plated on the wall surface of the first cavity 1203, a second metal layer 1206 is plated on the wall surface of the second cavity 1205, and a third metal layer 1208 is plated on the wall surface of the third cavity 1207. In some implementations, providing (e.g., forming, creating) the metal layers on the walls of the cavities includes using an electroless copper plating process.
[0098] The method further provides (at 1420) a resist layer on the substrate. In some implementations, the resist layer is a dry film layer. However, different implementations may use different materials for the resist layer. Stage 4 of FIG. 12B illustrates an example of providing a dry film layer (e.g., dry film 1210) on a first surface of the substrate (e.g., substrate 1202).
[0099] The method then provides (at 1425) at least one cavity (e.g., opening) in the resist layer. Stage 5 of FIG. 12B illustrates an example of at least one cavity formed in a resist layer (e.g., dry film layer). As shown in stage 5, a first opening 1213, a second opening 1215, a third opening 1217, a fourth opening 1211, and a fifth opening 1219 are provided in the dry film layer 1210. Different implementations may provide (e.g., form, create) the openings differently. In some implementations, the openings are provided using exposure and development techniques. In some implementations, the openings have a dimension (e.g., width) that is equal to or less than that of the cavities in the substrate.
[00100] The method further provides (at 1430) a second metal layer on the substrate. In some implementations, at least some of the second metal layer is provided on the first metal layer. Stage 6 of FIG. 12B illustrates an example of providing a second metal layer. As shown at stage 6, the first cavity 1203 is filled with metal to form the first via 1232, the second cavity 1205 is filled with metal to form the second via 1234, and the third cavity 1207 is filled with metal to form the third via 1236. In addition, the first opening 1213, the second opening 1215, the third opening 1217, the fourth opening 1211, and the fifth opening 1219 are filled with metal to respectively form a first interconnect 1222, a second interconnect 1224, a third interconnect 1226, a fourth interconnect 1221, and a fifth interconnect 1229. In some implementations, providing (at 1430) the metal includes using an electrolytic plating process. In some implementations, the interconnects 1221, 1222, 1224, 1226, and 1229 have a size (e.g., lateral dimension) that is the same as or less than the cross-sectional size of a via. For example, in some implementations, the size (e.g., width) of the interconnect 1222 is the same as or less than the cross-sectional size (e.g., width) of the via 1232.
[00101] The method then removes (at 1435) the resist layer. Stage 7 of FIG. 12C illustrates an example of removing the resist layer. As shown in stage 7 of FIG. 12C, the dry film layer (e.g., dry film 1210) is removed. In some implementations, removing the dry film includes etching away any remaining dry film.
[00102] The method further selectively provides (at 1440) a solder resist layer. Stage 8 of FIG. 12C illustrates an example of selectively providing a solder resist layer. As shown in stage 8 of FIG. 12C, a solder resist layer 1240 is selectively provided on the substrate. Different implementations may selectively provide the solder resist layer differently. In some implementations, selectively providing (e.g., forming, creating) the solder resist layer includes providing the solder resist layer, flash etching, and/or back end processing.
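The method of FIG. 14 can likewise be read as a fixed order of operations. In the Python sketch below, each function simply names one step together with its reference numeral (1405-1440); the function names are invented for this sketch, and the bodies are placeholders rather than process actions.

def provide_substrate():      print("(at 1405) provide a substrate")
def form_cavities():          print("(at 1410) provide at least one cavity (e.g., laser etch)")
def plate_first_metal():      print("(at 1415) first metal layer on cavity walls (electroless seed)")
def apply_resist():           print("(at 1420) provide a resist layer (e.g., dry film)")
def open_resist():            print("(at 1425) open the resist (exposure and development)")
def plate_second_metal():     print("(at 1430) second metal layer (electrolytic vias and interconnects)")
def remove_resist():          print("(at 1435) remove the resist layer")
def apply_solder_resist():    print("(at 1440) selectively provide a solder resist layer")

METHOD_1400 = (
    provide_substrate, form_cavities, plate_first_metal, apply_resist,
    open_resist, plate_second_metal, remove_resist, apply_solder_resist,
)

if __name__ == "__main__":
    for step in METHOD_1400:
        step()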
Exemplary Flow Diagram for Plating Process
[00103] FIG. 15 illustrates a flow diagram for a modified semi-additive processing (mSAP) patterning process for manufacturing a substrate. FIG. 15 will be described with reference to FIG. 16, which illustrates a sequence of a layer (e.g., core layer, prepreg layer) of a substrate during the mSAP process of some implementations.
[00104] As shown in FIG. 15, the process 1500 may start by thinning (at 1505) a metal layer (e.g., copper composite material) on a dielectric layer. The dielectric layer may be a core layer or a prepreg layer of the substrate. In some implementations, the metal layer is thinned to a thickness of about 3-5 microns (μm). The thinning of the metal layer is illustrated in stage 1 of FIG. 16, which illustrates a dielectric layer 1602 that includes a thin copper layer 1604 (which may be a copper composite material). In some implementations, the metal layer may already be thin enough. For example, in some implementations, the core layer or dielectric layer may be provided with a thin copper foil. As such, some implementations may bypass / skip thinning the metal layer of the core layer / dielectric layer. In addition, in some implementations, electroless copper seed layer plating may be performed to cover the surface of any drilled vias in one or more dielectric layers.
[00105] Next, the process applies (at 1510) a dry film resist (DFR) and a pattern is created (at 1515) on the DFR. Stage 2 of FIG. 16 illustrates a DFR 1606 being applied on top of the thinned metal layer 1604, while stage 3 of FIG. 16 illustrates the patterning of the DFR 1606. As shown in stage 3, the patterning creates openings 1608 in the DFR 1606.
[00106] After patterning (at 1515) the DFR, the process then electrolytically plates (at 1520) a copper material (e.g., copper composite) through the pattern of the DFR. In some implementations, electrolytic plating comprises dipping the dielectric and the metal layer in a bath solution. Referring to FIG. 16, stage 4 illustrates copper materials (e.g., copper composite) 1610 being plated in the openings 1608 of the DFR 1606.
[00107] Referring back to FIG. 15, the process removes (at 1525) the DFR, selectively etches (at 1530) the copper foil material (e.g., copper composite) to isolate the features (e.g., create components such as vias, composite conductive traces, and/or pads), and ends. Referring to FIG. 16, stage 5 illustrates the removal of the DFR 1606, while stage 6 illustrates the defined features after the etching process. The above process of FIG. 15 may be repeated for each core layer or prepreg layer (dielectric layer) of the substrate. Having described one plating process, another plating process will now be described.
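Before turning to it, one way to see what the mSAP flow just described does to the conductor is to track only the copper thickness on the dielectric through steps 1505-1530. The Python sketch below is a deliberately crude model: the thinning target, foil, and plating thicknesses are placeholder values, and the final isolation etch is approximated as removing the base-foil thickness (it also slightly thins the plated features, which this model ignores).

def msap_copper_thickness(foil_um: float, plated_um: float, thin_target_um: float = 4.0) -> float:
    # (at 1505) thin the starting foil, unless it is already thin enough
    cu = min(foil_um, thin_target_um)
    base = cu
    # (at 1510 / 1515) apply and pattern the DFR: no copper change
    # (at 1520) electrolytically plate copper through the DFR pattern
    cu += plated_um
    # (at 1525) remove the DFR
    # (at 1530) selective etch clears the base foil between features;
    # approximated here as subtracting the base thickness
    return cu - base

if __name__ == "__main__":
    print("final trace copper (um):", msap_copper_thickness(foil_um=18.0, plated_um=15.0))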
[00108] FIG. 17 illustrates a flow diagram for a semi-additive processing (SAP) patterning process for manufacturing a substrate. FIG. 17 will be described with reference to FIG. 18, which illustrates a sequence of a layer (e.g., core layer, prepreg layer) of a substrate during the SAP process of some implementations.
[00109] As shown in FIG. 17, the process 1700 may start by providing (at 1705) a dielectric layer that includes a copper layer and a primer layer (e.g., a primer coated copper foil). In some implementations, the copper foil is coated with primer and then pressed on the uncured core to form the structure. The dielectric layer may be a core layer or a prepreg layer of a substrate. As shown in stage 1 of FIG. 18, the primer 1804 is located between the copper foil 1806 and the dielectric 1802. The copper foil 1806 may be a copper composite foil in some implementations.
[00110] Next, the process drills (at 1710) the dielectric layer (e.g., core layer, prepreg layer) to create one or more openings / pattern features (e.g., via pattern features). This may be done to form one or more vias / via features that connect the front and back sides of the dielectric. In some implementations, the drilling may be performed by a laser drilling operation. Moreover, in some implementations, the drilling may traverse one or more of the metal layers (e.g., the primer coated copper foil). In some implementations, the process may also clean the openings / pattern features (e.g., via patterns) created by the drilling operation by, for example, de-smearing (at 1712) the drilled vias / openings on the layer (e.g., core layer).
[00111] The process then etches off (at 1715) the copper foil, leaving the primer on the dielectric layer (which is shown in stage 2 of FIG. 18). Next, the process electroless plates (at 1720) a copper seed layer (e.g., copper material) on the primer in some implementations. The thickness of the copper seed layer in some implementations is about 0.1-1 microns (μm). Stage 3 of FIG. 18 illustrates a copper seed layer 1808 on the primer 1804.
[00112] Next, the process applies (at 1725) a dry film resist (DFR) and a pattern is created (at 1730) on the DFR. Stage 4 of FIG. 18 illustrates a DFR 1810 being applied on top of the copper seed layer 1808, while stage 5 of FIG. 18 illustrates the patterning of the DFR 1810. As shown in stage 5, the patterning creates openings 1812 in the DFR 1810.
[00113] After patterning (at 1730) the DFR, the process then electrolytically plates (at 1735) a copper material (e.g., copper composite material) through the pattern of the DFR. In some implementations, electrolytic plating comprises dipping the dielectric and the metal layer in a bath solution. Referring to FIG. 18, stage 6 illustrates copper composite materials 1820 being plated in the openings 1812 of the DFR 1810.
[00114] Referring back to FIG. 17, the process removes (at 1740) the DFR, selectively etches (at 1745) the copper seed layer to isolate the features (e.g., create vias, traces, pads), and ends. Referring to FIG. 18, stage 7 illustrates the removal of the DFR 1810, while stage 8 illustrates the defined features (e.g., composite conductive trace) after the etching process.
[00115] The above process of FIG. 17 may be repeated for each core layer or prepreg layer (dielectric layer) of the substrate.
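For comparison with the mSAP flow, the SAP flow of FIG. 17 can be listed step by step with its reference numerals. The Python sketch below is only such a checklist; the step wording paraphrases the text above, and the closing comment restates the trade-off noted below.

SAP_STEPS_FIG_17 = (
    (1705, "provide a dielectric with a primer coated copper foil"),
    (1710, "drill openings / via pattern features (e.g., laser drilling)"),
    (1712, "de-smear the drilled vias / openings"),
    (1715, "etch off the copper foil, leaving the primer"),
    (1720, "electroless plate a copper seed layer (about 0.1-1 um) on the primer"),
    (1725, "apply a dry film resist (DFR)"),
    (1730, "pattern the DFR"),
    (1735, "electrolytically plate copper through the DFR pattern"),
    (1740, "remove the DFR"),
    (1745, "selectively etch the seed layer to isolate the features"),
)

if __name__ == "__main__":
    for ref, action in SAP_STEPS_FIG_17:
        print(f"(at {ref}) {action}")
    # Only the thin seed layer must be etched away to isolate features,
    # which is why SAP can form finer features than mSAP.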
[00116] In some implementations, the SAP process may allow for finer / smaller feature (e.g., trace, via, pad) formation, since the SAP process does not require as much etching to isolate features. However, it should be noted that the mSAP process is cheaper than the SAP process in some implementations. In some implementations, the above processes may be used to produce interstitial via holes (IVH) and/or blind via holes (BVH) in substrates.
[00117] The plating processes of FIGS. 15 and 17 may be conceptually simplified to the plating process of FIG. 19 in some implementations. FIG. 19 illustrates a flow diagram for a plating method for manufacturing a substrate. As shown in FIG. 19, the method electrolytically plates (at 1905) a copper material (e.g., copper composite) through a pattern in a dry film resist (DFR) on a layer of a substrate. The layer may be a dielectric layer. The layer may be a core layer or a prepreg layer of the substrate. In some implementations, the copper (e.g., copper composite) is plated over a copper seed layer, which was previously deposited on the layer (e.g., when using a SAP process). In some implementations, the copper (e.g., copper composite) is plated over a copper foil layer, which was previously on the layer (e.g., when using an mSAP process). The copper foil layer may be a copper composite material in some implementations.
[00118] Next, the method removes (at 1910) the DFR from the layer. In some implementations, removing the DFR may include chemically removing the DFR. After removing (at 1910) the DFR, the method selectively etches (at 1915) the foil or seed layer to isolate / define the features of the layer, and ends. As described above, the foil may be a copper composite material.
[00119] In some implementations, a nickel alloy may be added (e.g., plated) over some or all of a copper layer (e.g., copper foil) during an mSAP process (e.g., the methods of FIGS. 15 and 17). Similarly, a nickel alloy may also be added (e.g., plated) over some or all of a copper layer (e.g., copper foil) during a subtractive process.
Exemplary Electronic Devices
[00120] FIG. 20 illustrates various electronic devices that may be integrated with any of the aforementioned integrated devices (e.g., semiconductor devices), integrated circuits, dies, interposers and/or packages. For example, a mobile telephone 2002, a laptop computer 2004, and a fixed location terminal 2006 may include an integrated device 2000 as described herein. The integrated device 2000 may be, for example, any of the integrated devices, integrated circuits, dice or packages described herein. The devices 2002, 2004, 2006 illustrated in FIG. 20 are merely exemplary. Other electronic devices may also feature the integrated device 2000, including, but not limited to, mobile devices, hand-held personal communication systems (PCS) units, portable data units such as personal digital assistants, GPS enabled devices, navigation devices, set top boxes, music players, video players, entertainment units, fixed location data units such as meter reading equipment, communications devices, smartphones, tablet computers or any other device that stores or retrieves data or computer instructions, or any combination thereof.
[00121] One or more of the components, steps, features, and/or functions illustrated in FIGS. 4, 5, 6, 7, 8, 9, 10, 11, 12A-12C, 13, 14, 15, 16, 17, 18, 19 and/or 20 may be rearranged and/or combined into a single component, step, feature or function, or embodied in several components, steps, or functions. Additional elements, components, steps, and/or functions may also be added without departing from the disclosure.
[00122] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration."
Any implementation or aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects of the disclosure. Likewise, the term "aspects" does not require that all aspects of the disclosure include the discussed feature, advantage or mode of operation. The term "coupled" is used herein to refer to the direct or indirect coupling between two objects. For example, if object A physically touches object B, and object B touches object C, then objects A and C may still be considered coupled to one another, even if they do not directly physically touch each other.
[00123] Also, it is noted that the embodiments may be described as a process that is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed.
[00124] The various features of the disclosure described herein can be implemented in different systems without departing from the disclosure. It should be noted that the foregoing aspects of the disclosure are merely examples and are not to be construed as limiting the disclosure. The description of the aspects of the present disclosure is intended to be illustrative, and not to limit the scope of the claims. As such, the present teachings can be readily applied to other types of apparatuses, and many alternatives, modifications, and variations will be apparent to those skilled in the art.